Testing SDKs at Twilio

July 26, 2016
Written by
Twilio
Twilion


The SDK Engineering group at Twilio is responsible for building and maintaining two real-time communications products: Twilio Client, a set of web and mobile SDKs for making and receiving VoIP calls, and Programmable Video, a set of SDKs for adding video chat capabilities to developers’ apps.

Because our teams build both the back-end infrastructure that supports these real-time communications services and the developer SDKs themselves, we’ve seen first-hand how the testing requirements for building video SDKs differ vastly from those for typical REST services. We’ve learned a few things along the way:

Firstly, mobile development comes with its own set of test dimensions and special considerations.

Secondly, development of real-time media has unique requirements of its own.

To address all these requirements, we have developed a unique and comprehensive testing approach that allows us to iterate quickly and ship with confidence.

How We Test SDKs In Collaboration With TestDevLab


As the base of our testing framework, we have taken the same layered approach we use for testing our services: several layers of tests at increasingly higher levels of scope, from unit tests that exercise individual methods at the very bottom to end-to-end tests that verify complete flows through the system at the very top.
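To make the bottom layer of that pyramid concrete, here is a minimal unit test sketch; the helper function, its name, and its behavior are hypothetical illustrations, not Twilio's actual API:

```python
import unittest

# Hypothetical SDK helper; the name and behavior are illustrative only.
def encode_call_params(to, from_):
    """Build the parameter dict a call-setup request would send."""
    if not to or not from_:
        raise ValueError("both endpoints are required")
    return {"To": to, "From": from_}

class EncodeCallParamsTest(unittest.TestCase):
    # Bottom of the pyramid: exercises one method in complete isolation,
    # with no network, device, or media stack involved.
    def test_builds_params(self):
        self.assertEqual(encode_call_params("alice", "bob"),
                         {"To": "alice", "From": "bob"})

    def test_rejects_missing_endpoint(self):
        with self.assertRaises(ValueError):
            encode_call_params("", "bob")
```

End-to-end tests at the top of the pyramid cover the same code paths, but through a real call placed between two clients.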

To this base we have added tests for the various scenarios particular to mobile development, such as handling network handover between WiFi and LTE, or handling applications going into and out of the background.

We pay special attention to resource consumption and have developed tests that measure both the bandwidth and battery consumption of our SDKs under various network conditions.
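For bandwidth, one simple approach is to sample an interface's byte counters at the start and end of a call and compute the average rate; this sketch assumes the counters are read elsewhere (on a device they would come from the OS):

```python
def average_bandwidth_kbps(bytes_start, bytes_end, seconds):
    """Average throughput in kilobits per second over a measurement window."""
    if seconds <= 0:
        raise ValueError("duration must be positive")
    # bytes -> bits -> kilobits, averaged over the call duration
    return (bytes_end - bytes_start) * 8 / 1000 / seconds

# Example: 3 MB transferred over a 60-second call averages 400 kbps.
rate = average_bandwidth_kbps(0, 3_000_000, 60)
```

Battery tests follow the same before/after pattern, sampling the device's charge level across a long-running call.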

One of the challenges of mobile development is testing on a wide variety of devices, especially in the Android ecosystem. To that end we have established a test lab with a fleet of devices we use to test our software on current and upcoming versions of mobile operating systems. Likewise, for the JavaScript libraries we have built automation that detects new development, beta, and stable versions of the leading browsers and runs our regression suite against each new version as it appears.
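Detecting untested browser builds reduces to comparing the versions we last tested against the latest published ones; the release-feed plumbing is omitted here and the version data is made up for illustration:

```python
def find_new_versions(known, latest):
    """Return (browser, channel, version) tuples not yet covered by tests.

    Both arguments map browser name -> {channel: version}; how `latest`
    gets populated from each vendor's release feed is out of scope here.
    """
    untested = []
    for browser, channels in latest.items():
        for channel, version in channels.items():
            if known.get(browser, {}).get(channel) != version:
                untested.append((browser, channel, version))
    return untested

# Illustrative data: a new Chrome beta has shipped since the last test run.
known = {"chrome": {"stable": "52.0", "beta": "53.0"}}
latest = {"chrome": {"stable": "52.0", "beta": "54.0"}}
# find_new_versions(known, latest) flags the beta build, and the automation
# would kick off a regression run against it.
```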

Lastly, we have developed a series of tests for measuring and ensuring media quality. We use standard algorithms to analyze audio and video samples and automatically compute a mean opinion score (MOS) quality rating. In addition, we use software tools that comply with the PEVQ and POLQA standards and are based on perceptual models of human hearing and vision. This gives us meaningful insight into the overall quality of media transmission and the quality of service as we change key aspects of our media stack. These measurements also allow us to assess how users would experience our SDKs under various network conditions.

Since comprehensive testing of new code in mobile environments involves so many dimensions and scenarios, we have found that automation is absolutely critical. We have also found that the quality engineering effort is significant enough to warrant its own team, so we have built one dedicated solely to quality engineering of the web and mobile SDKs.

One aspect that has made our quality engineering efforts especially successful has been working with TestDevLab. TestDevLab has experience with both REST API and mobile development, as well as particular expertise in media quality and performance testing.

Working with TestDevLab has also allowed us to use a follow-the-sun model, keeping the development process moving forward 24 hours a day.