
What’s New in Programmable Video: Rooms API, Better iOS Integration, Improved Rendering, and more


2017 started with a bang: The Video team has pushed out a myriad of improvements to our Programmable Video SDKs. If you’ve been thinking about adding a video feature to your web or mobile application or wondering where to start with WebRTC development, jump in now with our Video SDKs. Here’s what’s new.

Rooms API

If you’ve used just a handful of real-time video apps, you’ve probably noticed that each of them implements video calling a little bit differently.

 

  • Social media apps add video as a feature on top of chat, letting two users take a conversation to real-time video whenever they like. 
  • Business collaboration apps often use a traditional calling model, where one user “rings” another and waits for them to answer. 
  • Conferencing apps often revolve around the concept of an “invitation,” where users connect to a named conference after being invited.

Each of these models is a bit different; there’s no “one-size-fits-all” approach to calling when it comes to mobile video.

Video’s Rooms API, introduced last year, gives you the ability to build a flexible call model, so you can easily create any of the above experiences and more. The approach is simple:

 

  1. Present the UI that lets the user initiate a video session in whatever way you like.
  2. When you’re ready for a user to connect to a video session, use our Video SDKs to connect the user to a Room, passing a unique identifier from your application to Twilio to use as the Room’s unique name.
  3. Whenever other users need to connect to the same video session, just connect them to the Room using the unique name from step 2.

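The key idea is that the Room’s unique name is the only shared state: anyone who connects with the same name lands in the same session. Here is a minimal sketch of that model — a hypothetical in-memory registry to illustrate the idea, not the Twilio SDK itself:

```javascript
// Hypothetical in-memory registry (NOT Twilio code) illustrating the
// Rooms model: whoever connects with the same unique name joins the
// same session.
const rooms = new Map();

function connect(uniqueName, participant) {
  // Create the room on first use, then reuse it for the same name.
  if (!rooms.has(uniqueName)) {
    rooms.set(uniqueName, { name: uniqueName, participants: [] });
  }
  const room = rooms.get(uniqueName);
  room.participants.push(participant);
  return room;
}

// Your app picks the identifier: a chat thread ID, a meeting code,
// a support ticket number — whatever fits your call model.
const a = connect('support-ticket-1234', 'alice');
const b = connect('support-ticket-1234', 'bob');
```

In the real SDKs, the equivalent step is a single connect call that takes an access token plus the Room’s unique name; the point is that the call model itself (chat upgrade, dial-and-ring, or invitation) lives entirely in your application.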
The Rooms API replaces the earlier Conversations API that was available in early beta releases of our Video SDKs.

CallKit and Extensions Support in iOS

The release of iOS 10 brought us CallKit, giving developers the ability to build better VoIP apps for the Apple ecosystem than ever before.
To better support CallKit, we’ve added the new TVIAudioController class, which gives you the ability to configure CallKit audio sessions provided by our SDK, and to stop and start audio in response to CallKit’s audio session callbacks. To see how to use these APIs in your own CallKit app, take a look at our Video Quickstart project on GitHub.
Additionally, iOS 10 Extensions let you extend any iOS interaction with your app’s functionality. With the beta-7 release of our iOS SDK you can now use our Video SDK in an iOS extension. This gives you the ability to add your app’s video or audio calling functionality into all kinds of interesting places!

New guides for screen sharing

Screen sharing is one of the most popular features of our SDKs, so we’ve made it even easier to get started.

Screen capture is really just one of many ways you can use our Video Capture APIs to grab video frames from any source in your mobile application, not just the camera. To see custom screen capturers built on top of these lower-level capture APIs, check out the Android and iOS examples on GitHub.

Network handoff in IPv4 environments

We beefed up call resiliency in both our iOS and Android SDKs with support for network handoff. Now, when your user moves from one network to another, our Video SDK automatically handles the network swap, making sure your user’s call doesn’t drop.
This feature works with IPv4 networks today; support for NAT’d / mapped IPv6 is coming soon.

Advanced camera controls in Android

Some apps use the camera for more than just getting two users connected with one another. That’s why we added takePicture and updateCameraParameters to the CameraCapturer class in the Android SDK. They let developers snap a picture, enable or disable the flash, and change the camera’s parameters, even during an active video session.
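
To show the shape of those two calls, here is a hypothetical stand-in written in JavaScript: the method names mirror the Android SDK’s, but the class body is a mock for illustration, not Twilio code.

```javascript
// Hypothetical mock (NOT the Android SDK) mirroring the shape of the
// two new CameraCapturer calls.
class MockCameraCapturer {
  constructor() {
    this.params = { flashMode: 'off' }; // stand-in for camera parameters
    this.pictures = 0;
  }

  // Snap a still frame mid-session; the listener receives the picture.
  takePicture(listener) {
    this.pictures += 1;
    listener({ width: 1920, height: 1080 });
  }

  // Mutate live camera parameters via a caller-supplied updater.
  updateCameraParameters(updater) {
    updater(this.params);
  }
}

const capturer = new MockCameraCapturer();
capturer.updateCameraParameters((p) => { p.flashMode = 'torch'; }); // flash on
let picture = null;
capturer.takePicture((p) => { picture = p; });
```

The updater-callback pattern is what lets parameters change while a session is active: the SDK applies your changes at a safe point in the capture pipeline rather than letting you poke the camera directly.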

Improved display and performance in iOS rendering

We continue to optimize video rendering in our iOS SDKs, both in the Metal-based renderer we introduced last summer and in the OpenGL ES-based renderer used where Metal is not available.
The first feature helps you build better-looking video apps: We added new content modes to the iOS SDK’s TVIVideoViewRenderer class. These new content modes enable you to control how your video will be fit into the associated view.
  • ScaleToFill fills the view without maintaining aspect ratio.
  • ScaleAspectFit scales the video to fit within the view while maintaining aspect ratio.
  • ScaleAspectFill scales the video to fill the view while maintaining aspect ratio.

Check out the TVIVideoViewRenderer API reference here.
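
The difference between the three modes is just scaling geometry. As a quick sketch — a standalone helper whose mode names mirror the SDK’s but which is otherwise illustrative, not SDK code — here is how each mode maps a frame into a view:

```javascript
// Illustrative helper (not SDK code): the size at which a video frame
// is drawn inside a view under each content mode.
function renderedSize(video, view, mode) {
  const scaleX = view.width / video.width;
  const scaleY = view.height / video.height;
  switch (mode) {
    case 'ScaleToFill': // stretch: fills the view, aspect ratio lost
      return { width: view.width, height: view.height };
    case 'ScaleAspectFit': { // letterbox: fits inside, aspect ratio kept
      const s = Math.min(scaleX, scaleY);
      return { width: video.width * s, height: video.height * s };
    }
    case 'ScaleAspectFill': { // crop: covers the view, aspect ratio kept
      const s = Math.max(scaleX, scaleY);
      return { width: video.width * s, height: video.height * s };
    }
  }
}

// A 640x480 frame in a 200x200 view: Fit letterboxes to 200x150,
// while Fill scales up until the shorter side covers the view.
const frame = { width: 640, height: 480 };
const view = { width: 200, height: 200 };
const fit = renderedSize(frame, view, 'ScaleAspectFit');
const fill = renderedSize(frame, view, 'ScaleAspectFill');
```

Which mode to pick is a product decision: Fit guarantees no pixels are cropped, Fill guarantees no letterbox bars, and ScaleToFill guarantees neither but never distorts the layout.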

In the most recent release of our iOS SDK, we moved to a zero-copy rendering pipeline for OpenGL ES, and we’ll have a similar update to our Metal renderer very soon. In this model, video frames are shared directly with the GPU rather than copied. This means your app gets more efficient, and your users get to keep talking longer. What could be better?

Start building with Video today

These improvements are just the start, with many more coming soon, including new features powered by our expanded media server team in Spain. With so much good stuff on the horizon, now’s a great time to get started building your video app.

To get started building with Video, check out our quickstart apps on GitHub.

You can also dive straight into the documentation.

We can’t wait to see what you build!
