
Voice Intelligence - Best Practices


Warning

Public Beta

Voice Intelligence is currently available as a public beta release. Some features are not yet implemented, and others may change before the product is declared Generally Available. Beta products are not covered by a Twilio SLA.

Learn more about beta product support.


Overview

Twilio Voice Intelligence transcribes your call recordings and then generates data insights from your conversations. This document covers best practices for working with recordings intended for transcription, assigning Participants to transcripts, and using webhooks.


Use dual-channel recordings

Using dual-channel recordings with Voice Intelligence not only provides higher accuracy, but also adds the ability to map and override participants with additional metadata for search and business reporting. The following guides cover enabling dual-channel recordings in different Twilio products.

Enable dual-channel recording with Conferences


Any custom implementations that use Conferences to orchestrate a meeting need to change how the recordings are created.

By default, a conference recording is single-channel. To get a dual-channel recording, record the Participant leg of the call when the Participant joins the Conference. Learn more about how to create a Conference Participant with Record set to true.

The recorded call leg is on the left channel of the recording, and all other participants are mixed on the right channel. When recording a particular call leg, record the leg with the most call time to avoid incomplete recordings. For example, on an inbound call, recording the customer's leg ensures that any customer utterances are captured, even if the agent has not yet joined the conference.
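As a sketch of the approach above, the Conference Participant can be created with `Record` enabled via the Participants endpoint. The Conference SID and phone numbers below are placeholders:

```shell
# Add a Participant to an existing Conference with recording enabled.
# CFXXXX... and the phone numbers are illustrative placeholders.
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Conferences/CFXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Participants.json" \
  --data-urlencode "From=+15017122661" \
  --data-urlencode "To=+15558675310" \
  --data-urlencode "Record=true" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```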


Make third-party media recordings public

Voice Intelligence supports third-party media recordings. If your call recordings aren't stored in Twilio and you want to use them with Voice Intelligence, the recordings need to be publicly accessible for the duration of transcription. The recordings can be publicly hosted or, better, shared with a time-limited pre-signed URL. For example, to share a recording in an existing AWS S3 bucket, follow this guide. Then provide the public recording URL as the media_url when creating a Transcript.
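For recordings stored in S3, a time-limited pre-signed URL can be generated with the AWS CLI; the bucket and object key below are illustrative:

```shell
# Generate a pre-signed URL valid for one hour for a recording in S3.
# Bucket name and key are placeholders for your own storage layout.
aws s3 presign s3://my-recordings-bucket/calls/recording.wav --expires-in 3600
```

The printed URL can then be supplied as the media_url when creating the Transcript.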


Create an audio recording from Twilio Video


If you use Twilio Video and want to transcribe the audio of a Twilio Video recording, additional processing is needed to create an audio recording that can be submitted for transcription.

To create a dual-channel audio recording, first transcode a separate audio-only composition for each participant in the Video Room.

Create a dual-channel audio recording


curl -X POST "https://video.twilio.com/v1/Compositions" \
  --data-urlencode "AudioSources=PAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --data-urlencode "StatusCallback=https://www.example.com/callbacks" \
  --data-urlencode "Format=mp4" \
  --data-urlencode "RoomSid=RMXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  -u $TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN

Next, download the Media from these compositions and merge them into a single stereo audio file.
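A sketch of the download step: the composition Media endpoint redirects to a temporary URL, so follow redirects with `-L`. The Composition SIDs are placeholders:

```shell
# Download the media for each audio-only composition.
# CJXXXX... SIDs are illustrative; one composition exists per participant.
curl -L -o speaker1.mp4 \
  "https://video.twilio.com/v1/Compositions/CJXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1/Media" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
curl -L -o speaker2.mp4 \
  "https://video.twilio.com/v1/Compositions/CJXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX2/Media" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```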

Download the Video Room Media


ffmpeg -i speaker1.mp4 -i speaker2.mp4 -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map "[a]" -f flac -bits_per_raw_sample 16 -ar 44100 output.flac

If the recording durations for the participants differ, avoid overlapping audio tracks: use ffmpeg to create a single stereo audio track, adding delay to cover the difference in track length. For example, if one audio track lasts 63 seconds and the other 67 seconds, use ffmpeg to create a stereo file with four seconds of delay on the first track to match the length of the second track.

Create a single stereo audio track


ffmpeg -i speaker1.wav -i speaker2.wav -filter_complex "aevalsrc=0:d=${second_to_delay}[s1];[s1][1:a]concat=n=2:v=0:a=1[ac2];[0:a]apad[ac1];[ac1][ac2]amerge=2[a]" -map "[a]" -f flac -bits_per_raw_sample 16 -ar 44100 output.flac

Finally, send a CreateTranscript request to Voice Intelligence, providing a publicly accessible URL for this audio file as the media_url in MediaSource.
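The final step can be sketched as a CreateTranscript call; the Service SID and media URL below are placeholders, and the Channel JSON shape assumes the v2 Intelligence Transcripts endpoint:

```shell
# Create a Transcript from the publicly accessible merged audio file.
# GAXXXX... and the URL are illustrative placeholders.
curl -X POST "https://intelligence.twilio.com/v2/Transcripts" \
  --data-urlencode "ServiceSid=GAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --data-urlencode 'Channel={"media_properties": {"media_url": "https://example.com/output.flac"}}' \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```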


Include metadata for Call participants

By default, Voice Intelligence assumes Participant One is on channel one and Participant Two is on channel two, and associates a phone number from the recording. Since a recording can be created in different ways, this assumption may not hold for all use cases.

For any such cases, or when you need to attach additional metadata to call participants, use the Voice Intelligence APIs to create a Transcript, providing the optional Participant metadata and mapping each participant to the correct audio channel.
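As a sketch, participant metadata and channel mapping can be supplied in the Channel parameter when creating the Transcript. The field names below follow the Transcript resource's participants object, but the SIDs, names, and roles are illustrative placeholders:

```shell
# Create a Transcript with participants mapped to audio channels.
# All SIDs and participant values are placeholders.
curl -X POST "https://intelligence.twilio.com/v2/Transcripts" \
  --data-urlencode "ServiceSid=GAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --data-urlencode 'Channel={
    "media_properties": {"source_sid": "REXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"},
    "participants": [
      {"channel_participant": 1, "role": "Agent", "full_name": "Jane Doe", "user_id": "agent-42"},
      {"channel_participant": 2, "role": "Customer", "full_name": "John Smith"}
    ]
  }' \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```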


Provide a CustomerKey with the CreateTranscript API

Providing a CustomerKey with the CreateTranscript API allows you to map a Transcript to an internal identifier known to you. This can be a unique identifier within your system to track the transcripts. The CustomerKey is also included in the webhook callback when the results for the Transcript and Operators are available. This is an optional field and cannot be used in place of the Transcript SID in APIs.
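A minimal sketch of passing a CustomerKey; the Service SID, recording SID, and key value are placeholders:

```shell
# Create a Transcript tagged with an internal identifier via CustomerKey.
# The CustomerKey value is whatever unique ID your own system uses.
curl -X POST "https://intelligence.twilio.com/v2/Transcripts" \
  --data-urlencode "ServiceSid=GAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --data-urlencode "CustomerKey=crm-ticket-20481" \
  --data-urlencode 'Channel={"media_properties": {"source_sid": "REXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}}' \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```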


Check the status of the transcript with the webhook callback

Use the webhook callback to know when a CreateTranscript request has completed and the results are available. This is preferable to polling the GET /Transcript endpoint. The webhook callback URL can be configured in the Voice Intelligence Service settings.
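Besides the Console settings, the callback URL can be set on the Service via the API. The sketch below assumes the v2 Services update endpoint accepts WebhookUrl and WebhookHttpMethod parameters; the Service SID and URL are placeholders:

```shell
# Configure the webhook callback URL on a Voice Intelligence Service.
# GAXXXX... and the callback URL are illustrative placeholders.
curl -X POST "https://intelligence.twilio.com/v2/Services/GAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --data-urlencode "WebhookUrl=https://www.example.com/transcript-callbacks" \
  --data-urlencode "WebhookHttpMethod=POST" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN"
```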

