With Autopilot, you can build AI-powered conversational IVRs that recognize user intent, collect data from users, answer frequently (or infrequently) asked questions, and route calls to other users.
Autopilot performs the speech recognition and natural language understanding (NLU) needed to detect what users say and match it to the Tasks they need to accomplish. Tasks can be programmed to ask questions to collect data, answer questions, or connect calls to other users. They are trained to recognize the different phrases or ways users might invoke a given Task.
You can use TwiML's <Autopilot> noun to connect a call to an Autopilot conversational IVR.
The following example shows how to use the <Autopilot> TwiML noun:
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect>
    <Autopilot>UAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Autopilot>
  </Connect>
</Response>
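If you generate TwiML from code rather than serving a static file, the same document can be built programmatically. Below is a minimal sketch using Python's standard library; the function name and the placeholder Assistant SID are illustrative, not part of the Twilio API:

```python
import xml.etree.ElementTree as ET


def autopilot_twiml(assistant_sid, action=None):
    """Build a <Response><Connect><Autopilot> TwiML document.

    assistant_sid: the Autopilot Assistant SID (UAXXXX...).
    action: optional URL for the <Connect> action attribute.
    """
    response = ET.Element("Response")
    connect = ET.SubElement(response, "Connect")
    if action:
        connect.set("action", action)
    autopilot = ET.SubElement(connect, "Autopilot")
    autopilot.text = assistant_sid
    return ET.tostring(response, encoding="unicode")


print(autopilot_twiml("UAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"))
```

In practice you would more likely use the official Twilio helper library for your language, which produces equivalent markup.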
The action attribute takes an absolute or relative URL as a value. The Autopilot dialogue ends either through a handoff or through the absence of a listen.
If you do not provide an action parameter, Twilio will POST to the URL that houses the active TwiML document.
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect action="https://myapp.com/autopilot">
    <Autopilot>UAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Autopilot>
  </Connect>
</Response>
If you do not provide an action URL, Twilio will re-request the URL that hosts the TwiML you just executed. This can lead to unwanted looping behavior if you're not careful. See our example below for more information.
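One way to avoid that loop is to make the action endpoint return TwiML that does not connect back to Autopilot. The sketch below builds such a response with Python's standard library; the function name and the spoken message are assumptions for illustration:

```python
import xml.etree.ElementTree as ET


def action_response():
    """Build TwiML for the <Connect> action callback.

    Returning a document that ends the call (a <Say> followed by
    <Hangup/>) instead of re-connecting to Autopilot means that
    re-requesting this URL cannot loop back into the dialogue.
    """
    response = ET.Element("Response")
    say = ET.SubElement(response, "Say")
    say.text = "Thank you, goodbye."
    ET.SubElement(response, "Hangup")
    return ET.tostring(response, encoding="unicode")


print(action_response())
```

Your real action handler could instead transfer the call with <Dial> or enqueue it; the point is simply that it should serve different TwiML than the document containing <Connect><Autopilot>.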