Add TAC to your agent
This guide walks you through connecting your AI agent to Twilio's platform using the TAC SDK. You'll configure channels, handle messages, retrieve memory context, define custom tools, and search knowledge bases — all wired together in a single integration flow.
If you haven't tried TAC yet, start with the Quickstart to get a working application first. To add a single capability to an existing TAC application, see the how-to guides in the sidebar.
Before you begin, make sure you have:
- A Twilio account with API credentials (Account SID, Auth Token, API Key, and API Secret)
- A Twilio phone number configured for SMS and/or Voice
- A Conversation Configuration created through the Console or REST API
- Python 3.10+ or Node.js 22.13.0+
```shell
pip install "twilio-agent-connect[server]"
```
Create a .env file in your project root with your Twilio credentials.
```
TWILIO_ACCOUNT_SID=your_account_sid
TWILIO_AUTH_TOKEN=your_auth_token
TWILIO_API_KEY=your_api_key_sid
TWILIO_API_SECRET=your_api_key_secret
TWILIO_PHONE_NUMBER=+1234567890
TWILIO_CONVERSATION_CONFIGURATION_ID=your_configuration_id
TWILIO_VOICE_PUBLIC_DOMAIN=your-ngrok-domain.ngrok-free.app
```
For Voice, set TWILIO_VOICE_PUBLIC_DOMAIN to your ngrok domain without https://.
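If your deployment tooling hands you a full URL rather than a bare hostname, a small helper can strip the scheme before you write the value. This is a convenience sketch, not part of the SDK; `normalize_public_domain` is a hypothetical name.

```python
from urllib.parse import urlparse

def normalize_public_domain(value: str) -> str:
    """Strip a URL scheme, path, and trailing slash so only the hostname remains."""
    value = value.strip()
    if "://" in value:
        # urlparse only extracts the host when a scheme is present
        value = urlparse(value).netloc
    return value.rstrip("/").split("/")[0]
```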
Create a TAC instance that loads configuration from your environment variables.
```python
from dotenv import load_dotenv
from tac import TAC, TACConfig

load_dotenv()
tac = TAC(config=TACConfig.from_env())
```
Configure channels for each communication method you want to support. Set memory_mode to "always" (Python) or memoryMode to "always" (TypeScript) if you want TAC to retrieve memory context with each incoming message. All channels share the same on_message_ready callback, so your agent handles every channel with a single function.
```python
from tac.channels.voice import VoiceChannel, VoiceChannelConfig
from tac.channels.sms import SMSChannel, SMSChannelConfig
from tac.channels.whatsapp import WhatsAppChannel, WhatsAppChannelConfig
from tac.channels.rcs import RCSChannel, RCSChannelConfig

voice_channel = VoiceChannel(tac, config=VoiceChannelConfig(memory_mode="always"))
sms_channel = SMSChannel(tac, config=SMSChannelConfig(memory_mode="always"))
whatsapp_channel = WhatsAppChannel(tac, config=WhatsAppChannelConfig(memory_mode="always"))
rcs_channel = RCSChannel(tac, config=RCSChannelConfig(memory_mode="always"))
```
For more details on channel configuration, see Channels.
Register a callback that TAC calls when a user message is ready for processing. The callback receives the user's message, a ConversationSession with context, and an optional memory response. Return the response string and TAC automatically sends it through the correct channel.
```python
from tac.models.session import ConversationSession
from tac.models.tac import TACMemoryResponse

conversation_history: dict[str, list] = {}

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    # Process with your LLM (see following sections)
    llm_response = await generate_response(conversation_history[conv_id])

    conversation_history[conv_id].append({"role": "assistant", "content": llm_response})

    return llm_response

tac.on_message_ready(handle_message_ready)
```
Define your agent's role and behavior through a system prompt. You can combine a static prompt with dynamic memory context.
The Python SDK provides MemoryPromptBuilder to format memory data into a prompt string:
```python
from tac.adapters.prompt_builder import MemoryPromptBuilder

SYSTEM_PROMPT = "You are a helpful customer service agent. Be concise and friendly."

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    # Build memory context from profile traits and conversation history
    memory_context = MemoryPromptBuilder.build(
        memory_response=memory,
        context=context,
    )

    # Combine your prompt with memory context
    if memory_context:
        system_prompt = f"{SYSTEM_PROMPT}\n\n{memory_context}"
    else:
        system_prompt = SYSTEM_PROMPT

    # Pass to your LLM as the system message
    messages = [
        {"role": "system", "content": system_prompt},
        *conversation_history[conv_id],
    ]

    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )

    llm_response = response.choices[0].message.content
    conversation_history[conv_id].append({"role": "assistant", "content": llm_response})

    return llm_response
```
This approach works with any LLM provider. For OpenAI specifically, the Python SDK provides a built-in adapter that retrieves memory context automatically.
```python
from openai import AsyncOpenAI
from tac.adapters.openai import with_tac_memory

openai_client = AsyncOpenAI()

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    # Wrap the OpenAI client — memory context is added to your messages before each LLM call
    client = with_tac_memory(openai_client, memory, context)

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation_history[context.conversation_id],
    )

    llm_response = response.choices[0].message.content

    return llm_response
```
Use Enterprise Knowledge bases to give your agent grounded responses.
```python
from tac.tools import create_knowledge_tool

knowledge_tool = await create_knowledge_tool(
    knowledge_client=tac.knowledge_client,
    knowledge_base_id="know_knowledgebase_xxxxx",
)

# Add to your LLM's tool list alongside custom tools
tools = [check_order_status, knowledge_tool]
```
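The tools list above includes a custom `check_order_status` tool that the guide doesn't define. As a minimal sketch, a custom tool can be a plain Python function paired with an OpenAI-style function-calling schema; the `ORDERS` lookup table stands in for your real order system, and both names are illustrative, not part of the SDK:

```python
import json

# Hypothetical order store — replace with a call to your order-management system.
ORDERS = {"ORD-1001": "shipped", "ORD-1002": "processing"}

def check_order_status(order_id: str) -> str:
    """Return the status of an order as a JSON string the LLM can read."""
    status = ORDERS.get(order_id)
    if status is None:
        return json.dumps({"error": f"Order {order_id} not found"})
    return json.dumps({"order_id": order_id, "status": status})

# OpenAI-style tool schema so the model knows when and how to call the function.
check_order_status_schema = {
    "type": "function",
    "function": {
        "name": "check_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The order ID, for example ORD-1001",
                },
            },
            "required": ["order_id"],
        },
    },
}
```

When the model returns a tool call, run the function with the parsed arguments and append the JSON result to the conversation before requesting the final response.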
The previous examples wait for the full LLM response before sending it to the caller. For Voice conversations, you can stream tokens from your LLM as they're generated so the caller hears the response sooner. Pass an async generator to send_response instead of a complete string.
```python
from collections.abc import AsyncGenerator

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    async def stream_tokens() -> AsyncGenerator[str, None]:
        response_tokens = []

        stream = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history[conv_id],
            stream=True,
        )

        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                token = chunk.choices[0].delta.content
                response_tokens.append(token)
                yield token

        full_response = "".join(response_tokens)
        conversation_history[conv_id].append({"role": "assistant", "content": full_response})

    if context.channel == "voice":
        await voice_channel.send_response(conv_id, stream_tokens())
    elif context.channel == "sms":
        llm_response = await generate_response(conversation_history[conv_id])
        conversation_history[conv_id].append({"role": "assistant", "content": llm_response})
        await sms_channel.send_response(conv_id, llm_response)

tac.on_message_ready(handle_message_ready)
```
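The SMS branch calls a `generate_response` helper that the guide leaves undefined. Here is a minimal sketch assuming the OpenAI client; `MAX_TURNS` and `trim_history` are illustrative additions to keep the prompt within the model's context budget, not SDK features:

```python
MAX_TURNS = 20  # illustrative cap on how many history entries to send per request

def trim_history(history: list[dict]) -> list[dict]:
    """Keep only the most recent entries so the prompt stays a bounded size."""
    return history[-MAX_TURNS:]

async def generate_response(history: list[dict]) -> str:
    # Deferred import so the pure helper above works without the openai package
    from openai import AsyncOpenAI

    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=trim_history(history),
    )
    return response.choices[0].message.content or ""
```

For production you would likely prepend your system prompt (and memory context) to the trimmed history before the call, as shown in the system-prompt section.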
TAC includes a built-in server that sets up webhook and WebSocket routes for your configured channels.
```python
from tac.server import TACFastAPIServer

if __name__ == "__main__":
    server = TACFastAPIServer(
        tac=tac,
        voice_channel=voice_channel,
        messaging_channels=[sms_channel],
    )
    server.start()
```
The server registers routes based on the channels you configure:
Voice routes (when voice_channel is provided):
- `POST /twiml` — Incoming call handler
- `WebSocket /ws` — Streaming via Conversation Relay
- `POST /conversation-relay-callback` — Completion callback
Messaging route (when messaging_channels is provided):
- `POST /webhook` — Webhook for SMS and other messaging channels
Conversation Intelligence route (when cintel_webhook_path is set on TACServerConfig):
- `POST <cintel_webhook_path>` — Conversation Intelligence events (for example, `/ci-webhook`). Disabled by default.
- Escalate to a human agent: Transfer conversations to human agents through a Twilio Studio Flow.
- Channels: Learn more about channel configuration and routing.
- Troubleshooting: Common issues and solutions.