Build a Mock Interview Agent Using Twilio WhatsApp API, LangGraph, and OpenAI

July 15, 2025
Written by Denis Kuria

Preparing for interviews is often a lonely process. Mostly, you Google common questions, rehearse alone, or ask a friend to help. But these methods rarely mimic the pressure and unpredictability of real interviews. On the other hand, hiring platforms offer mock interviews, but they can be expensive and inflexible.

That’s where WhatsApp comes in. It’s familiar, widely used, and already installed on most people’s phones. In this tutorial, you'll learn how to build a mock interview agent using LangGraph, OpenAI’s GPT-4o mini, and Twilio’s WhatsApp API. The agent will ask questions based on your chosen role, review your responses, and offer customized feedback to help you understand how to answer interview questions and what interviewers are looking for.

Prerequisites

To follow along with this tutorial, ensure you have the following:

  • A recent version of Python (3.9 or later) installed on your computer
  • A Twilio account with access to the WhatsApp Sandbox
  • An OpenAI account with API access
  • A smartphone with WhatsApp installed
  • Ngrok, to expose your local server to the internet

You don't need advanced knowledge of AI. You will learn the necessary concepts to build this agent as you progress.

The full source code can be found in this GitHub repository.

Setting up your development environment

Having covered the basics, you're ready to set things up. This includes creating a Python workspace, installing the required libraries, and configuring both Twilio and OpenAI.

Python Environment Setup

Start by creating a dedicated directory for your project anywhere on your computer and name it mock-interview. Then open it using your preferred IDE; this tutorial uses Visual Studio Code. Head to the IDE’s terminal and run the following command to create a virtual environment.

python -m venv interview_bot_env

Then activate the virtual environment by running the following command on the terminal.

# On Windows (Command Prompt)
interview_bot_env\Scripts\activate

# On Windows (PowerShell)
interview_bot_env\Scripts\Activate.ps1

# On macOS/Linux
source interview_bot_env/bin/activate

When you see (interview_bot_env) at the beginning of your command prompt, it indicates that your virtual environment is active. Creating a virtual environment will help you manage dependencies by keeping them isolated from the system-wide Python installation and other projects.

Installing Required Libraries

With your virtual environment active, you need to install the packages that your agent will need to run successfully. These will be:

  • langchain-openai: Provides integration with OpenAI's language models
  • langgraph: Enables building and managing generative AI agent workflows
  • flask: Creates the web server to handle WhatsApp webhooks
  • twilio: Official SDK for sending and receiving WhatsApp messages
  • python-dotenv: Loads environment variables from a .env file

To install the above packages, run the following command in the terminal:

pip install flask twilio langchain-openai python-dotenv langgraph
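Optionally, you can pin the exact versions the install resolved, so the setup stays reproducible later:

pip freeze > requirements.txt

You or anyone else can then recreate the environment in a fresh virtual environment with pip install -r requirements.txt.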

After the installation is complete, you are ready to move on to the next phase of the setup.

Creating the Environment Configuration File

Your AI agent will need API keys to access the OpenAI and Twilio services. But you should never hard-code them in your source code, as this might expose them to bad actors. Instead, create a file named .env in the mock-interview directory. This is where all the keys will reside. Go ahead and paste the following placeholders:

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
# Twilio Configuration
TWILIO_ACCOUNT_SID=your_twilio_account_sid_here
TWILIO_AUTH_TOKEN=your_twilio_auth_token_here
TWILIO_WHATSAPP_NUMBER=whatsapp:twilio_whatsapp_number

You will later replace these placeholders with the actual keys and your Twilio number.
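If you want to confirm the file is wired up correctly before moving on, you can run a small throwaway script like the sketch below (the file name check_env.py is just a suggestion). It prints which variables are set without ever revealing the secrets themselves:

# check_env.py: optional sanity check for your .env file
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

for key in ["OPENAI_API_KEY", "TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN", "TWILIO_WHATSAPP_NUMBER"]:
    # Print only whether each variable is present, never its value
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")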

Configuring Your Twilio Account

Log in to your Twilio account and navigate to Messaging > Try it out > Send a WhatsApp message. This will open the Twilio WhatsApp Sandbox as shown below:

Configuring WhatsApp sandbox on Twilio console

The sandbox enables you to test the agent without undergoing WhatsApp's business approval process. Copy the WhatsApp number and add it to your .env file. Then, go to WhatsApp and send the displayed code to the Twilio number shown.

Verifying a WhatsApp number with the Twilio sandbox

You should receive a confirmation message welcoming you to the sandbox, like the one above. This confirms that your number has been verified.

Now, proceed to your Twilio Console and find your Account SID and Auth Token on the main dashboard.

Twilio Dashboard displaying Account SID and Auth Token

Update your .env file with these values to complete setting up Twilio for your project.

Configuring OpenAI API Access

This is the final step in your development environment setup. Visit the OpenAI API platform page and create an account or log in. Once logged in, navigate to the API Keys section in the left sidebar. Click Create new secret key and give it a descriptive name like WhatsApp Interview Bot. Copy the generated key immediately because OpenAI only shows it once for security reasons. Then, update your .env file with your OpenAI API key. If your account is new, OpenAI will give you test credits. If not, you will need to add credits to your account. $5 is sufficient since the GPT-4o-mini model you will use for this project is cost-effective.
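Optionally, you can verify the key and credits work with a quick throwaway call before building anything. A minimal sketch, assuming your virtual environment is active and the .env file is filled in:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # make OPENAI_API_KEY available to the client

# A cheap one-off request; any short reply confirms the key and billing work
print(ChatOpenAI(model="gpt-4o-mini").invoke("Reply with the word ready.").content)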

You now have everything you need to start building the application. Let's begin.

Building the Autonomous Agent

In this section, you will concentrate on creating the agent itself. This will be the brain of the system: it will manage the conversation, ask questions, and guide the user through the interview. Inside your mock-interview folder, create a new file named interview_graph.py. All the code in this section will go into this file.

Importing the Required Libraries and Initializing the Agent Configuration

Open the interview_graph.py file and start by importing the following classes and functions from the libraries you installed earlier.

import logging
import os

from dotenv import load_dotenv
from langchain_core.messages import SystemMessage
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

These imports give you access to the classes and functions you will use throughout the file. The create_react_agent function is the heart of the agent: it lets you quickly create an agent that can reason and act (ReAct) without coding the loop from scratch. ChatOpenAI creates the object that connects to the OpenAI API and its models. The tool import is a decorator that marks your functions as actions the agent can perform during interview conversations. As for PromptTemplate, it helps you build dynamic, reusable prompts; a prompt is how you tell an AI model or system what you want it to do and how to do it. SystemMessage defines the behavior and tone of the agent. Finally, os, logging, and load_dotenv let you load environment variables from the .env file and log errors while debugging. Note that PromptTemplate is imported from langchain_core.prompts, which is installed as a dependency of the packages above, so you don't need the separate langchain package.

After the imports, load the .env file to make its values available to your program, set up a logger for error reporting, and initialize the large language model that will communicate with OpenAI’s API.

load_dotenv()
logger = logging.getLogger(__name__)
llm = ChatOpenAI(model_name="gpt-4o-mini", temperature=0.5)

The most important parameter here is temperature, which controls the creativity of the model’s output. The lower the temperature, the more deterministic the output; the higher the temperature, the more varied it becomes. For the mock interview agent, aim for a value that balances creativity and consistency. This prevents the agent from sounding robotic while maintaining professional quality. You don't have to get it right the first time; adjust the value as you test and iterate on your agent’s responses.
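If you want to see the effect for yourself, a quick throwaway experiment like the sketch below sends the same prompt at three different temperatures so you can compare the outputs. This is exploration only, not part of the final agent:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()
prompt = "Ask one interview question for a junior Python developer."

# The same request at increasing temperatures; outputs grow more varied
for temp in (0.0, 0.5, 1.0):
    model = ChatOpenAI(model_name="gpt-4o-mini", temperature=temp)
    print(f"--- temperature={temp} ---")
    print(model.invoke(prompt).content)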

Defining the Core Interview Prompt Templates

After setting up the LLM, the next step is creating the prompt templates. These will guide the agent on how to formulate its messages and what details to include. You will need a template for each part of the conversation that the agent will handle. This will include a welcome prompt, a question prompt, a feedback prompt, a help prompt, and a final review prompt. For each prompt template, you will need to specify the input variables, the pieces of information the agent needs for that message, and the template, which is the structure the agent will follow when responding.

Start by defining the welcome prompt.

welcome_prompt = PromptTemplate(
    input_variables=[],
    template="""
    Create a warm, professional welcome message for a mock interview coach.
    Be concise, not more than 80 words.
    Include:
    - Friendly greeting with a wave emoji
    - Brief explanation of the 5-step process
    - What info you need (role, experience level, number of questions)
    - What happens during the interview (questions + feedback)
    - Available help options during the session
    - End by asking for their job role, experience level, and number of questions one at a time
    Format for WhatsApp: Use *bold* for headers, _italic_ for emphasis, emojis where appropriate.
    Keep it conversational and encouraging.
    """
)

This template will be used to greet the user at the beginning of each WhatsApp session. It will inform the users of the steps they will encounter throughout the interview. In this case, there are no input_variables because it’s a static greeting for new users.

After greeting the user, the agent will ask for the user’s job role, experience level, and number of questions. This happens early so the agent can personalize the interview questions based on the user’s experience and niche. You will need to validate each of these responses using the LLM, as an invalid answer might lead to a failed session. For example, if a user provides an invalid job role, the agent will struggle to come up with the right questions. Go on and create the validation prompts.

role_validation_prompt = PromptTemplate(
    input_variables=["role"],
    template="""
    Validate and standardize this job role: {role}
    If valid: Return ONLY the clean, standardized job title.
    If invalid: Return "INVALID: [brief reason]"
    Examples:
    "software engineer" → "Software Engineer"
    "data scientist" → "Data Scientist" 
    "xyz123" → "INVALID: Not a recognizable job role"
    """
)
level_validation_prompt = PromptTemplate(
    input_variables=["level"],
    template="""
    Validate and standardize this experience level: {level}
    Valid levels: Entry (junior/new), Mid (experienced), Senior (expert/lead)
    Return ONLY: "Entry", "Mid", "Senior", or "INVALID: [reason]"
    Examples:
    "junior" → "Entry"
    "experienced" → "Mid"
    "5 years" → "INVALID: Use Entry, Mid, or Senior"
    """
)
num_questions_validation_prompt = PromptTemplate(
    input_variables=["num_questions"],
    template="""
    Extract the number from: {num_questions}
    Valid range: 1-10 questions
    Accept words or digits: "two"→2, "5"→5
    Return ONLY the integer or "INVALID: [reason]"
    Examples:
    "three" → 3
    "7" → 7
    "fifteen" → "INVALID: Must be 1-10"
    "abc" → "INVALID: Not a valid number"
    """
)

The role_validation_prompt takes the user’s job role input and validates it. If it is a valid job title, it cleans it up for consistency; if not, it rejects it with a clear reason. The same process applies when validating the experience level and the number of questions.
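You can sanity-check these templates in isolation, for example by temporarily appending a few lines like the sketch below to interview_graph.py and running it. The outputs shown in the comments are typical, not guaranteed, since model responses vary:

# Quick standalone checks of the validation prompts
print(llm.invoke(role_validation_prompt.format(role="sofware enginer")).content)
# typically returns "Software Engineer" (the model normalizes the typos)

print(llm.invoke(level_validation_prompt.format(level="junior")).content)
# typically returns "Entry"

print(llm.invoke(num_questions_validation_prompt.format(num_questions="three")).content)
# typically returns "3"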

After validating the interview setup questions, the next step is to define a prompt that the agent will use to generate the interview questions.

question_prompt = PromptTemplate(
    input_variables=["role", "experience_level", "previous_context", "question_number"],
    template="""
    You are interviewing for a {role} position at {experience_level} level.
    Previous context: {previous_context}
    Generate interview question #{question_number}.
    Requirements:
    - Natural, conversational tone
    - Appropriate difficulty for {experience_level} level
    - Relevant to {role} responsibilities
    - Avoid repeating previous topics
    - WhatsApp format: *bold* for emphasis, _italic_ for keywords
    Don't return tips, structure, or hints to answer the question.
    """
)

This prompt will generate personalized questions based on the user's job role and experience level, which are passed as input variables. The previous_context variable contains a summary of what has already been covered in the interview to avoid repeating topics.

The instruction not to return tips, structure, or hints is crucial. Without it, the agent might generate the question and hints together, which is not ideal for users who want to attempt an answer on their own first.

After the user answers the question, the agent will need to provide feedback on how they performed. For this, you will need a feedback prompt.

feedback_prompt = PromptTemplate(
    input_variables=["role", "experience_level", "question", "response"],
    template="""
    Provide expert feedback on this {role} interview answer at {experience_level} level.
    Question: {question}
    Answer: {response}
    Give concise feedback (max 600 chars) with:
    - Key strength shown
    - Main improvement area  
    - Missing elements (if any)
    - Specific next step
    **critical**: Always tell the user what an interviewer is looking for in a strong answer for that question
    If incomplete/weak, add: "_Interviewers look for..._" with key expected points.
    End with: "Ready for the next question, want to retry, or need an example answer?"
    Format for WhatsApp with *bold* headers and _italic_ emphasis.
    """
)

This prompt provides the LLM with the user’s role, experience_level, the question, and the response given. The LLM then critiques the response, pointing out what the user did well and how to improve. It also coaches the user on what interviewers look for in a good answer to the question at hand. But what happens when the user asks for help rather than answering? This is where you need the help prompt template.

help_prompt = PromptTemplate(
    input_variables=["role", "experience_level", "question"],
    template="""
    Provide quick coaching for this {role} interview question at {experience_level} level:
    {question}
    Give 2-3 key points and a simple answer structure (max 300 chars).
    Format for WhatsApp:
    - *Key Points:* bullet list
    - *Structure:* brief framework
    - _Tip:_ one actionable insight
    Be concise and actionable.
    """
)

This PromptTemplate uses role, experience_level, and question to generate coaching tips. When the user says they don’t know the answer to a question or explicitly asks for help, the agent will use this template to generate tips on how the question should be answered and give the user another try.

After looping through the specified number of questions, the agent will give the user final feedback on how they fared during the full interview session, along with tips on how to proceed. For this, you will need the final_review_prompt.

final_review_prompt = PromptTemplate(
    input_variables=["role", "experience_level", "interview_summary"],
    template="""
    Provide overall interview performance review for {role} at {experience_level} level.
    Interview summary: {interview_summary}
    Include:
    - *Overall Strengths:* 2-3 key positives
    - *Areas to Improve:* 2-3 specific areas
    - *Action Items:* concrete next steps
    Keep it encouraging but honest and concise max 600 chars. Format for WhatsApp.
    End with: "Want to practice more questions or wrap up?"
    """
)

This PromptTemplate uses role, experience_level, and interview_summary (a summary of your answers and feedback) to wrap up the interview. You can still continue the interview if you want more questions by telling the agent you want to practice more; because the agent tracks the number of questions you asked for, it can update that count at any stage of the interview.

Up to now, you have defined the PromptTemplates, but you haven’t seen how the agent will actually use them. In the next step, you will create tools that call these templates and send them to the language model when the agent needs to generate a message during the conversation.

Creating Tool Functions For Agent Tasks

Tools are helpers the agent can pick up when it needs to do something specific; in this case, actions like asking a question or giving feedback. Each tool wraps a function and describes what the function does, what it needs as input, and what it returns. When you add tools to your agent, it can decide which tool to use based on the conversation, which is what makes it autonomous. Proceed to define the tools you will need. Each PromptTemplate will have its own tool.

@tool
def generate_welcome_message() -> str:
    """Generate a personalized welcome message for new users."""
    print("[TOOL] generate_welcome_message() called")
    return llm.invoke(welcome_prompt.format()).content

@tool
def validate_role(role: str) -> str:
    """Validate and standardize the job role."""
    print(f"[TOOL] validate_role(role={role!r}) called")
    return llm.invoke(role_validation_prompt.format(role=role)).content

@tool
def validate_level(level: str) -> str:
    """Validate and standardize the experience level."""
    print(f"[TOOL] validate_level(level={level!r}) called")
    return llm.invoke(level_validation_prompt.format(level=level)).content

@tool
def validate_num_questions(num_questions: str) -> str:
    """Validate and extract the number of questions."""
    print(f"[TOOL] validate_num_questions(num_questions={num_questions!r}) called")
    result = llm.invoke(num_questions_validation_prompt.format(num_questions=num_questions)).content
    return result

@tool
def generate_interview_question(role: str, experience_level: str, previous_context: str, question_number: int, total_questions: int = None) -> str:
    """Generate a tailored interview question."""
    print(f"[TOOL] generate_interview_question(role={role!r}, experience_level={experience_level!r}, previous_context={previous_context!r}, question_number={question_number!r}, total_questions={total_questions!r}) called")
    question = llm.invoke(question_prompt.format(
        role=role,
        experience_level=experience_level,
        previous_context=previous_context,
        question_number=question_number
    )).content
    if total_questions:
        return f"*Question {question_number}/{total_questions}:*\n{question}"
    return question

@tool
def provide_feedback(role: str, experience_level: str, question: str, response: str) -> str:
    """Provide expert feedback on the user's interview answer."""
    print(f"[TOOL] provide_feedback(role={role!r}, experience_level={experience_level!r}, question={question!r}, response={response!r}) called")
    return llm.invoke(feedback_prompt.format(
        role=role,
        experience_level=experience_level,
        question=question,
        response=response
    )).content

@tool
def give_help(role: str, experience_level: str, question: str) -> str:
    """Provide coaching help for the current question."""
    print(f"[TOOL] give_help(role={role!r}, experience_level={experience_level!r}, question={question!r}) called")
    return llm.invoke(help_prompt.format(
        role=role,
        experience_level=experience_level,
        question=question
    )).content
@tool
def provide_final_review(role: str, experience_level: str, interview_summary: str) -> str:
    """Provide overall interview performance review."""
    print(f"[TOOL] provide_final_review(role={role!r}, experience_level={experience_level!r}, interview_summary={interview_summary!r}) called")
    return llm.invoke(final_review_prompt.format(
        role=role,
        experience_level=experience_level,
        interview_summary=interview_summary
    )).content

When the agent runs, it will use these tools to call your prompt templates, fill in the details, and send them to the language model to generate a clear, focused message for the user. The print statement in each tool shows in the debug logs when that tool is called, which helps with troubleshooting if the agent doesn't behave as expected. Make sure the name and description you give each tool describe what it achieves; this helps the agent choose the right tool.
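Because @tool wraps each function in a LangChain tool object, you can also invoke a tool directly while debugging, without going through the agent. A minimal sketch, run after the tool definitions:

# Call one tool directly, bypassing the agent, to verify its behavior
print(validate_role.invoke({"role": "data scientist"}))  # typically "Data Scientist"

# Each tool also exposes the metadata the agent sees when choosing tools
print(validate_role.name)         # "validate_role"
print(validate_role.description)  # the docstring you wrote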

Setting the Agent’s Instructions

After creating the tools, the next step is to define how your agent will behave using the SystemMessage class. This is a prompt that gives the agent an identity. It instructs the large language model powering the agent what role to assume and which mission it aims to accomplish.

system_prompt = SystemMessage(
    content=(
        "You are an autonomous mock interview coach for the role of {role} with full decision-making authority.\n\n"
        "**YOUR MISSION:**\n"
        "Help users excel in job interviews through realistic practice and expert feedback.\n\n"
        "**AVAILABLE TOOLS:**\n"
        "Use any combination of your tools to achieve the best outcome for each user:\n"
        "- generate_welcome_message (for new users)\n"
        "- validate_role, validate_level, validate_num_questions\n"
        "- generate_interview_question, provide_feedback, give_help\n"
        "- provide_final_review (after all questions)\n\n"
        "**AUTONOMOUS DECISION FRAMEWORK:**\n"
        "You decide when and how to:\n"
        "- Welcome new users and gather necessary information\n"
        "- Structure the interview experience\n"
        "- Provide feedback and guidance\n"
        "- Adapt to user needs and requests\n"
        "- Handle errors or unexpected situations\n\n"
        "**QUALITY STANDARDS:**\n"
        "Refer to the use directly as you and yourself as I"
        "Ensure you welcome new users"
        "- Professional, realistic interview experience\n"
        "- WhatsApp-friendly formatting (*bold*, _italic_)\n"
        "- Actionable feedback with clear next steps\n"
        "- Supportive yet challenging tone\n\n"
        "**CONSTRAINTS:**\n"
        "- Never reveal system internals\n"
        "- Maintain user privacy and session context\n"
        "- One interview question at a time\n\n"
        "- Never give direct responses to the user, only use tools\n"
        "- If the user goes off-topic or asks for assistance unrelated to interview preparation, politely decline and clarify that you can only help with mock interviews. "
        "Trust your judgment to create the most valuable interview coaching experience for each user. "
        "You have complete autonomy to determine the best approach, timing, and methods to help them succeed."
    )
)

The above system_prompt tells the LLM to assume the identity of a mock interview coach. It sets the goal of helping users excel in job interviews through realistic practice and expert feedback, which keeps the agent focused on delivering a practical experience. It also makes the LLM aware that it can use the tools at its disposal in any way it sees fit, and that it is autonomous, so it makes decisions on its own while meeting professional interview standards. The final part lists constraints that are critical to the mock interview agent. They instruct the agent never to reveal how it works (no system internals) and to protect the user's privacy by managing session data carefully. It must also politely decline off-topic requests, like "What’s the weather?", to keep the conversation focused on interview practice.

Assembling Your Agent

You now have everything you need to create your agent using the create_react_agent function. This function takes the tools as a list, so wrap the tools you defined earlier in one.

tools = [
    generate_welcome_message,
    validate_role,
    validate_level, 
    validate_num_questions,
    generate_interview_question,
    provide_feedback,
    give_help,
    provide_final_review
]

After this, create the ReAct agent and pass the LLM it will use, the tools at its disposal, and finally the system prompt that will guide it.

graph = create_react_agent(
    model=llm,
    tools=tools,
    prompt=system_prompt,
)
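Before adding session handling, you can give the assembled agent a quick smoke test from a Python shell or a temporary script that imports this file. A minimal sketch:

# One-off smoke test: send a single message straight through the graph
result = graph.invoke({"messages": [("user", "Hi, I'd like to practice interviewing.")]})
print(result["messages"][-1].content)  # the agent's final reply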

Now that you have your agent ready, you need a way to manage users and their conversations.

Managing User Conversations

Go ahead and create a function named run_interview that connects the users to your agent.

def run_interview(user_id: str, user_message: str, session_store: dict) -> str:
    # Initialize or retrieve user state
    user_state = session_store.get(user_id, {
        "messages": [],
        "is_new_user": True
    })
    messages = user_state["messages"]
    is_new_user = user_state.get("is_new_user", True)
    # For new users, let the agent handle welcome via tool
    if is_new_user:
        # Add a system message to trigger welcome behavior
        messages.append(("system", "New user started conversation."))
        user_state["is_new_user"] = False
    # Add user message
    messages.append(("user", user_message))
    try:
        # Run the agent
        result = None
        for state in graph.stream({"messages": messages}, stream_mode="values"):
            result = state
        # Extract AI response
        ai_messages = [m for m in result["messages"] if getattr(m, "type", None) == "ai"]
        tool_messages = [m for m in result["messages"] if getattr(m, "type", None) == "tool"]
        if ai_messages:
            reply = ai_messages[-1].content
        elif tool_messages:
            reply = tool_messages[-1].content
        else:
            reply = "I'm here to help with your interview prep!"
        # Update session
        user_state["messages"] = result["messages"]
        session_store[user_id] = user_state
        return reply
    except Exception as e:
        logger.error(f"Agent error: {e}")
        return "Sorry, something went wrong. Let's try again!"

The run_interview function passes the user_id and user_message to the LangGraph agent while maintaining conversation state in a session_store dictionary. It retrieves or initializes the user’s session based on the user_id, adds a system message for new users to trigger a welcome message, and appends the user_message to the conversation history. It then feeds this history to the agent’s graph.stream method, which uses the ReAct (reason, then act) pattern to select tools and generate a response. Finally, it extracts the agent’s reply, updates the session under the user_id, and returns the reply, falling back to an error message if anything goes wrong.

Your system is now ready to interact with users and create realistic mock interviews. But before you connect it to WhatsApp, it's best to test the backend independently.

Running the agent locally for testing

Create a main block that provides a command-line interface where you can type messages, interact with the LangGraph agent, and receive responses.

if __name__ == "__main__":
    print("🎯 Mock Interview Agent - Local Test Mode")
    print("Type 'exit' to quit, 'reset' to start over\n")
    session_store = {}
    user_id = "test_user"
    while True:
        try:
            user_input = input("You: ").strip()
            if user_input.lower() in ["exit", "quit"]:
                print("👋 Good luck with your interviews!")
                break
            if user_input.lower() == "reset":
                session_store.clear()
                print("Session reset. Starting fresh!\n")
                continue
            response = run_interview(user_id, user_input, session_store)
            print(f"Coach: {response}\n")
        except KeyboardInterrupt:
            print("\n👋 Goodbye!")
            break
        except Exception as e:
            print(f"Error: {e}\n")

The above code simulates how a real user using WhatsApp would interact with the agent. It initializes an empty session_store dictionary to track conversation history and sets a user_id as test_user. It then creates a while True loop that prompts for input, processes it via run_interview, and prints the response.

Proceed to test the agent by running the following command on the terminal:

python interview_graph.py

You should see an input prompt to start interacting with the agent, as shown below.

Input Prompt on the terminal that shows the agent is ready to interact with the user

Send a message to start the conversation. For a Python Developer role, here are the results.

Results of testing the Mock interview agent on the terminal

The screenshot does not show the full interaction, but as you can see, the agent is doing an incredible job. When you are satisfied with the results, the next step is to connect the agent to WhatsApp.

Setting Up the WhatsApp Flask Server for Agent Integration

To connect the agent to WhatsApp, you will need a Flask server. Start by creating a new file named app.py inside your mock-interview folder. All the code under this section will go into this file.

Importing the Necessary Libraries

Start by importing the following Twilio and Flask classes and objects. Also, import the run_interview function you created.

from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
from interview_graph import run_interview

Flask and request create the web server and handle incoming HTTP requests. MessagingResponse from twilio.twiml.messaging_response formats replies for WhatsApp. Finally, run_interview processes each incoming message using your agent.

Initializing the Flask App

After the imports, proceed to initialize a Flask application.

app = Flask(__name__)

This creates a basic Flask application. The __name__ variable tells Flask where to find the app’s files. Also, initialize a session store that will hold user conversation states, mapping each user’s WhatsApp number to their session data.

user_sessions = {}

This will ensure that when multiple people message the Twilio WhatsApp number, their interactions don’t get mixed up.
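To make that isolation concrete, here is an illustrative sketch of what user_sessions might hold after two people have messaged the bot (the phone numbers are placeholders, and the message objects come from LangGraph):

# Illustrative shape of user_sessions after two conversations:
# {
#     "whatsapp:+15551234567": {"messages": [...], "is_new_user": False},
#     "whatsapp:+15557654321": {"messages": [...], "is_new_user": False},
# }
# The keys are the senders' WhatsApp numbers (Twilio's "From" field),
# so each conversation history stays separate.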

Creating the Webhook Endpoint

After this, create a webhook that Twilio calls when a WhatsApp message arrives.

@app.route('/webhook', methods=['POST'])
def webhook():
    """
    Webhook endpoint for Twilio WhatsApp messages.
    Processes incoming messages and returns appropriate responses using the agent.
    """
    try:
        # Get incoming message details
        incoming_msg = request.values.get('Body', '').strip()
        sender = request.values.get('From', '')
        # Initialize response
        resp = MessagingResponse()
        # Ensure sender is present
        if not sender or not incoming_msg:
            resp.message("Sorry, I couldn't process your message. Please try again.")
            return str(resp)
        # Use run_interview to process the message and get a response
        response = run_interview(sender, incoming_msg, user_sessions)
        resp.message(response)
        return str(resp)
    except Exception:
        # Return a generic error message to the user (no system details)
        resp = MessagingResponse()
        resp.message("Sorry, something went wrong. Please try again later.")
        return str(resp)

The code above serves as the endpoint for Twilio WhatsApp messages, extracting the message text and sender’s WhatsApp number from the request. It then processes them with the run_interview function from interview_graph.py and returns a TwiML-formatted reply via MessagingResponse. The error handling ensures there is a friendly fallback response if the input is invalid or an exception occurs.
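If you'd like to exercise this endpoint before involving Twilio at all, you can POST the same form fields Twilio sends (Body and From) from a short script. A minimal sketch using only the standard library, assuming the server from the next section is running on port 5000 and using a placeholder phone number:

# simulate_message.py: post a fake incoming WhatsApp message to the webhook
from urllib.parse import urlencode
from urllib.request import urlopen

data = urlencode({
    "Body": "Hello",                  # the message text Twilio would forward
    "From": "whatsapp:+15551234567",  # placeholder sender number
}).encode()

# urlopen sends a form-encoded POST when a data payload is provided
with urlopen("http://localhost:5000/webhook", data=data) as resp:
    print(resp.read().decode())  # the TwiML reply your server generated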

Running the Server

Create a main block that will run the server when the app.py file is executed.

if __name__ == '__main__':
    # Run the Flask app with debug mode enabled
    app.run(host='0.0.0.0', port=5000, debug=True, use_reloader=True)

The main block runs the Flask server locally on all network interfaces (host='0.0.0.0') at port 5000 with app.run, enabling the webhook to receive WhatsApp messages for the mock interview agent. Debug mode and the reloader are convenient while developing, but you should disable them in production.

Exposing the Flask Server to the Internet

Now your agent and server are ready. But you need to expose the local server to the internet so the Twilio WhatsApp API can send messages to it. To achieve this, proceed to the Ngrok website and sign up for free. Then, download the installer that matches your computer and follow the given instructions to set up Ngrok.

After you finish setting up Ngrok, you are ready to test your agent via WhatsApp.

Running and Testing the Agent Via WhatsApp

To test your application, proceed to the terminal and run the following command.

python app.py

This will start your development server as shown below.

A flask development server starting on the terminal

This shows the server is active and listening for requests on port 5000 locally. Now, run the Ngrok application you downloaded and enter the following command in its terminal to create a secure tunnel to your Flask server:

ngrok http 5000

Ngrok should display a public URL in the terminal, indicating the tunnel is active and forwarding requests to http://localhost:5000, as shown below.

Ngrok running on the terminal

Now, copy the public URL shown. In this case, it is https://bab1-102-208-183-99.ngrok-free.app. Proceed to the Twilio WhatsApp Sandbox page and click Sandbox settings. Then, under Sandbox Configuration > When a message comes in, paste Ngrok’s public URL and append /webhook so that it looks like this: https://bab1-102-208-183-99.ngrok-free.app/webhook. Save the configuration.

Setting the webhook URL in Twilio sandbox settings

This tells Twilio where to send incoming WhatsApp messages so that they can be forwarded to the Flask server by Ngrok. You are now ready to chat with your agent on WhatsApp. Proceed to WhatsApp and try interviewing for a role of your choice. Here are the sample results for interviewing for the machine learning engineering role.

Mock interview chat interface
Results of interacting with the mock interview agent

Text summarizing the performance review for a mid-level ML engineer, highlighting strengths and areas to improve.

From the chats, you can see the mock interview agent handles the interview flawlessly. Try it with more questions to see how it holds up.

Conclusion

In this tutorial, you created an autonomous mock interview agent. You now have the foundational skills to build agents using LangGraph and serve them over WhatsApp using the Twilio WhatsApp API. But this is not the end of the road; in the world of AI, there are many possibilities for what you can create. Use what you have learned to challenge yourself and build more amazing AI agents.

Happy Coding!

Denis works as a machine learning engineer who enjoys writing guides to help other developers. He has a bachelor's in computer science. He loves hiking and exploring the world.