How to Build a Smart Inventory Chatbot on WhatsApp with LangChain, LangGraph, OpenAI, and Flask

September 17, 2025
Written by
Ezzeddin Abdullah
Contributor
Opinions expressed by Twilio contributors are their own

In an earlier post, we built an inventory chatbot that relied on keyword matching for products. We didn't integrate AI into it or make it smart enough to understand what the user actually wants.

In this post, you're going to learn how to build a smarter solution: a chatbot, or AI agent, that understands the context of a conversation and takes actions accordingly.

This WhatsApp-based AI agent should be able to retrieve product details, tell whether a product is in or out of stock, list the available products, and finally place an order for the desired product.

In this tutorial, you will build this service using Flask, Twilio's WhatsApp messaging API, Pyngrok, LangChain, LangGraph, and SQLAlchemy.

You'll use Twilio's API to access the WhatsApp messaging product so that clients can send messages via WhatsApp and talk to the chatbot. The chatbot is built on a Flask backend, which is exposed to the internet with Pyngrok, a Python wrapper for ngrok that makes your Flask localhost publicly reachable. The chatbot fetches data from a PostgreSQL database and updates inventory data in that database using the SQLAlchemy ORM.

At the end of this tutorial, you'll be able to create a chatbot like so:

Chat conversation between a customer and a support agent inquiring about various products and orders.

The app is straightforward. You have products in your inventory. When a client orders a product, the chatbot either places that order for them or says it's not available. The chatbot knows which products you have in the inventory and which you don't.

Prerequisites

To follow along with this tutorial, you need to have the following:

  • Python 3.6+ installed
  • PostgreSQL installed
  • A Twilio account set up. If you're new, create a free account here.
  • A smartphone with WhatsApp installed
  • A basic knowledge of Flask
  • A basic knowledge of what ORM is. If you have no idea what ORM is, consult this wiki page!
  • A basic knowledge of AI agents
  • A basic knowledge of LangChain

Creating your Smart Inventory

Start by setting up your database. Say you have two products in your inventory:

  • 2 collapsible umbrellas
  • 1 hybrid smartwatch

To be able to create these two records in the database, you should define a schema and then create a table based on that schema.

Initiating the Python Project and the PostgreSQL Database

Download and set up PostgreSQL if you haven’t already. Create a new database — for this project, we’ll call it mydb:

createdb mydb

Create a new directory called smart_inventory_chatbot and initiate a virtual environment inside:

mkdir smart_inventory_chatbot
cd smart_inventory_chatbot
python3 -m venv venv
. venv/bin/activate  # or venv\Scripts\activate if on Windows
pip install --upgrade pip

For now, you only need to install these two libraries:

pip install python-dotenv sqlalchemy psycopg2-binary

The psycopg2-binary library is a binary version of psycopg2, which is a Python driver for the PostgreSQL database.

Setting up the SQLAlchemy Model

For this project, you will use the SQLAlchemy ORM to access a PostgreSQL database engine and interact with it. Create a new file called models.py. Model the data and create the schema in the models.py file:

# Standard imports
import os

# Third-party imports
from dotenv import load_dotenv
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.engine import URL
from sqlalchemy.orm import declarative_base, sessionmaker

load_dotenv()

DB_USERNAME = os.getenv("DB_USERNAME")
DB_PASSWORD = os.getenv("DB_PASSWORD")
DB_HOST = os.getenv("DB_HOST")
DB_PORT = os.getenv("DB_PORT")
DB_NAME = os.getenv("DB_NAME")

# Create a URL object for the database connection
url = URL.create(
    drivername="postgresql",
    username=DB_USERNAME,
    password=DB_PASSWORD,
    host=DB_HOST,
    database=DB_NAME,
    port=DB_PORT
)

engine = create_engine(url)
Session = sessionmaker(bind=engine)
session = Session()

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    amount = Column(Integer)

Base.metadata.create_all(engine)

To use the environment variables, create a .env file and fill it like so, replacing the placeholder texts with your database credentials:

DB_USERNAME=<your-db-username>
DB_PASSWORD=<your-db-password>
DB_HOST=<your-db-host>
DB_PORT=<your-db-port>
DB_NAME=<your-db-name>

As you can see, the url object defines the database URL with the following options:

  • The drivername as the database engine
  • The username and password of your database
  • The host in this example is localhost if you’re running it locally
  • The database is the name of the database you need to access; in this tutorial, that's the mydb database you created earlier
  • The port, which you can set to 5432, the default port for PostgreSQL

You can choose your own specifications depending on your case.

You then define the engine instance which is the starting point of your application. That instance is passed into the sessionmaker function to create a Session class which is instantiated to create a session object.

The session object manages the persistence layer and does the ORM operations.

The Product class inherits from the Base class, which declares the mapped classes into tables. Inside that Product table, there is a definition of its name and its attributes.

Similarly, if you want to define another table, you can create a class that inherits from the base class and declare your attributes.

The Base.metadata.create_all() function creates all the tables defined in the mapped classes.
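As mentioned, adding another table is just another mapped class. Here is a minimal self-contained sketch; it uses an in-memory SQLite engine so it runs without PostgreSQL, and the Order model is hypothetical, not part of the tutorial app:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    amount = Column(Integer)

# A second table: just another class inheriting from Base
class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    product_name = Column(String)
    tracking_id = Column(String)

engine = create_engine("sqlite://")  # in-memory database for the sketch
Base.metadata.create_all(engine)     # creates both tables at once

print(sorted(Base.metadata.tables))  # ['orders', 'products']
```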

Inserting Data in SQLAlchemy

Now, insert records into the products table. To do that, create a file called insert_db.py and add the following code to the file:

from models import Product, session

umbrella = Product(name="Collapsible Umbrella", amount=2)
smartwatch = Product(name="Hybrid Smartwatch", amount=1)

session.add_all([umbrella, smartwatch])
session.commit()

The umbrella and smartwatch objects are products in the database. The former amounts to two in the inventory, while the latter is just one piece.

In the code above, you add them into the database with the session.add_all() method and then commit it into the PostgreSQL database with session.commit().

To run this script, make sure you have the Python virtual environment activated, then run the script with this command:

python insert_db.py

To check if everything worked correctly, you can access your PostgreSQL database through a SQL client or a PostgreSQL client like psql.

Alternatively, you can of course use any database administration tool like pgAdmin, or a multi-platform database tool like DBeaver.

Here is how you can use psql to explore the data in your database.

Running psql -U newuser -d mydb lets you use the psql client to connect to the database named mydb as the user newuser. You can replace newuser with your own username.

Take a look at the psql documentation to learn more about the commands you can run.

Once you are logged in and connected to your database, you can run the following query to list the products in your inventory:

select * from products;

Below is what the psql log statements might look like if you're running them on a Linux machine:

$ sudo -u postgres psql
[sudo] password for <username>:     
psql (10.19 (<operating_system>))
Type "help" for help.

postgres=# \c mydb
You are now connected to database "mydb" as user "postgres".

mydb=# select * from products;
 id |         name         | amount 
----+----------------------+--------
  1 | Collapsible Umbrella |      2
  2 | Hybrid Smartwatch    |      1
(2 rows)

As you can see, both records are now created. Once you are finished, you can enter \q and then press the Enter key to quit psql.
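If you'd rather verify from Python than from psql, the same query can be run through the ORM. Here is a self-contained sketch (SQLite stands in for PostgreSQL so it runs anywhere; in the real project you would just reuse the session from models.py):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    amount = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Same inserts as insert_db.py
session.add_all([
    Product(name="Collapsible Umbrella", amount=2),
    Product(name="Hybrid Smartwatch", amount=1),
])
session.commit()

# The ORM equivalent of "select * from products;"
for p in session.query(Product).order_by(Product.id):
    print(p.id, p.name, p.amount)
# 1 Collapsible Umbrella 2
# 2 Hybrid Smartwatch 1
```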

Creating the Chatbot

The database is now set up and the tiny inventory is already there. Now you need to track the orders for this simple data. Let's configure the Twilio WhatsApp API first.

Configuring the Twilio Sandbox for WhatsApp

Assuming you've already set up a new Twilio account, go to the Twilio Console and choose the Messaging tab on the left panel. Under Try it out, choose Send a WhatsApp message. You'll be redirected to the Sandbox tab by default, and you’ll see a phone number "+14155238886" with a code to join next to it on the left side of the page. You’ll also see a QR code to scan with your phone on the right side of that page.

To enable the Twilio testing environment, send a WhatsApp message with this code's text to the displayed phone number. You can click on the hyperlink to be taken directly to the WhatsApp chat if you are using the web version. Otherwise, you can scan the QR code with your phone.

Now, the Twilio sandbox is set up, and it's configured so that you can try out your application after setting up the backend.

Setting up the Flask Backend

To set up Flask, navigate to the project directory and create a new file called main.py. Inside that file, add this very minimal Flask backend:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    return jsonify({
        'status': 'success',
        'message': 'Welcome to Smart Inventory Chatbot API'
    })

if __name__ == '__main__':
    app.run(debug=True, port=8000)

To run this backend, you need to install flask:

pip install flask

To run the app:

python main.py

Open your browser to localhost:8000. The result you should see is the success JSON response:

{
    "status": "success",
    "message": "Welcome to Smart Inventory Chatbot API"
}
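You can also exercise the route without a browser, using Flask's built-in test client. A sketch of the same minimal app:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    return jsonify({
        "status": "success",
        "message": "Welcome to Smart Inventory Chatbot API",
    })

# The test client calls the route in-process, no server needed
with app.test_client() as client:
    response = client.get("/")
    print(response.status_code)  # 200
    print(response.get_json())
```

This is also how you could later unit-test the /message endpoint without involving Twilio at all.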

To connect Twilio with your backend, you need to host your app on a public server. An easy way to do that is to use Ngrok.

If you’re new to Ngrok, you can consult this blog post and create a new account.

To use it, open a new terminal window, activate the same virtual environment you created earlier, and then install pyngrok, which is a Python wrapper for Ngrok:

pip install pyngrok

Leave the Flask app running on port 8000, and run this ngrok command:

ngrok http 8000

This will create a tunnel between your localhost port 8000 and a public domain created on the ngrok.io site. When a client visits your Ngrok forwarding URL, the Ngrok service will automatically forward that request to your backend.

Go to the URL (preferably, with the https prefix) as shown below:

The Ngrok details including the forwarding URL that redirects the localhost to the global URL

After you click on the forwarding URL, Ngrok will redirect you to your Flask app’s index endpoint.

Configuring the Twilio Webhook

To be able to receive a reply when you message the Twilio WhatsApp sandbox number, you need to configure a webhook known to Twilio.

To do that, head over to the Twilio Console and choose the Messaging tab on the left panel. Under the Try it out tab, click on Send a WhatsApp message. Next to the Sandbox tab, choose the Sandbox settings tab.

Copy the ngrok.io forwarding URL and append /message. Paste it into the box next to WHEN A MESSAGE COMES IN:

Twilio Sandbox configuration screen for WhatsApp messaging with URL fields and save button

The full URL will be something like: https://d8c1-197-36-101-223.ngrok.io/message.

Note: The /message is the endpoint we will set up in the Flask application. This endpoint will have the chatbot logic.

Once you finish, click the Save button.
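Twilio delivers incoming WhatsApp messages to your webhook as a form-encoded POST; the fields you'll care about in this tutorial are Body (the message text) and From (the sender). Here is a stub sketch of the endpoint, before any chatbot logic, just to show the parsing — the echo behavior is illustrative only:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stub /message endpoint: echoes back the parsed webhook fields.
# The real chatbot logic is added later in the tutorial.
@app.route("/message", methods=["POST"])
def message():
    body = request.form.get("Body", "")
    sender = request.form.get("From", "")
    return jsonify({"received": body, "from": sender})

# Simulate Twilio's webhook POST with Flask's test client
with app.test_client() as client:
    resp = client.post("/message", data={
        "Body": "What products do you have?",
        "From": "whatsapp:+15551234567",
    })
    print(resp.get_json())
```

Note the "whatsapp:" prefix on the phone number — Twilio uses it for all WhatsApp addresses, and you'll see it again in the send_message() helper.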

Authenticating your Twilio Account

Earlier, you created the index route to just test that your backend is working with Ngrok. From this point onwards, you won’t need it anymore.

Before setting up the /message endpoint, authenticate your Twilio account to be able to use the Twilio client. Open the main.py file and add the following highlighted lines of code:

import os
from dotenv import load_dotenv
from twilio.rest import Client
from flask import Flask

load_dotenv()

account_sid = os.environ["TWILIO_ACCOUNT_SID"]
auth_token = os.environ["TWILIO_AUTH_TOKEN"]
client = Client(account_sid, auth_token)

app = Flask(__name__)

In your Twilio console, find your Account SID and Auth Token. In the next step, you will set these as environment variables.

Screenshot of Twilio account info with Account SID, Auth Token, and Twilio phone number partially obscured.

You will use dotenv to read and set the environment variables (it's already installed from earlier) and twilio to create the Twilio client. Install the twilio package in your virtual environment with the following:

pip install twilio

To use the environment variables, append the Twilio credentials to the .env file like so, replacing the placeholder text with your values from the Console:

TWILIO_ACCOUNT_SID=<your-twilio-account-sid>
TWILIO_AUTH_TOKEN=<your-twilio-auth-token>

Now, the app is ready to use Twilio capabilities to create the chatbot through WhatsApp.

Adding the Chatbot Logic

Preparing the Tools

Create a new file called tools.py and add the following imports:

# Standard imports
import uuid
from typing import List, Dict, Any

# Third-party imports
from langchain.tools import tool

# Local imports
from models import Product, session

Then define four tools that the AI agent should know. Here is the first tool to list all available products in the inventory:

# Define LangChain tools for interacting with the database
@tool
def list_available_products() -> List[Dict[str, Any]]:
	"""
	List all available products in the inventory with their quantities.
	Returns a list of products with their details.
	"""
	products = session.query(Product).all()
	return [{"id": p.id, "name": p.name, "amount": p.amount} for p in products]

And here is the second one to be more specific and get details about a certain product:

@tool
def get_product_details(product_name: str) -> Dict[str, Any]:
    """
    Get details about a specific product by name.
    Args:
        product_name: The name of the product to look up
    Returns:
        A dictionary with product details or a message if not found
    """
    product = session.query(Product).filter(Product.name.ilike(f"%{product_name}%")).first()
    if product:
        return {"id": product.id, "name": product.name, "amount": product.amount}
    return {"error": f"Product '{product_name}' not found in inventory"}
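The ilike filter here does a case-insensitive substring match, which is why a customer typing "umbrella" finds "Collapsible Umbrella". A self-contained sketch of that matching behavior (SQLite stands in for PostgreSQL; SQLAlchemy emulates ilike on SQLite):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    amount = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Product(name="Collapsible Umbrella", amount=2))
session.commit()

# Case-insensitive substring match, as used by the tools
match = session.query(Product).filter(Product.name.ilike("%umbrella%")).first()
print(match.name)  # Collapsible Umbrella

# An unknown product comes back as None, which the tool turns into an error dict
missing = session.query(Product).filter(Product.name.ilike("%rolex%")).first()
print(missing)     # None
```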

And here is the third one to place an actual order and update the database accordingly:

@tool
def place_order(product_name: str) -> Dict[str, Any]:
    """
    Place an order for a product and update inventory.
    Args:
        product_name: The name of the product to order
    Returns:
        A dictionary with order details or error message
    """
    product = session.query(Product).filter(Product.name.ilike(f"%{product_name}%")).first()

    if not product:
        return {"status": "error", "message": f"Product '{product_name}' not found in inventory"}

    if product.amount < 1:
        return {"status": "error", "message": f"Sorry, {product.name} is out of stock"}

    # Generate tracking ID and update inventory
    tracking_id = str(uuid.uuid4())
    product.amount -= 1
    session.commit()

    return {
        "status": "success",
        "message": f"Order placed successfully for {product.name}",
        "tracking_id": tracking_id,
        "product": product.name,
        "remaining_stock": product.amount
    }

And here is the last one to check if a certain product is in stock or out of stock:

@tool
def check_stock(product_name: str) -> Dict[str, Any]:
    """
    Check if a product is in stock and how many units are available.
    Args:
        product_name: The name of the product to check
    Returns:
        A dictionary with stock information
    """
    product = session.query(Product).filter(Product.name.ilike(f"%{product_name}%")).first()

    if not product:
        return {"status": "error", "message": f"Product '{product_name}' not found in inventory"}

    return {
        "status": "success",
        "product": product.name,
        "in_stock": product.amount > 0,
        "amount": product.amount
    }

Next, you’re going to use these tools in the agent.

Preparing the Agent

After defining the tools, you need to use them and let the LLM decide which tool to call. Create a new file called agent.py and start with the imports:

# Standard imports
import os
from typing import Dict, Annotated, Any, TypedDict

# Third-party imports
from dotenv import load_dotenv

## LangChain imports
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage
from langchain.schema import SystemMessage, HumanMessage

## LangGraph imports
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver

# Local imports
from tools import (
	list_available_products,
	get_product_details,
	place_order,
	check_stock,
)

Import the OpenAI API key:

load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")

after defining the environment variable for it in the .env file:

OPENAI_API_KEY=<your-openai-api-key>

Define the state of LangGraph:

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    session_id: str

Then get an instance of the LLM with the following function:

def get_llm():
    return ChatOpenAI(
        model="gpt-4o-mini",
        temperature=0.5,
        api_key=openai_api_key
    )

Create the system prompt with the following function:

def get_system_prompt():
    return """
    You MUST follow these rules:
    1. ALWAYS use tools for inventory-related queries
    2. For "what products do you have?", IMMEDIATELY use list_available_products
    3. Never ask for clarification on product listing requests

    You are a helpful inventory assistant for a smart store. You can help customers with:
    1. Checking product availability
    2. Providing product details
    3. Placing orders for products
    4. Checking stock levels

    When a customer wants to:
    - check available products, make sure to use the list_available_products tool.
    - know details about a certain product, make sure to use the get_product_details tool.
    - check if a certain product exists in the inventory, make sure to use the check_stock tool.
    - place an order, make sure to use the place_order tool.

    Always be polite and helpful."""

Define the first node in the graph:

def assistant_node(state: AgentState) -> Dict[str, Any]:
    """Assistant node that decides what to do next based
    on the conversation history."""

    # Get the messages from the state
    messages = state["messages"]

    # Create the prompt
    system_prompt = get_system_prompt()
    all_messages = [SystemMessage(content=system_prompt)] + messages

    llm = get_llm()

    # Define the available tools
    tools = [
        list_available_products,
        get_product_details,
        place_order,
        check_stock
    ]

    # Configure the LLM to use tools
    llm_with_tools = llm.bind_tools(tools)

    # Get the LLM response
    response = llm_with_tools.invoke(all_messages)

    # Return the response; the add_messages reducer appends it to the state,
    # and the conditional edge (route_tools) decides whether the tools
    # executor runs next or the graph ends
    return {"messages": [response], "session_id": state["session_id"]}

Create the second node (the tools executor):

tools = [
	list_available_products,
	get_product_details,
	place_order,
	check_stock
]

tools_executor_node = ToolNode(tools=tools)

Create a function to be used in the conditional edge:

def route_tools(state: AgentState):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")

    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools_executor"
    return END
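Since route_tools only inspects the tool_calls attribute of the last message, you can sanity-check the branching with plain stub objects. A sketch — the StubMessage class is illustrative, not a real LangChain class, and END is hardcoded to LangGraph's sentinel string to keep the sketch dependency-free:

```python
# Stub that mimics the one attribute route_tools inspects
class StubMessage:
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

END = "__end__"  # LangGraph's END constant is this sentinel string

def route_tools(state):
    messages = state.get("messages", [])
    if not messages:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    ai_message = messages[-1]
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools_executor"
    return END

# A message carrying a tool call routes to the executor...
print(route_tools({"messages": [StubMessage(tool_calls=[{"name": "check_stock"}])]}))  # tools_executor

# ...while a plain text answer ends the graph
print(route_tools({"messages": [StubMessage()]}))  # __end__
```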

Create the graph:

def create_graph():
    # Define the workflow graph
    workflow = StateGraph(AgentState)

    # Add the nodes
    workflow.add_node("assistant", assistant_node)
    workflow.add_node("tools_executor", tools_executor_node)

    # Add the edges
    workflow.add_conditional_edges(
        "assistant",
        route_tools,
        {"tools_executor": "tools_executor", END: END},
    )
    workflow.add_edge("tools_executor", "assistant")

    # Set the entry point
    workflow.set_entry_point("assistant")

    memory = MemorySaver()
    # Compile the workflow
    return workflow.compile(checkpointer=memory)

And finally, instantiate it:

inventory_agent = create_graph()

This is the instance you’ll import in the main.py script.

Putting Things Together

Now, you can use the inventory agent instance in the main Flask script. Update the imports at the top of the file as follows:

# Standard imports
import os
import uuid

# Third-party imports
from dotenv import load_dotenv
from twilio.rest import Client
from flask import Flask, jsonify, request
from langchain.schema import HumanMessage
from langchain_core.messages import AIMessage

# Local imports
from models import Product, session
from agent import inventory_agent

Now, in main.py, below the index() endpoint, add the following logic:

def send_message(body_text):
    client.messages.create(
        from_=f"whatsapp:{os.getenv('TWILIO_PHONE_NUMBER')}",
        body=body_text,
        to=f"whatsapp:{os.getenv('MY_PHONE_NUMBER')}"
    )

Don’t forget to define your phone numbers in the .env file:

MY_PHONE_NUMBER=+<your-country-code><your-number>
TWILIO_PHONE_NUMBER=+14155238886

And finally, you can create the reply() view method:

@app.route("/message", methods=["POST"])
def reply():
    print("Sending a WA message...")
    # Get the message body from the request
    body = request.form.get("Body", "")
    user_phone_number = request.form.get("From", "")

    # Use a unique session ID for each user, e.g., their phone number
    session_id = user_phone_number

    message = body

    try:
        initial_state = {
            "messages": [HumanMessage(content=message)],
            "session_id": session_id,
        }
        # Process the message with the LangGraph agent
        agent_response = inventory_agent.invoke(
            initial_state,
            config={"configurable": {"thread_id": session_id}}
        )
        print("Agent response:", agent_response)

        # Extract the last AIMessage content from the response,
        # with a fallback in case no AI message is found
        response_message = "Sorry, I couldn't process your request."
        for msg in reversed(agent_response.get("messages", [])):
            if isinstance(msg, AIMessage) and msg.content:
                response_message = msg.content
                break  # Found the last AI message, exit loop

        # Send WhatsApp message with the agent's response
        send_message(response_message)
        return jsonify({"status": "success", "message": response_message})

    except Exception as e:
        error_message = "Sorry, I encountered an error"
        send_message(error_message)
        print(f"Error: {e}")
        return jsonify({"status": "error", "message": error_message})

The send_message() function takes the text message that you want to send from the Twilio WhatsApp sandbox account to your phone number. Change the placeholder text in the to option value to your phone number in E.164 format (country code followed by your phone number).
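E.164 format means a leading +, a non-zero country code digit, and at most 15 digits total. Here is a quick sanity-check sketch; the regex is a simplification for illustration, not full E.164 validation:

```python
import re

# Simplified E.164 shape: "+", a non-zero first digit, up to 14 more digits
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_e164(number: str) -> bool:
    return bool(E164.match(number))

print(looks_like_e164("+14155238886"))  # True  (the Twilio sandbox number)
print(looks_like_e164("14155238886"))   # False (missing the +)
print(looks_like_e164("+1 415 523"))    # False (spaces are not allowed)
```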

The /message endpoint handles incoming WhatsApp messages and processes them through a LangGraph-based inventory agent.

Here's a breakdown of how it works:

  1. Request handling:
    • The method is a Flask route handler for POST requests to the "/message" endpoint
    • It extracts the message body and the sender's phone number from the Twilio webhook request
  2. Session management:
    • Uses the user's phone number as a unique session identifier
    • Uses the message body as-is, without any preprocessing
  3. Agent invocation:
    • Creates an initial state with the user's message wrapped in a HumanMessage object
    • Invokes the LangGraph agent with this state and configures it to use the session ID as a thread ID which is used in the MemorySaver
    • This allows the agent to maintain conversation history for each unique user
  4. Response extraction:
    • Searches through the agent's response messages in reverse order
    • Finds the last AIMessage with content, which represents the agent's final response
  5. Response delivery:
    • Sends the agent's response back to the user via WhatsApp using the send_message function
    • Returns a JSON response indicating success
  6. Error handling:
    • Catches any exceptions that occur during processing
    • Sends a generic error message to the user
    • Logs the actual error for debugging
    • Returns a JSON response indicating an error occurred

This method effectively creates a bridge between the WhatsApp messaging platform and the LangGraph-powered inventory agent, maintaining separate conversation sessions for each user based on their phone number.
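The response-extraction step can be exercised on its own with stub messages. A sketch — StubAI and StubTool mimic just the attributes the loop checks; they are not real LangChain classes, and the fallback string is a hypothetical default:

```python
class StubAI:
    def __init__(self, content):
        self.content = content

class StubTool:
    def __init__(self, content):
        self.content = content

def last_ai_content(messages, default="Sorry, I couldn't process that."):
    # Walk backwards and return the newest AI message that has text
    for msg in reversed(messages):
        if isinstance(msg, StubAI) and msg.content:
            return msg.content
    return default

# A typical tool-calling turn: empty AI tool-call message, tool result,
# then the final AI answer the user should receive
history = [
    StubAI(""),                      # tool-call message, no text content
    StubTool('{"in_stock": true}'),  # tool result
    StubAI("Yes, we have 2 collapsible umbrellas in stock."),
]
print(last_ai_content(history))  # Yes, we have 2 collapsible umbrellas in stock.
```

Walking in reverse matters because a single turn can contain several messages, and only the last AIMessage with text is what the user should see on WhatsApp.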

Testing the Final App

Earlier, you defined two products in the inventory:

  • 2 collapsible umbrellas
  • 1 hybrid smartwatch

When you first send a greeting to the app, it replies with a friendly welcome: “Hello! How can I assist you today?”

If you ask, “What products do you have?”, the chatbot lists the available inventory. Asking, “How many collapsible umbrellas are in stock?”, gets the correct reply of 2 umbrellas. On the other hand, if you try something outside the catalog, like “Do you have Rolex watches?”, the chatbot apologizes and explains that the item isn’t available.

For valid items, the chatbot goes a step further. If you ask about hybrid smartwatches, it replies that there is 1 in stock and asks whether you’d like to order it. Confirming with “Yes please” places the order and removes the item from inventory. If you then try to order another one, the chatbot apologizes and informs you there are none left.

This flow shows how the chatbot manages conversations: it can greet users, share product availability, confirm orders, and gracefully handle requests for out-of-stock or unknown items. You can continue experimenting with your own questions to explore just how flexible the system is.

Finally, if you’re deploying to a production environment, you should take care of the following:

  1. setting up a Twilio phone number instead of using the default Twilio number used for testing
  2. using the live credentials of Twilio instead of the ones used for testing
  3. hosting on a VPS instead of tunneling with Ngrok
  4. setting debug to False in the Flask main app

Ezz is a data platform engineer with expertise in building AI-powered chatbots. He has helped clients across a range of industries, including nutrition, to develop customized software solutions. Check out his website for more.