Build a WhatsApp Bot with Sentiment Analysis using Python and Twilio

February 28, 2020
Written by
Enrique Bruzual
Contributor
Opinions expressed by Twilio contributors are their own


As chatbots become more popular in the marketplace, helping companies manage customer interactions, it is important to provide a high-quality customer experience. Chatbot technology is still young, and much of the focus has been on basic question-and-answer interactions.

Although some chatbots have been trained to be incredibly accurate in their responses, most lack the ability to monitor human emotion, and perhaps for good reason.

Emotions vary not only among individuals but also within a person from moment to moment. Detecting an emotional response requires multisensory awareness, something most of us do instinctively.

The only way we are able to capture emotions in a chatbot is through sentiment analysis of the chat, which poses its own set of challenges. If we are able to detect and analyze each user response to obtain an emotional index, we could then use this information to help us monitor a customer's level of satisfaction.

whatsapp bot demo

With sentiment analysis tools we can try to detect the emotional level of an individual on a per-sentence basis. As each user sentence is captured during a chat session, it can be evaluated for negative sentiment.

Normally the analyzer vectorizes the sentence and returns a floating point value. Depending on the model, this index value can then be used to trigger a response.

Based on the index range we can extract the emotional level of each response. As we evaluate each sentence, we can program our chatbot to also take into consideration the emotional level of the user input.
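As a minimal sketch of this idea, a sentiment score in the [-1.0, 1.0] range could be bucketed into coarse emotional levels. The thresholds and labels below are illustrative assumptions, not values from any particular library:

```python
def emotion_level(score):
    """Map a sentiment score in [-1.0, 1.0] to a coarse emotional label.

    The cutoffs (-0.5 and 0.0) and the labels are illustrative choices,
    not defaults from TextBlob or any other library.
    """
    if score <= -0.5:
        return "frustrated"
    if score < 0.0:
        return "dissatisfied"
    return "neutral or positive"


print(emotion_level(-0.8))   # frustrated
print(emotion_level(-0.1))   # dissatisfied
print(emotion_level(0.4))    # neutral or positive
```

A real application would feed each analyzed sentence through a mapping like this and act on the resulting level.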

In this tutorial we will write a WhatsApp proof-of-concept bot using Twilio's WhatsApp API, the Twilio Python Helper Library, ChatterBot and TextBlob.

We will use ChatterBot to create a corpus file in JSON format that defines a custom-built, rule-based chatbot. TextBlob will provide sentiment analysis capabilities and other built-in tools to help us write our chatbot.

Project requirements

To get started, the following are needed:

  • Python 3 (this tutorial uses Python 3.7).
  • A free Twilio account.
  • A smartphone with an active phone number and WhatsApp installed.
  • ngrok, to expose our local web server to the internet.

Project setup

Please make sure you have the appropriate Python version installed on your system.

To create our virtual environment, follow the instructions below. We will be using Python's built-in virtual environment module, venv.

For Unix or Mac OS systems, open a terminal and enter the following commands to do the tasks described above:

$ mkdir twily
$ cd twily
$ python3.7 -m venv venv
$ source venv/bin/activate

On Windows, enter the following commands in a command prompt window:

$ md twily
$ cd twily
$ python -m venv venv
$ venv\Scripts\activate.bat

Our project will consist of six files: five Python scripts and one JSON file. They live at the root level of our project folder, twily. These files will be created and explained throughout the tutorial.

|-- twily
    `-- app.py
    `-- chatter_trainer.py
    `-- simplebot.py
    `-- stop_words.py
    `-- twily_classifier.py
    `-- twilybot.json

Installing the Python Dependencies

In addition to the tools needed to set up the project, we are also going to need to install a few Python dependencies.

  • Twilio Python Helper Library: comes with a number of tools to get started with Twilio APIs.
  • ChatterBot: a Python library making it easy to generate automated responses to a user’s input.
  • Flask: a flexible web framework supporting just about any type of app configuration.
  • TextBlob: a Python library for processing textual data.

Once you have created and activated your virtual environment, you can pip install your packages.

$ pip install twilio chatterbot chatterbot_corpus Flask

To install TextBlob, the documentation requires a couple of steps:

$ pip install textblob
$ python -m textblob.download_corpora

If you type pip list in your activated virtual environment, it should give you a list of all your installed libraries and dependencies.

ChatterBot

"ChatterBot uses a selection of machine learning algorithms to produce different types of responses. This makes it easy for developers to create chat bots and automate conversations with users." ~ ChatterBot documentation

We are going to use ChatterBot's training corpus as the dataset for our sentiment analysis robot. At this point the complexity and branching of the conversation is limited, but we will be able to write an adequate application to explore a user's emotional state during a chat session with a chatbot.

In ChatterBot the user needs to create a conversation and train the bot in order for the chatbot to answer properly. ChatterBot also allows us to export the trained data. We are going to train a bot and export the data into a JSON formatted file, which we'll then use for our custom bot. We won't be using ChatterBot beyond that point.

If you would like to download the corpus file and skip this section, please click here to download.

ChatterBot allows a chatbot to be trained using a list of strings where the list represents a conversation. For example, each list below could be considered a conversation with ChatterBot responding “Hello” to three different types of user inputs or questions.

  • [ "Hi there!", "Hello",]
  • [ "Greetings!",  "Hello",]
  • [ "What’s up!",  "Hello",]

Although ChatterBot is able to handle longer lists of training conversations, for our project we will use a simple “question/answer” format.

Start by creating a Python file named chatter_trainer.py in the root of our project folder, twily. In this file let's write our code.

Setting up the trainer

  • We first need to import the ChatterBot library and its trainers.
  • We create a ChatBot() instance by passing a chatbot name, and assign it to the chatbot variable.
  • We instantiate the appropriate trainer. We will use ListTrainer(), pass the chatbot instance as an argument, and assign it to the trainer variable.
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

chatbot = ChatBot("Twily")
trainer = ListTrainer(chatbot)

Training lists

Our three types of arbitrary conversations are documentation_topics, twilio_knowledge and classifier.

  • documentation_topics provides documentation links based on keywords found in the user's question
  • twilio_knowledge will answer predefined general questions
  • classifier has some keywords associated with a user frustration response; the bot will use these to try to de-escalate the user's frustration.
documentation_topics = [
("sdk", "https://www.twilio.com/docs/sms/whatsapp#sdks"),
("python helper library", "https://www.twilio.com/docs/libraries/python"),
("tutorials", "https://www.twilio.com/docs/sms/whatsapp/tutorial/send-and-receive-media-messages-twilio-api-whatsapp"),
("autopilot", "https://www.twilio.com/docs/autopilot/channels/whatsapp"),
("contact us", "https://support.twilio.com/hc/en-us")]

twilio_knowledge = [
("Twilio description", "Simply put, Twilio is a developer platform for communications.\
        Software teams use Twilio APIs to add capabilities like voice, video, and messaging \
        to their applications. This enables businesses to provide the right communications \
            experience for their customers."),
("Twilio email", "Sorry, you can submit a ticket at: https://www.twilio.com/console/support/tickets/create"),
("Twilio phone number", "We don't have a phone number for this type of account"),
("mailing address", "You can email our corporate headquarters at hello@craft.com"),
("chatterbot", "library making it easy to generate automated responses to a user's input, visit https://chatterbot.readthedocs.io/en/stable/"),
("textblob", "library for processing textual data, please visit https://textblob.readthedocs.io/en/dev/")
]

classifier = ["silly", "dumb", "stupid", "I don't think so", "I don't care",
                   "do you know anything", "not good", "omg",
                   "this is bad", "not what I want", "live help",
                   "get me a rep", "I need a real person"]

Training iterators

Below the training list we will write three list iterators, each with a pre-formatted question/answer. Each loop calls the trainer instance and the train() method, passing the name of the list as an argument. This will generate conversations using the information found in each list.

for topic, link in documentation_topics:
    trainer.train([
        f"{topic}",
        f"Sure, here is the {topic} link: {link}"
    ])

for topic, description in twilio_knowledge:
    trainer.train([
        f"{topic}",
        f"Ok, here is the {topic} information you are looking for: {description}"
    ])

for i in classifier:
    trainer.train([
        f"{i}",
        "I am sorry you feel that way, please ask the question again"
    ])

Corpus file export

Finally we export our trainer instance to a JSON file using ChatterBot's .export_for_training() method, which takes an argument in the form of the JSON filename.

trainer.export_for_training('twilybot.json')

To generate the JSON file, twilybot.json, all we have to do is run the program as python chatter_trainer.py, and the file will be saved in the twily folder. We can now put ChatterBot aside. We are going to keep this file in the same directory as the others.
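For reference, the exported file stores every trained conversation under a single "conversations" key, roughly as shown below. This is an abbreviated sketch; the exact contents depend on your training lists and ChatterBot version:

```json
{
    "conversations": [
        ["sdk", "Sure, here is the sdk link: https://www.twilio.com/docs/sms/whatsapp#sdks"],
        ["omg", "I am sorry you feel that way, please ask the question again"]
    ]
}
```

Each inner list is one question/answer pair, which is exactly the structure our custom bot will iterate over later.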

TextBlob

"TextBlob provides a simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more." ~ TextBlob documentation

This application will rely mostly on TextBlob to create a custom classifier, as well as to do sentiment analysis and build our rule-based chatbot.

TextBlob custom classifier

In this section we will write our custom classifier; the other TextBlob features will be explained as they are implemented throughout the tutorial.

For this portion we need to create our training data. The data can be written in the script or imported from a file. TextBlob supports a few file formats for this operation, but we will write the data in our script.

The data is a list of tuples, each containing a training string followed by "pos" or "neg", representing positive or negative sentiment. For example:

("phrase one", "pos")

Let's write our classifier module. Begin by creating a Python file named twily_classifier.py in our twily folder.

We will create a function in order to import it into other scripts as a module.

  • We import the Naive Bayes classifier from TextBlob.
  • Then we create a function called trainer(), which does not take any parameters at this point.
  • In it, we assign our training data list to the train variable. The training data listed below is truncated to minimize the length of this tutorial.
  • Finally, we return the NaiveBayesClassifier() constructor, passing the training data as an argument.
from textblob.classifiers import NaiveBayesClassifier


def trainer():
    """Trainer function for the Naive Bayes classifier"""

    train = [
        ("i am good", "pos"),
        ("fine thanks", "pos"),
        ("doing ok", "pos"),
        ("thank you", "pos"),
        ("i am crappy", "neg"),
        ("omg", "neg"),
        ("no thanks", "pos"),
        ("do you have information on the SDK", "pos"),
        ("not good", "neg"),
        ("you are awesome", "pos"),
        ("i don't know", "neg"),
        ("forget you", "neg"),
        ("it didn't answer my question", "neg")
        ]

    return NaiveBayesClassifier(train)

To test our classifier as a stand-alone script, we will write test code below our trainer() function.

  • We test our script by calling the trainer() function.
  • Then we get the probability distribution by calling the .prob_classify() method, passing the test string we want analyzed as an argument.
  • We can extract the negative value by calling the .prob() method and passing the "neg" label as an argument.
if __name__ == "__main__":
    user_input = "I didn't find this helpful"
    classy = trainer().prob_classify(user_input)
    print()
    print(f'String:  {user_input} ')
    print(f'---------{len(user_input)* "-"}+')
    print(f'Negative probability: {classy.prob("neg")}')
    print(f'Positive probability: {classy.prob("pos")}')

To run the test above simply run the file with python twily_classifier.py. Edit the user_input variable if you want to see how other sentences are classified.

NLTK stop word set

We are going to need a list of stop words. These are commonly used filler words that we want filtered out from the user input. Later we will learn how to use the stop word set.

As NLTK was installed as one of the dependencies of TextBlob, we can use it to generate a set of stop words.

We can use Python’s interactive console to generate a set of stop words and print them to the screen, then copy and paste this set into a new Python file.

>>> from nltk.corpus import stopwords
>>> set(stopwords.words('english'))
  • Create a document called stop_words.py.
  • In the new file add a variable called sw_list.
  • Paste the set to the variable.
  • Save and close the file.

The set below has been truncated; the one you generate will be a lot longer. This set can be edited if need be by adding or removing words.

sw_list = {'ll', 'haven', "that'll", 'against', 'himself', 'his', 'all', 'very'}  # This set is truncated

By saving the set of stop words into a new Python file, our bot will execute a lot faster than if the application requested the stop word list from NLTK every time we process user input.
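As a quick illustration of how the stop word set will be used later, here is a sketch with a tiny made-up subset standing in for the full sw_list:

```python
# A tiny illustrative subset of stop words; the real sw_list generated
# from NLTK contains well over a hundred entries.
sw_list = {"do", "you", "have", "on", "the"}

# Tokenize a sample user question and drop the stop words.
tokens = "do you have information on the sdk".split()
filtered = [w for w in tokens if w not in sw_list]

print(filtered)  # ['information', 'sdk']
```

Only the meaningful keywords survive, which is what we will later intersect against the conversation corpus.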

Sentiment analysis

We will write our chatbot application as a module, as it can be isolated and tested prior to integrating with Flask.

We are going to call our module simplebot.py; this will be the core application. It will take one argument, user_input, as a string.

The application will do a sentiment analysis of the unedited string. It will also normalize the string and turn its keywords into a set that will be intersected with the dialog corpus to find matching responses.

Module setup

Let's create a file named simplebot.py in the twily folder and start by importing the libraries.

  • twily_classifier is our TextBlob trained sentiment classifier.
  • stop_words is the list of words to exclude from our input string.
  • json to open the conversation corpus generated with ChatterBot.
""" Simple rule based chatbot with emotional monitoring """

from textblob import TextBlob
import twily_classifier as cl
import stop_words as stopwords
import json

Below the imports, start by opening our JSON file with the trained conversation corpus, using the built-in open() function to access the file and load it into the array variable.

We can then access the "conversations" key of array, which holds all the corpus data.

with open('twilybot.json', 'r') as f:
    array = json.load(f)

CONVERSATION = array["conversations"]

Let's go ahead and set up a few other constants and variables.

  • BOT_NAME constant (string type) holds the name of our bot.
  • STOP_WORDS constant (set type) holds a list of all the stop words. This was the set we generated using NLTK.
  • neg_distribution variable (list type) holds the negative sentiment floating point values. This will be used to monitor the user's sentiment index. Every time there is user input, the negative probability value is appended to it.
BOT_NAME = 'Twily'
STOP_WORDS = stopwords.sw_list
neg_distribution = []

Our simplebot.py module is made up of three functions, sentiment(), simplebot() and escalation(), where escalation() is the main function while the other two are auxiliary dependencies.

The sentiment() function

This function appends the neg_distribution list with negative probability and returns the appended value.

sentiment() function flow chart
  • From twily_classifier, call the cl.trainer() function and then the .prob_classify(u_input) method, passing our input string. Assign the result to the blob_it variable.
  • From the value returned in blob_it we extract just the negative probability, rounded to two decimal places, and assign it to the npd variable.
  • With this most recent negative value, we update our neg_distribution list.
  • We also return the appended value in case it is needed for another operation.

This function will be called by our main function escalation().

def sentiment(u_input):
    """Auxiliary function: Appends 'neg_distribution'
    with negative probability also returns Negative Probability"""

    blob_it = cl.trainer().prob_classify(u_input)
    npd = round(blob_it.prob("neg"), 2)
    neg_distribution.append(npd)
    return npd

The simplebot() function

This function implements a rule-based bot. It takes the user input in the form of a string and, in sequence, pre-processes the input string: converts it to lowercase, tokenizes it, and removes stop words. It then iterates through CONVERSATION; whenever filtered_input intersects a conversation, response_set is updated. If the set is empty, the function returns a default message; otherwise it returns the longest string in the set.

simplebot() function flow chart

User input

  • Define the function, def simplebot(user):
  • Here we take the user input and turn it into a TextBlob object.
def simplebot(user_input):
    """Rule base bot, takes an argument, user input in form of a string. (truncated)"""

    user_blob = TextBlob(user_input)

Pre-processing and normalization

Once we have a TextBlob object, we can modify it using TextBlob's built-in tools.

  • We normalize our input string by turning it into an all lower case string.
    • user_blob.lower().
  • We then tokenize the textblob object into words by calling:
    • lower_input.words.
  • Finally we create a list of words not listed in STOP_WORDS from our textblob object and assign it to the filtered_input variable.
    lower_input = user_blob.lower()
    token_input = lower_input.words
    filtered_input = [w for w in token_input if w not in STOP_WORDS]

Set iterator

We are going to create an empty set to be updated with all the possible matches returned by our set intersection of user input and CONVERSATION.

  • Assign an empty set to response_set
  • Create a for loop to iterate through every list in CONVERSATION
  • Add a nested loop to iterate through every sentence in each list.
  • Turn each sentence into a list of words by using the .split() method. Assign it to the sentence_split variable
  • If the set() of filtered_input intersects sentence_split
    • Update response_set with the intersection
    response_set = set()
    for con_list in CONVERSATION:
        for sentence in con_list:
            sentence_split = sentence.split()
            if set(filtered_input).intersection(sentence_split):
                response_set.update(con_list)

Returned value based on response_set

We want to return one value from response_set, which could be empty or contain one or more values.

  • If response_set is empty we want to return a string
    • "I am sorry, I don't have an answer, ask again"
  • If the set has one value or more we will return the longest value
    • return max(response_set, key=len)
    if not response_set:
        return "I am sorry, I don't have an answer. Ask again please!"
    else:
        return max(response_set, key=len)

Although returning the longest value seems arbitrary, in our case it works: most correct answers will be the longest, but there is room for error. Here, some arbitrary errors are actually desirable, as we need them to raise the emotional index of the users.
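To see why this selection works, consider a hypothetical response_set that matched both the user's keyword and the trained answer; max() with key=len picks the answer:

```python
# A hypothetical response_set after the user's input matched the "sdk"
# conversation from the corpus.
response_set = {
    "sdk",
    "Sure, here is the sdk link: https://www.twilio.com/docs/sms/whatsapp#sdks",
}

# key=len selects the longest string in the set, which here is the
# trained answer rather than the matched keyword.
print(max(response_set, key=len))
```

In the trained corpus the answers are full sentences while the prompts are short keywords, so the longest member of the set is almost always the response we want.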

The escalation() function

This function takes an argument user_input in the form of a string and calls sentiment() to monitor the user sentiment index. If the emotional index, set by sentiment() and taken from neg_distribution, increases above a set threshold and is sustained, an automatic response/action is triggered. The function also sends user_input to simplebot() to generate a chatbot response.

escalation() function flow chart
  • live_rep represents the action triggered if the emotional conditions for escalation are met.
  • We pass the user_input to sentiment() to analyze the negative sentiment of the sentence.
  • We calculate the length of neg_distribution using the len() function and assign it to list_len.
  • We send the user_input to simplebot() to get a response.
  • Create a condition: if list_len is greater than 3
    • Take the last three items in the list
      • If the first of these three items is greater than .40 and each value is less than or equal to the next (a sustained or rising negative trend), take action by triggering live_rep
    • If not, return bot_response
  • If none of the conditions above are met, return bot_response
def escalation(user_input):
    """ Takes an argument, user_input, in form of a string ..."""

    live_rep = f"We apologize {BOT_NAME} is unable to assist you, we are getting a live representative for you, please stay with us ..."

    sentiment(user_input)
    list_len = len(neg_distribution)
    bot_response = simplebot(user_input)
    if list_len > 3:
        last_3 = neg_distribution[-3:]
        if last_3[0] > .40 and last_3[0] <= last_3[1] <= last_3[2]:
            return live_rep
        else:
            return bot_response
    else:
        return bot_response
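The trigger condition can be checked in isolation with an invented sentiment history; the values below are made up for illustration, not classifier output:

```python
# Invented negative-probability history (one entry per user message).
neg_distribution = [0.10, 0.20, 0.50, 0.55, 0.90]

last_3 = neg_distribution[-3:]  # [0.50, 0.55, 0.90]

# Escalate when negativity starts high (> .40) and is non-decreasing
# across the last three messages.
escalate = last_3[0] > .40 and last_3[0] <= last_3[1] <= last_3[2]
print(escalate)  # True
```

With a history like [0.10, 0.20, 0.50, 0.30, 0.90] the chain comparison fails at the dip to 0.30, so the bot would keep answering normally instead of escalating.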

Stand-alone testing

In order to test our script, write a stand-alone test block that runs the chatbot as follows:

  • Inside a while True loop, we try to run our program.
  • If an exception occurs, we end the program.

Here we can test our application as we write it and before transforming it into a web service with Flask.

By printing neg_distribution we are able to see the negative index on every response and see the application trigger on the last three values of the list. In our final WhatsApp application we won't display the emotional values.

if __name__ == '__main__':
    while True:
        try:
            user_input = input('You: ')
            print(escalation(user_input))
            print(neg_distribution)
        except (KeyboardInterrupt, EOFError, SystemExit):
            break

When we run simplebot.py in an interactive prompt, we should see a prompt waiting for our input. Here is an example of an actual interaction with simplebot.py:

You: do you have information on the whatsapp api
Ok sure, library making it easy to generate automated responses to a user's input. For more information please visit https://chatterbot.readthedocs.io/en/stable/
[0.0]
You: Can you show me a tutorial
I am sorry, I don't have an answer, ask again
[0.0, 0.6]
You: do you have any tutorials
Sure, here is the tutorials link: https://www.twilio.com/docs/sms/whatsapp/tutorial/send-and-receive-media-messages-twilio-api-whatsapp
[0.0, 0.6, 0.33]
You: thank you
I am sorry, I don't have an answer, ask again
[0.0, 0.6, 0.33, 0.23]
You: this is not good
I am sorry you feel that way, please ask the question again
[0.0, 0.6, 0.33, 0.23, 0.99]
You: no thanks
I am sorry, I don't have an answer, ask again
[0.0, 0.6, 0.33, 0.23, 0.99, 0.18]
You: no
I am sorry, I don't have an answer, ask again
[0.0, 0.6, 0.33, 0.23, 0.99, 0.18, 0.56]
You: get me a live rep
I am sorry you feel that way, please ask the question again
[0.0, 0.6, 0.33, 0.23, 0.99, 0.18, 0.56, 1.0]
You: no
We apologize Twily is unable to assist you, we are getting a live representative for you, please stay with us ...
[0.0, 0.6, 0.33, 0.23, 0.99, 0.18, 0.56, 1.0, 0.56]
You:

Our application is now working as intended. We can tweak it by modifying the values in our conditions as well as generating a more granular conversation corpus file.

Creating a Flask Application

Let's now create the Flask server, which will ultimately call our chat sentiment analysis application and be a liaison between Twilio Sandbox for WhatsApp and our bot application.

In our project folder, begin by creating a file called app.py. We will host our Flask app in this file, where we will perform the following tasks:

  • Import the Flask library.
  • Instantiate the app.
  • Enable DEBUG mode.
  • Write placeholders for the routes that we will use.
from flask import Flask, request

app = Flask(__name__)
app.config['DEBUG'] = True


@app.route("/")
def home():
    return "<h2>Hello world</h2> Welcome to twily!"


@app.route("/get")
def get_bot_response():
    pass


@app.route("/test")
def bot_response():
    pass


if __name__ == "__main__":
    app.run()

Simplebot as a web service

Integrating our simplebot.py module with Flask and testing it is relatively easy. If we open the app.py file we created above we can import our module and escalation() function.

In the app.py file, we can import our simplebot as sb. We can do the import right below the Flask import:

from flask import Flask, request
import simplebot as sb

app = Flask(__name__)
app.config['DEBUG'] = True

Let's update the ("/test") route. This will allow us to test the bot through our browser by getting the user_input, integrating the escalation() function, and accessing the neg_distribution list.

  • We will request the user input through request and use the .lower() method to turn the input string to all lowercase.
  • Inside an f-string, we can call the sb.escalation() function in order to get our response and update neg_distribution
@app.route("/test")
def bot_response():
    user_input = request.args.get('msg').lower()
    return f'<h2>{sb.escalation(user_input)}</h2> {sb.neg_distribution}'
  • To test the app through flask in your interactive console type
    • python app.py
  • In your browser, enter a question as a query string argument to the /test URL exposed by the Flask server.
    • http://localhost:5000/test?msg="get me some Twilio SDK"

web browser chat demo

As we can see above, we call our test route, followed by a question mark, then the appropriate parameter, msg="<your question here>". Simplebot is working with Flask as intended!

Running the chatbot with ngrok

We have our Flask app up and running. Let's make sure it also responds through ngrok.

Expose a local web server to the internet

ngrok allows you to expose a web server running on your local machine to the internet. Just tell ngrok what port your web server is listening on.

If you don't know what port your web server is listening on, it's probably port 80, the default for HTTP.

Launch ngrok, making sure Flask's and ngrok's ports match.

ngrok http 5000

When you start ngrok, it will display the public URL of your tunnel in your terminal, along with other status and metrics information about connections received.

ngrok by @inconshreveable

Tunnel Status                 online
Version                       2.0/2.0
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://92832de0.ngrok.io -> localhost:5000
Forwarding                    https://92832de0.ngrok.io -> localhost:5000

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00

Now that we are running ngrok and our Flask app, we can test by accessing the HTTP forwarding domain with another device; you could use a smartphone for this test.

Enter the forwarding URL followed by /test in a web browser, either in your computer or your smartphone. Note your forwarding URL will have a different subdomain every time you run ngrok. Example:

  • https://92832de0.ngrok.io/test?msg="<your question here>"

If all is working well, your testing device should return an answer to your question.

Configure the Twilio WhatsApp Sandbox

Twilio provides a WhatsApp sandbox where you can easily develop and test your application. Once your application is complete you can request production access for your Twilio phone number, which requires approval by WhatsApp.

Let’s connect your smartphone to the sandbox. From your Twilio Console, select Programmable SMS and then click on WhatsApp. The WhatsApp sandbox page will show you the sandbox number assigned to your account, and a join code.

To enable the WhatsApp sandbox for your smartphone send a WhatsApp message with the given code to the number assigned to your account. The code is going to begin with the word join, followed by a randomly generated two-word phrase. Shortly after you send the message you should receive a reply from Twilio indicating that your mobile number is connected to the sandbox and can start sending and receiving messages.

whatsapp sandbox configuration

Note that this step needs to be repeated for any additional phones you’d like to have connected to your sandbox.

Using the WhatsApp app

Before we can use our app from WhatsApp we need to format the text response as required by Twilio. We will leave the /test route as is for testing and add the WhatsApp formatting on the /get route.

Twilio route ("/get")

The input message from the user comes in the payload of the POST request with a key of 'Body'. It can be accessed through Flask's request object.

from flask import Flask, request
import simplebot as sb
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
app.config['DEBUG'] = True


@app.route("/")
def home():
    return "<h2>Hello world</h2> Welcome to twily!"

# --------------- Twilio route --------------------------
@app.route("/get", methods=['POST'])
def get_bot_response():
    user_input = request.values.get('Body', '').lower()
    resp = MessagingResponse()
    msg = resp.message()
    twily_response = sb.escalation(user_input)
    msg.body(twily_response)
    return str(resp)


@app.route("/test")
def bot_response():
    user_input = request.args.get('msg').lower()
    return f'<h2>{sb.escalation(user_input)}</h2> {sb.neg_distribution}'


if __name__ == "__main__":
    app.run()

Testing the Twily chatbot from WhatsApp

We have configured all the necessary elements in order to test our first chat with Twily. To recap, let's make sure the following are working and completed.

  • Our script has been tested outside of Flask.
  • Our script is working with Flask with the test route.
  • ngrok is working with Flask

Run the Flask app.py file to start the Twily bot again.

python app.py
* Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

If ngrok isn’t still running, start it again.

ngrok http 5000

Now the Twilio endpoint needs to be updated with the ngrok URL.

Once in the appropriate menu, paste the https:// URL from ngrok into the “When a message comes in” field. Remember to add the Flask route, in this case /get.

  • https://<subdomain>.ngrok.io/get
  • Make sure the request method is set to HTTP Post
  • Click the Save button

WhatsApp sandbox configuration

We are all set to start using Twily, the Twilio chatbot for sentiment analysis from WhatsApp. Start sending messages using the phone you connected to the sandbox.


This chatbot has been trained to answer some very basic Twilio API questions, as well as to detect negative user input and take appropriate action based on set rules.

Conclusion

Although this is a relatively simple application showing the potential of sentiment analysis, there is still a lot to learn from human emotions as a communication layer.

As we become more and more dependent on software automation, it is important to monitor the emotional variance in users' responses, to try to improve service and perhaps develop artificial empathy. Digital response systems can be designed to detect human emotions and respond with follow-up questions and emotional consideration.

Although simplebot is able to demonstrate the basic theory, a production system would require a more sophisticated bot, able to carry a conversation thread and branch in order to further engage the user. This level of interaction would encourage the user to become more emotionally invested in the exchange, thus allowing for better emotional monitoring and an empathetic response.

Things to try

  • Try to expand and improve the conversation.
  • Refine the classifier.
  • Implement the built-in TextBlob translator in order to detect other languages and translate input into English.
  • Expand the level of granularity in the train corpus with more detailed answers without generating false positives.
  • Future enhancement: SimpleBot would benefit greatly from being integrated with user sessions, allowing for multiple and more complex conversations between users and the bot.

This project is available for download from GitHub

Enrique Bruzual is a software developer, active Python community member and technical writer. Please feel free to contact him through the links below.