What Is GPT-3: How It Works and Why You Should Care

November 24, 2020
Reviewed by Matt Makai


No, robots aren't taking over the world (not yet anyway). However, thanks to Generative Pre-trained Transformer 3 (GPT-3), they are well on their way to writing digital text as well as humans—and even better, in some cases.

Human-equivalent writing sounds like a solid step on the path to a Terminator-like future... but cynicism aside, GPT-3 is raising the ceiling for what AI and machine learning can accomplish.

What is GPT-3?

GPT-3, a new language model from the whizzes over at OpenAI, generates AI-written text that has the potential to be practically indistinguishable from human-written sentences, paragraphs, articles, short stories, dialogue, lyrics, and more.

OpenAI trained GPT-3 on a massive corpus of text, and with more than 175 billion (yes, with a "b") parameters, it's the largest language model ever built.

Lost? In non-technical terms, GPT-3 was shown how millions of people write and taught to pick up on writing patterns. Feed it some input, and the model generates intelligent text that follows the submitted pattern and structure.

For example, Drew Baumann (CTO of Repeat) got the following decadent recipe by feeding the model a prompt and some input:

AI-Generated Recipe

I love a good banana shake. I must have tried a thousand of them over the years. So, when I wanted to add one to the menu of the cafe, I was pretty picky. I wanted something that tasted good but wasn’t overly sweet. I wanted the banana and vanilla flavors to really shine through.

Since I was using fresh bananas, I wanted them to be the ripest, sweetest ones I could find. I use a banana or two (depending on size), one cup of vanilla almond milk, 3/4 cup of ice, 3/4 cup of milk, 1/4 cup of nonfat plain greek yogurt, 2 tbl of almond milk creamer, 1 tbl of cocoa powder, 1/2 tsp of cinnamon, 1/4 tsp of nutmeg, and 1/8 tsp of ground cloves. For the Oreo shake, I add 1/2 cup of nonfat plain greek yogurt. That’s it! So simple and so good.

It’s a simple recipe, but that’s the best part. Fresh ingredients make a huge difference. The bananas, vanilla, and cocoa are such a great combo.


  • 1 frozen banana
  • 1/2 cup cold coffee (or 1/2 cup cold water + 1/2 teaspoon instant coffee)
  • 1/2 cup vanilla ice cream
  • 1 teaspoon cocoa powder
  • 1/4 cup crushed Oreos
  • 1/4 cup blueberries
  • 1/4 cup ice


  1. Place all ingredients in a blender and blend until smooth.

Check out Soulless Food for more of Drew's AI-generated recipes.

It seems like GPT-3 can cook, sing, answer, translate, summarize, program, inform, and even write an article, much like I'm doing right now. That raises the question: did artificial intelligence write this article, or a human? If you're asking questions like these, you're already starting to fathom the potential of GPT-3.


How does GPT-3 work?

GPT-3 is a language model, which is a statistical program that predicts the probable sequence of words. Trained on a massive dataset (from sources like Common Crawl, Wikipedia, and more), GPT-3 has seen millions of conversations and can calculate which word (or even character) should come next in relation to the words around it.

When you type in an initial set of words, like "go to the store to buy...", GPT-3 will start predicting what would naturally come next based on its training. Probably something like:

  • Eggs
  • Milk
  • Bread
  • Fruit
  • Vegetables
  • Snacks
  • Drinks
  • Etc.

Multiply that complexity across thousands of different potential scenarios and tasks, and you have GPT-3.
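That next-word prediction idea can be sketched in a few lines of JavaScript. This is a toy bigram model built from raw word counts, purely for illustration; GPT-3 performs the same prediction task with a 175-billion-parameter transformer, not counts:

```javascript
// Toy "language model": count which word follows which in a tiny corpus,
// then predict the most frequent follower.
const corpus = 'go to the store to buy milk . go to the store to buy eggs .';

const counts = {};
const words = corpus.split(' ');
for (let i = 0; i < words.length - 1; i++) {
  const cur = words[i];
  const next = words[i + 1];
  counts[cur] = counts[cur] || {};
  counts[cur][next] = (counts[cur][next] || 0) + 1;
}

// Predict the most likely word to follow `word`.
function predictNext(word) {
  const followers = counts[word] || {};
  return Object.keys(followers).sort((a, b) => followers[b] - followers[a])[0];
}

console.log(predictNext('the')); // → "store"
console.log(predictNext('buy')); // "milk" or "eggs" (tied counts)
```

Counting pairs of words falls apart quickly as context grows; a neural model like GPT-3 instead learns a compressed representation of the patterns, which is what lets it generalize far beyond its training text.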

What makes GPT-3 special is its ability to respond intelligently to minimal input. It's been extensively trained on billions of parameters, and now it only needs a handful of prompts or examples to perform the specific task you desire—this is known as "few-shot learning."

For example, because GPT-3 has analyzed thousands of poems and poets, you can simply input the name of a poet, and GPT-3 can create an original poem similar to that author's style. GPT-3 replicates the texture, rhythm, genre, cadence, vocabulary, and style of the poet's previous works to generate a brand-new poem.
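In code terms, few-shot prompting is just string construction: you prepend a handful of worked examples and let the model continue the pattern. A minimal sketch (the helper name and example poems are illustrative, not from any real dataset or API):

```javascript
// Build a few-shot prompt: a few input/output examples, then the new query.
// The model is expected to continue the pattern after the final "Poem:".
function buildFewShotPrompt(examples, query) {
  const shots = examples
    .map(({ input, output }) => `Poet: ${input}\nPoem: ${output}`)
    .join('\n\n');
  return `${shots}\n\nPoet: ${query}\nPoem:`;
}

const prompt = buildFewShotPrompt(
  [
    { input: 'Emily Dickinson', output: 'Hope is the thing with feathers...' },
    { input: 'Robert Frost', output: 'Two roads diverged in a yellow wood...' },
  ],
  'Walt Whitman'
);
console.log(prompt);
```

The key point is that nothing is retrained: the "learning" happens entirely inside one completion request, conditioned on the examples in the prompt.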

GPT-3 works as a cloud-based LMaaS (language-model-as-a-service) offering rather than a download. By making GPT-3 an API, OpenAI can more safely control access and roll back functionality if bad actors manipulate the technology.

GPT-3 use cases

GPT-3 has enormous potential for real-world applications. Developers and businesses are just beginning to dabble with the possible use cases, and what they've already found is exciting. Here are a few ways GPT-3 is revolutionizing communications.

Search and Answers

Whether you're looking for an answer to a question or more relevant search results, GPT-3 can help. Rather than relying on keyword matching alone, GPT-3's extensive knowledge can be used to answer complex natural-language questions quickly and accurately.

For example, Twilio Developer Miguel Grinberg shows how you can build a bot to provide definitions to anything you input in his Ultimate Guide to OpenAI's GPT-3 Language Model.


Chatbots

Empower your chatbots to communicate efficiently and intelligently rather than chatting like, well, a bot. Whether you need to provide answers, recommendations, or advice, GPT-3-powered AI agents can have more effective conversations with your customers.

For example, Twilio Developer Diane Phan built a WhatsApp Michelin star chef to answer all your cooking questions. The "chef" has been mentored by Gordon Ramsay and worked in a Michelin star restaurant, meaning it's been trained on data examples of how Gordon Ramsay thinks and text on Michelin star restaurant operating know-how.

Miguel Grinberg built a chatbot that has helpful, human-like conversations with users. Check out how Miguel's bot helps a customer learn how to write a web application and which language they should use.

Content Generation

Whether you need creative writing, educational content, adventure-based games, product pages, or lyrics to your next punk song, GPT-3 can help make it all happen. While it's not an API you should unleash to produce content at will, after some basic training it does a decent job of creating original pieces. Still, the output always needs a thorough edit to fact-check and clean out the more disparate ideas it can spit out.

See how this GPT-3 powered program creates buzzworthy captions for all your Instagram posts, or learn how this ghostwriting app tells original, scary Halloween stories.

Productivity Boosters

GPT-3 can be used to enhance your work and fine-tune everything from your emails to your code.

For example, Gmail can auto-finish your sentences and suggest responses. GPT-3 can also summarize longer articles or provide feedback on something you've written.
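Summarization with a completion model is often just a prompting trick: append "tl;dr:" to the text and let the model "complete" it with a summary. A sketch (the helper name and truncation limit are ours, not part of any API):

```javascript
// Sketch: build a summarization prompt by appending "tl;dr:".
// Long articles are truncated because completion models have a
// limited context window (the exact limit here is illustrative).
function summarizationPrompt(article, maxChars = 4000) {
  const body = article.length > maxChars ? article.slice(0, maxChars) : article;
  return `${body}\n\ntl;dr:`;
}

const prompt = summarizationPrompt(
  'GPT-3 is a 175-billion-parameter language model from OpenAI...'
);
console.log(prompt.endsWith('tl;dr:')); // → true
```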

After fine-tuning from thousands of open source GitHub repositories, OpenAI’s API can even complete code and provide context-aware suggestions.


Translation

GPT-3's API can be used to translate conversations or even chat with users in their preferred language. This empowers businesses to build more sophisticated chatbots that interact with a variety of customers, as well as translate content for other markets.

While you might not want to rely on GPT-3 as your sole translator, it could be used as a great backup checker when verifying translations.
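A "backup checker" flow might look like the following sketch: build a translation prompt for the model, then flag model and human translations that diverge for review. Every name here is illustrative, and the comparison is deliberately naive:

```javascript
// Sketch: a translation prompt plus a naive divergence check.
function translationPrompt(text, targetLanguage) {
  return `Translate the following English text to ${targetLanguage}:\n\n${text}\n\nTranslation:`;
}

// Flag a translation pair for human review when they differ after
// stripping case and punctuation. Real checks would be fuzzier
// (edit distance, embeddings, etc.).
function needsReview(modelTranslation, humanTranslation) {
  const normalize = (s) => s.toLowerCase().replace(/[^\p{L}\s]/gu, '').trim();
  return normalize(modelTranslation) !== normalize(humanTranslation);
}

console.log(needsReview('Bonjour le monde', 'bonjour, le monde!')); // → false
```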


GPT-3 vs. BERT

Google developed its own natural language processing (NLP) solution known as Bidirectional Encoder Representations from Transformers (BERT). Google uses BERT to understand the context behind user searches instead of just matching keywords in search queries.

Here are the key nuances:

  • GPT-3 has 175 billion parameters, while BERT has 340 million parameters
  • BERT requires elaborate fine-tuning, while GPT-3 uses few-shot learning to quickly predict output results with minimal input
  • GPT-3 isn't publicly available (you need to be accepted to OpenAI's waitlist), whereas BERT is a publicly accessible open-sourced model

With fine-tuning, BERT can carry out tasks extremely well, but it's just not as out-of-the-box an NLP solution as GPT-3.

Downsides to GPT-3

GPT-3 is still in its infancy, so it's far from perfect. Yes, it delivers robust solutions, but it still has room to grow. Sam Altman, a cofounder of OpenAI, summed it up nicely on Twitter.

As of now, here are the "serious weaknesses" of GPT-3.


GPT-3 may seem like the perfect AI-communications solution, but it's not without its imperfections. There are a few downsides to this powerful machine learning technology:

  • Lack of true intelligence: GPT-3 is a deep learning model that uses machine learning algorithms, but it isn't "intelligent." The AI only uses existing text to predict future results; it's not coming up with anything truly original, as it lacks real understanding and meaning (unlike something like Artificial General Intelligence (AGI)).
  • Privacy risk: It's unclear whether GPT-3 retains any portion of the training data, making it a potential privacy issue.
  • Bias: GPT-3 can be fooled into creating incorrect, racist, sexist, and biased content that's devoid of common sense and real-world sensibility. The model’s output is dependent on its input: garbage in, garbage out.

What GPT-3 thinks of itself

Beyond generating new Taylor Swift songs and interesting conspiracy theories, I thought I’d get a little bit more practical with applying GPT-3, so I let it contribute to this article, too.

This is how GPT-3 responded with the input: “Risks of OpenAI's GPT-3.”

There are many risks that come with the creation of GPT-3. The first and foremost of which is the risk of AI being used to create an even more advanced AI.

This is not a new risk—it has always been a risk. In fact, it is the risk that gave birth to AI in the first place.

The fear that the primitive AI of the 60s, 70s, and 80s would take over the world is what led to the creation of the Asilomar AI principles, the birth of OpenAI, and all the other initiatives that exist today to stop AI from going the way of the Terminator.

Now, instead of worrying about primitive artificial intelligence, we must worry about advanced artificial intelligence.

The risk of an AI taking over the world is still there, but it has evolved.

The other worries that come with the creation of GPT-3 are the same worries that come with all AI research. The worry that it will be used for harm by malicious agents.

These worries are valid, and they should be discussed. But there is one more worry that is unique to GPT-3, and that is the worry that an AI will be created that is so advanced that it can predict the future.

...Woah. That’s quite a dystopian possibility.


Integrating OpenAI with IVRs

Originally, IVRs were automated menus that could queue customers and direct them to the correct agents. They were very limited in functionality, required a lot of manual effort to build out flows, and demanded specialized knowledge and excessive time (and money) to complete changes.

Over time, these systems became more flexible, allowing for easy automation and integrations with advanced features like text-to-speech functionality—no longer were IVRs limited to interacting with DTMF tones. This allowed for dynamic interactions between automated systems and customers, providing a better experience.

Modern IVRs, such as those you can build on Twilio, also allow gathering input from a user and submitting the transcribed speech to an endpoint. As you can imagine, this allows for interesting integrations with AI systems.

Using TwiML (the Twilio Markup Language) to define steps, you can build an IVR application where every user interaction is fed into OpenAI, and the response dynamically drives the conversation forward. For example, consider a flow where a customer calls a salon to schedule an appointment. Currently, this requires a human rep to answer the phone, gather information, enter that information into an application, and make a series of decisions around scheduling.

Augmenting human interaction with a scheduling AI and IVR

Instead, we could use TwiML with a static starting point to change this approach—a prompt of “You have reached the Owl Salon, how can we help you?” would be a great kick-off. The user’s response could then be fed to OpenAI, and an actual conversation could take place.
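That kick-off can be expressed directly in TwiML: greet the caller, gather speech, and post the transcription to our webhook. A sketch of the opening response (the /completed and /timeout paths follow the endpoints used in the sample app, but the exact attributes here are an assumption for illustration):

```xml
<Response>
  <Gather input="speech" language="en-CA" action="/completed">
    <Say voice="woman">You have reached the Owl Salon, how can we help you?</Say>
  </Gather>
  <Redirect method="POST">/timeout</Redirect>
</Response>
```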

The engine could also be trained to give higher precedence to scheduling or asking for help. Our application could also send the conversation to OpenAI and ask if the user needed to schedule an appointment, see a beautician, or otherwise summarize the conversation and invoke different workflows based on a set of known tasks.

While this isn’t representative of all the requirements necessary in a production application, it gives an idea of what’s possible with programmable voice systems when connected to an AI. So let’s look at how you might build it.

Building the app

Twilio provides a great Programmable Voice Quickstart for Node.js, but—as in the previous example—we won’t need to reproduce the entire process to see how this might work in action.

If you want to follow along, you’ll need a phone number for use with Voice so our sample application can respond to a phone call. Other than that, we’ll write our own code to create the flow.

We’ll utilize Localtunnel to expose our application externally so that the Twilio webhooks will function (but ngrok or port forwarding would work just as well for development).

Here’s an example of the Node.js code needed to send a conversation to OpenAI and parse the response (the full source is available here). First, we create a little wrapper around the OpenAI library to make the querying easier (found in openai.js):

const config = require('./config');
const OpenAI = require('openai-api');

const openai = new OpenAI(config.openai.apiKey);

module.exports = {
  query: async function (question) {
    console.log(`Received a message (${question}), sending to OpenAI.`);
    const prompt = `The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: ${question}\nAI: `;
    console.log(`OpenAI prompt: ${prompt}`);
    const gptResponse = await openai.complete({
      engine: 'davinci',
      prompt: prompt,
      maxTokens: 250,
      temperature: 0.9,
      topP: 1,
      presencePenalty: 0.6,
      frequencyPenalty: 0,
      bestOf: 1,
      n: 1,
      stream: false,
      stop: ['\n', 'Human:', 'AI:']
    });
    return gptResponse.data.choices[0].text;
  }
};

We then import and consume the query function and use it in our application to interact with OpenAI as part of the IVR flow:

app.post('/completed', async (req, res) => {
  console.log(`Voice input from user: ${req.body.SpeechResult}`);
  const openApiResult = await openai.query(req.body.SpeechResult);
  const response = new VoiceResponse();
  response.say({
    voice: 'woman',
    language: 'en-CA'
  }, `The AI response was: ${openApiResult}`);
  response.hangup();
  const responseText = response.toString();
  console.log(`TwiML AI response: ${responseText}`);
  res.type('text/xml');
  res.send(responseText);
});
In this code, you can see that we’re collecting the user’s information (i.e., the question they ask) and then passing it along to OpenAI. When we plug all this together and call the Twilio Voice number and say, “My name is Michael. What is yours?” we can see our application triggering actions:


$ node server.js
TwiML OpenAI app listening at http://localhost:13337
TwiML initial response: <?xml version="1.0" encoding="UTF-8"?><Response><Gather language="en-CA" voice="woman" input="speech" action="https://twiml-openai.loca.lt/completed"><Say>Welcome to our AI demo, ask us a question.</Say></Gather><Redirect method="POST">/timeout</Redirect></Response>
Voice input from user: Hi, this is Michael. What's your name?
Received a message (Hi, this is Michael. What's your name?), sending to OpenAI.
OpenAI prompt: The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: My name is Michael. What is yours?
AI:
TwiML AI response: <?xml version="1.0" encoding="UTF-8"?><Response><Say voice="woman" language="en-CA">The AI response was: my name is Peter. thank you for asking michael. I want to make you happy and comfortable, so we can be good friends.</Say><Hangup/></Response>

Based on our question, OpenAI responded with a few sentences ("My name is Peter. Thank you for asking michael. I want to make you happy and comfortable..."). Of course, this isn't a production-ready response. However, it does validate that we can connect our IVR directly to the AI and use it to build a more interactive system.

Since the responses we are looking for are more specialized than the sentiment analysis in the previous example, we'd need to train OpenAI with transcripts of previous customer conversations for a production-level app. We could seed it using past interactions with our live agents or public datasets (such as 26 million turns from Ubuntu technical support, 3 million customer support tweets and responses, or 1 million multi-turn eCommerce customer service conversations in Chinese). This would give it a greater understanding of the types of questions the system typically receives, and the expected answers. Eventually, we would expect OpenAI to respond with a name and a follow-up prompt of "How can I help you today?".

Moving forward with GPT-3

Excited to get your hands dirty with GPT-3? Well, unfortunately, you'll have to get in line. There's a lot of hype around the deep-learning language model right now, and you'll have to join a lengthy waitlist to get your turn at the private beta.

There are no plans for general availability anytime soon, but if you can't wait for the beta, tinker around with the prior version, GPT-2. It's not as robust as GPT-3, but it'll give you a good idea of how the language model works.

Want to explore more of what's possible with GPT-3? Take a look at all the awesome ways our developers have used GPT-3 to build helpful, Twilio-powered apps. Here’s a teaser of applications they’ve built: