BAM

Add a Personalized ChatBot to Your App With OpenAI Assistants

On their websites or apps, many companies like LG or Honda have a chatBot. These chatBots can answer users’ questions in a fun and easy way. Implementing such a thing can seem complicated at first but, with OpenAI’s latest announcements, it has never been easier.

How OpenAI's DevDay Innovations Can Transform Your Applications?

On the sixth of November, during their first ever DevDay, OpenAI announced many interesting things such as a new model : GPT-4-turbo, the access to personalized GPTs on their website, or, what is going to interest us today, the possibility to create what is called Assistants. These Assistants are tailored chat interfaces that can be integrated into applications via an API. They are capable of:
- Executing functions within your code to seamlessly interact with your application.
- Writing and running code thanks to OpenAi’s code interpreter.
- Using provided files as context to answer the user’s questions
For instance, the AI team at BAM is developing a VS Code extension and I want a chatBot to be able to explain what this extension does and how to use it. Therefore, I created an assistant and I gave it the documentation of the extension in a PDF file. This is an extract of a chat I had with this assistant :

These assistants are very easy to set up and can be used for many things such as :

  • Providing answers to user queries, reducing the need for customer service interactions.
  • Describe the features of a new product
  • Or even just assisting users in navigating your app, whether it’s for selecting a movie, renting a car, or anything else.

In the second part of this article I will guide you through setting up your bot, discuss the performance and pricing options, and help you determine the optimal chatbot solution for your specific requirements.


Configuring Your Assistant for Optimal Performance

The beginning of Your Assistant :

To start with OpenAi’s assistants, you first need to create an openAI account that will be used for the configuration and the billing of the bot. Then go to OpenAI’s Assistants board. And finally, create a new Assistant to start filling the configuration panel (see below).

Set Instructions to Control How Your Assistant Interacts with Users

The instructions section of the configuration is important for the branding of your app. You might need the assistant to be very polite or to be fun. You might need the assistant to be descriptive or to be concise. You can specify all this by setting :

  • its role (e.g., 'software engineer')
  • its focus areas (e.g., 'technical aspects')
  • the content of its responses (e.g., 'code snippets, solutions to common integration')
  • and its communication style (e.g., 'clear, concise, and practical', 'consistently professional and helpful').

If you have a few ChatGPT good practices in mind, you can use them here: it is the same model that is used in both cases. You can find a few good tips here, most of them are quite code-oriented but you can generalize them to many use cases.

Finally, if you're unsure about this section, the GPT builder can assist. It helps you craft a customized chatbot for use within OpenAI's interface. You can then paste the content from the "Instructions" section created by the builder in your Assistant’s configuration.

The Latest Model, GPT-4-Turbo, Might Not Be the Best Solution for Your Use Case

When choosing the model you are going to use, 3 elements have to be considered :

  • The features available : For a customized chatbot, you'll want to give product knowledge to the bot. This is achievable through the Retrieval functionality, which allows you to give files to the assistant as context. This feature is currently exclusive to gpt-4-1106-preview (GPT-4 turbo, released last week) and gpt-3.5-turbo-1106. GPT-4 turbo, expected to give better results, is currently in preview mode but will be production-ready soon.
  • The Rate Limits : Each model has specific usage limits. The primary distinction between gpt-4 turbo and gpt-3.5 turbo is the size of requests you can make: larger with GPT-4, but more frequent with GPT-3.5. For a personalized bot, the ability to include extensive information in a single request is advantageous, as the bot incorporates relevant parts from the provided files into the context (refer to the “Tools” section). Detailed information on rate limits is available at OpenAI’s documentation.
  • The Price : The cost per request, dependent on both the query and response lengths, varies between models. GPT-4, offering more precise answers, is understandably pricier (about 10 times more). For context, conducting a conversation equivalent to the length of Shakespeare's 'Hamlet' via OpenAI's API would cost around 80 cents with GPT-4 and 6 cents with GPT-3.5. More pricing details can be found at https://openai.com/pricing.

To sum up, GPT-3.5 can then be used if you need short, simple informations, often, and for cheaper while GPT-4-turbo is better if you have a lot of context, want precise answers, and can afford to put more money in the bot.

Add Tools to Give Your Assistant More Skills

For your assistant to be able to do many different things, you have to configure tools.

  • The most important tool for us here is the Retrieval tool. This tool allows you to define a context for your assistant by adding various file types such as .txt, .md, .pdf, .json, and others. This is where you can add the documentation of your app, the answers to the most frequently asked questions, the list of movies available on your platform, your program for the next weeks etc.Based on my experience, the assistant processes .txt and .pdf files most effectively, while .md files are less ideal. Once added, your document is segmented and analyzed to enrich the context of your requests. Remember that adding content to your request might slightly increase the cost of each query. Moreover, from the 17th of November, to store your documents in the Assistants will cost $0.20/ GB / assistant / day.

Two other tools are available:

  • the Functions : they enable the assistant to call functions in your code in order to retrieve data for example or to create an item in your app
  • the code interpreter : it allows the assistant to write and execute code which can be useful if you want to generate a file or do maths for example.

More information about these tools can be found in the documentation.

After setting up your Assistant, you can test its capabilities in the playground.

Once you are satisfied with the results, you can add your brand new assistant to your app.

If you can’t obtain the quality of answers you want with this tool, you can find other ways to integrate AI in your products here.


Call Your Assistant From Your Typescript App

To make your assistant available in your app, you first need to add the openai library to your dependencies. (I used typescript in my example but everything is also available in python, it is described in details here)

Then, you need to setup your openAI API key in your environment variables or in the function variables and create an openAI instance. This instance will help you communicate easily with the OpenAI api.

Once this is ready, you can start a discussion with your assistant. To do so, you need to create a thread. A thread represents your conversation, this is what enables the assistant to keep the history of your exchanges. All the messages are stored in it. To create one, use :

Once your thread is created, you can add a message to it and then launch what is called a run to indicate to openai that you want to retrieve an answer from your assistant

Currently, live visualization of the assistant's response generation isn't possible. To know when your answer has been generated, you need to check periodically the status of your run until it is completed (it usually takes between 5 and 20 seconds)

Once the run is completed you can retrieve your thread.

All in all, once your Thread is created, you can use this hook to handle your conversation :

After retrieving the answer, you can add another message, run the thread again, get an answer, add another message…. over and over and over… et voilà!  You have had a nice chat with your assistant. With only a nice design left to implement, you have created a personalized chatbot in a record-breaking amount of time.


Next Steps

In summary, OpenAI's new Assistants simplifies the process of integrating a sophisticated chatbot into your application.

Using these assistants to answer your users’ questions is already a significant step forward. However, I think that OpenAI’s assistants can do much more. Looking ahead, I plan to take a look at how the Functions feature can further elevate your app's performance. This will involve exploring ways to perform in-app actions based on user requests such as adjusting settings, scheduling meetings, or assisting in travel and accommodation arrangements. Stay tuned for more insights on making use of the full potential of AI to enhance user experience and app functionality.

Développeur mobile ?

Rejoins nos équipes