GPT's Assistant
OpenAI's new tool MASSIVELY simplifies and speeds up the creation of the next generation of powerful bots
Background
In a previous post, I introduced Jo-bot, a GPT-driven customer support bot (see here). It took about 3-4 weeks to build, and involved fun and exotic techie concepts like - don’t worry if these terms sound like ancient Greek - vector database storage, chunking, RAG (retrieval-augmented generation), temperature settings, prompt chaining, prompt routing, LangChain…
I just built the same functionality using GPT’s Assistant tool, in about 5 minutes. Literally. Except that it now performs MUCH better. Essentially, OpenAI has massively simplified the complexity of building a bot. They are making a powerful statement: “we’ll take care of all the technical, under-the-hood stuff; you just focus on applications and use cases, and we’ll make that as simple as possible for you”.
Setting up the Assistant is a breeze
In the new world of Assistants, you simply specify a few settings, and you’re good to go (see screenshot below):
Overall instructions for how the bot should behave - its mission statement
“Retrieval” - this is where I loaded my fact-base, which the bot should draw from. I used the set of FAQs from Tesco Online as an example
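For the technically curious, the same setup can also be done in one API call rather than through the UI. A hedged sketch of the request body for OpenAI's `POST /v1/assistants` endpoint - the instructions text, model name, and file ID below are illustrative assumptions, not the exact values I used:

```json
{
  "name": "Jo-bot",
  "instructions": "You are a friendly customer support agent for Tesco Online. Answer only from the attached FAQ document.",
  "model": "gpt-4-1106-preview",
  "tools": [
    { "type": "retrieval" },
    { "type": "code_interpreter" }
  ],
  "file_ids": ["file-abc123"]
}
```

The `file_ids` entry points at the FAQ document, uploaded beforehand via the files endpoint with `purpose` set to `assistants`.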
The results
If the previous version of Jo-bot scored a 5 out of 10 - good for a Proof of Concept - the new Assistant, which took just a fraction of the time to build, scores about an 8 out of 10. It performs better in the following ways:
Sounds more human and natural, and less computer-bot-like.
Draws on the sources of information more thoroughly and accurately. The previous version of Jo-bot was a bit hit-and-miss.
It seamlessly blends in the ability to do calculations, using Code Interpreter.
As an example, I asked it to estimate the total cost for the purchase of 3 items at 5 GBP each. It nailed the answer. It was able to calculate the total basket cost, and draw on a range of sources to add on the various costs: the 5 GBP charge for being under the 50 GBP basket minimum, the delivery cost range, and the Whoosh option. The accuracy and completeness of the answer is impressive.
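The arithmetic the Assistant pieced together can be sketched in a few lines. The 5 GBP small-basket charge and 50 GBP minimum are the figures from the Tesco FAQs mentioned above; the delivery fee used below is an illustrative assumption, since the real fee varies by slot:

```python
# Charges drawn from the Tesco Online FAQs described in the post.
MIN_BASKET_GBP = 50.0
SMALL_BASKET_CHARGE_GBP = 5.0

def basket_estimate(quantity: int, unit_price: float, delivery_fee: float) -> float:
    """Estimate the total cost of an online grocery order."""
    subtotal = quantity * unit_price
    # Orders under the minimum basket value attract a flat surcharge.
    surcharge = SMALL_BASKET_CHARGE_GBP if subtotal < MIN_BASKET_GBP else 0.0
    return subtotal + surcharge + delivery_fee

# 3 items at 5 GBP each: 15 GBP subtotal + 5 GBP small-basket charge + delivery.
total = basket_estimate(3, 5.0, delivery_fee=4.50)  # assumed mid-range delivery fee
print(f"Estimated total: {total:.2f} GBP")
```

What makes the Assistant impressive is that it assembled exactly this logic on its own, pulling each charge from a different part of the FAQ document.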
Note: This is a simple and quick example. More complete testing may throw up the age-old problem of hallucination, where GPT makes up stuff. TBD on this front.
Final thoughts
OpenAI are truly impressive as a tech company. They are setting the standards for everyone else to follow. They move quickly, and every new release changes the rules of the game.
OpenAI are increasingly providing the tools for developers to make the most of their models. This seems to make intermediate framework players like LangChain much less relevant (maybe obsolete?).
It’s an exciting time for anyone involved in creatively harnessing the power of these new models. Even those who are not hard-core techies, but like to dabble (like me!).
What will YOU build?