Quickly Connect LLMs to Chatbots
Key Points
- Connecting a large language model to a chatbot can be done in under 10 minutes and requires no coding experience, making it accessible to non‑developers.
- A rules‑based chatbot follows a fixed set of scripted answers, whereas a generative AI chatbot leverages LLMs trained on massive data to generate natural, on‑the‑fly responses to unforeseen questions.
- The demo shows how to enhance a simple virtual assistant by using IBM’s AI Toolkit repository for pre‑built integrations, enabling it to answer complex queries like animal‑shelter hours, pet‑ownership rules, and nutrition recommendations.
- The integration is performed with watsonx Assistant (IBM's conversational platform) and the Granite LLM on watsonx.ai, using an IBM Cloud API key and project ID to link the assistant to the model.
- Secure handling of API keys is emphasized, with the workflow involving retrieving the key, storing it safely, and following a five‑step process to complete the assistant‑LLM connection.
**Source:** [https://www.youtube.com/watch?v=c7ZAceXakIE](https://www.youtube.com/watch?v=c7ZAceXakIE)
**Duration:** 00:05:37
Sections
- [00:00:00](https://www.youtube.com/watch?v=c7ZAceXakIE&t=0s) **Quick Integration of LLM Chatbots** - This walkthrough demonstrates how to upgrade a simple, rules-based virtual assistant to a generative AI chatbot in under ten minutes, using pre-built integrations from an AI toolkit and requiring no coding experience.
Full Transcript
Everyone wants to use large language models and chatbots to get the most out of generative AI and answer even more questions with their solution. But did you know connecting the two is actually easier and faster than you'd think? I can do it in under 10 minutes. You might think using large language models is an extremely complicated task, but once you get hands-on with the technology, you're going to realize how accessible it is for everyone, even non-coders. There's no coding experience required, and I'm going to walk you through the setup process.
I'm going to break it down into just five steps. Let's go to our virtual assistant and get started. Here I've created a simple virtual assistant; it can answer very simple questions that I've manually added. But what if we want it to do more? We're going to utilize the AI toolkit GitHub repository for some pre-built integrations. For example, right now it answers questions like "What time does the animal shelter open?" But if I ask it a more complicated question, like "How common are Swedish Vallhunds?", it doesn't quite hit the mark. But not for long: after we've finished, it will be able to answer this question, in addition to questions like "Am I allowed to have a pet possum?" or "What is the recommended amount of dog food for a 30 lb dog?"
Before we move on, let's talk about the difference between a rules-based chatbot and a generative AI-based chatbot. Think of a rules-based chatbot as a very structured, limited dialog flow: it has a set of questions that it's prepared to answer based on a user's input, but that's it, just that one set of questions. It can't formulate any answers other than exactly what has been provided. A generative AI-based assistant, on the other hand, utilizes large language models to create an answer to the user's question. It's been trained on massive amounts of data, so the model is able to use all of this training to formulate a humanlike response.
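The contrast above can be sketched in a few lines of Python. This is a toy illustration, not IBM's implementation; `call_llm` is a hypothetical placeholder for a real model call:

```python
# A rules-based bot answers only from a fixed script, while a generative
# bot hands unscripted questions to an LLM.

RULES = {
    "what time does the animal shelter open": "We open at 9 AM, Monday through Saturday.",
}

def call_llm(question: str) -> str:
    # Hypothetical placeholder for a real LLM call
    # (e.g. a watsonx.ai text-generation request).
    return f"[LLM-generated answer to: {question}]"

def rules_based_answer(question: str) -> str:
    # A rules-based bot can only return exactly what was scripted.
    return RULES.get(question.lower().strip(" ?"), "Sorry, I don't understand.")

def generative_answer(question: str) -> str:
    # A generative assistant falls back to the model for unscripted questions.
    scripted = RULES.get(question.lower().strip(" ?"))
    return scripted if scripted else call_llm(question)

print(rules_based_answer("How common are Swedish Vallhunds?"))  # scripted bot misses
print(generative_answer("How common are Swedish Vallhunds?"))   # the LLM steps in
```

The fallback pattern is the whole point of the integration: scripted answers keep working, and everything else is routed to the model.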
So what we're going to do today is basically hook up your chatbot interface to a large language model, allowing you to utilize AI to create natural-language answers to questions you had not anticipated. For the purpose of this demonstration, I'll be using watsonx Assistant, IBM's conversational intelligence platform, and watsonx.ai, which is part of IBM's data and AI platform, watsonx. I'm going to assume you've already chosen your own platform and model; for this one, my model of choice is going to be Granite, but the basic principles will work with most AI platforms, models, and assistants.

First, we're going to grab our
IBM Cloud API key. It's important to always store your API keys in a safe place so that you can reuse them later, or, as I do, create a new API key every single time because I forget where mine is stored. Now you can head to your data and AI platform, in my case watsonx.ai. Go to the Manage tab, then copy the project ID, which is what we use to connect our assistant and large language
model.

Time to integrate our large language model with our virtual assistant. Since I'm using watsonx, I can go over to the Assistant toolkit. We're going to download the OpenAPI spec and the sample action. Now let's add these to our assistant. From within the assistant, click Integrations. You can think of an integration as a simple way to connect your assistant to some other service or data set. Then just follow the prompts to upload our OpenAPI spec, which contains all of the URLs and data structures that we need to connect to our large language model. Okay, so once that's completed,
you'll want to go to your Extensions page and add your new extension. Follow the steps and enter your IBM Cloud API key to complete the process. Bear with me, we're almost done. Next, let's upload the sample dialog provided in the Assistant toolkit. Here's where you're going to drag and drop your sample actions file and then click Upload. Wait for it to complete, and then click Close. Now we're almost done, I really mean it this
time. Now we've got to set up our variables. You're going to navigate to Variables, then Created by you, and paste your project ID from watsonx.ai into the project ID variable. We just have one more step: let's make sure our extension is configured and the integration actually works. I don't know about y'all, but I usually need to do a lot of testing, because I may or may not rush, and I usually skip a few steps along the way and end up causing a lot of errors.
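The project ID variable is what ties each request to your watsonx project. As a hedged sketch of what the extension does under the hood, here is roughly how a client would trade the API key for a bearer token and build a generation request. The endpoint URLs, model ID, and parameter names are assumptions based on IBM Cloud's public APIs (they may differ by region and release), not the toolkit's exact implementation:

```python
import json
from urllib.parse import urlencode

# Assumed endpoints; check the IBM Cloud and watsonx.ai API docs for your region.
IAM_URL = "https://iam.cloud.ibm.com/identity/token"
GEN_URL = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"

def iam_token_request(api_key: str) -> tuple[str, str]:
    """Build the form-encoded body that trades an IBM Cloud API key for a bearer token."""
    body = urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    })
    return IAM_URL, body

def generation_payload(question: str, project_id: str) -> str:
    """Build the JSON body sent to the text-generation endpoint for each user question."""
    return json.dumps({
        "model_id": "ibm/granite-13b-chat-v2",  # assumed Granite model ID
        "input": question,
        "project_id": project_id,               # the variable pasted from watsonx.ai
        "parameters": {"max_new_tokens": 200},
    })

url, body = iam_token_request("MY_API_KEY")
payload = generation_payload("How common are Swedish Vallhunds?", "MY_PROJECT_ID")
print(body)
print(payload)
```

The extension handles the token exchange for you once the API key is entered; seeing the shape of the request makes it easier to debug a misconfigured project ID or parameter.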
Anyways, you're going to open the Invoke watsonx Generation API action and click Edit extension. Select the watsonx extension and then just fill in the parameters. Voila! You've now connected your bot to a large language model in less than 10 minutes. Your assistant is going to be able to receive and reply in natural language. While this is incredibly powerful, making sure your LLM-enabled assistant provides accurate and appropriate responses is a whole other story, so check out our videos on AI governance or RAG to dig deeper into that. Thanks for watching, and don't forget to like and subscribe for more content.