Building a watsonx.ai Chat App

Key Points

  • The tutorial walks through creating a Next.js project named watsonx‑chat‑app using the CLI and sets up a basic React/TypeScript boilerplate.
  • The watsonx.ai JavaScript SDK is introduced for model inference and tool integration, including community tools from wxflows.
  • Tailwind CSS is used to style the app’s layout, adding a header, input bar, and placeholder message components.
  • The initial code replaces the default boilerplate with a simple chat UI, but the input bar is not yet connected to state or the AI backend.
  • Running npm run dev opens the app in a browser, displaying the header, sample AI/user messages, and an editable input field ready for further integration.


**Source:** [https://www.youtube.com/watch?v=txosy9PzAKg](https://www.youtube.com/watch?v=txosy9PzAKg)
**Duration:** 00:35:12

## Sections

- [00:00:00](https://www.youtube.com/watch?v=txosy9PzAKg&t=0s) **Building a watsonx.AI Chat App** - The speaker demonstrates how to bootstrap a Next.js project in VS Code, install the watsonx.ai JavaScript SDK, and launch a basic chat‑application boilerplate for AI inference and tool integration.
- [00:03:08](https://www.youtube.com/watch?v=txosy9PzAKg&t=188s) **Preparing UI and watsonx.ai Integration** - The speaker removes unused CSS, stops the current process, installs the watsonx.ai SDK with npm, and sets up a .env file containing the API key, project ID, and IAM auth type to enable the placeholder UI to connect to watsonx.ai models.
- [00:06:15](https://www.youtube.com/watch?v=txosy9PzAKg&t=375s) **Message Structure and watsonx.ai Setup** - The speaker explains the message object fields (role and optional content), how to pass an array of messages, and configures environment variables and connection parameters to use the watsonx.ai SDK with the Mistral Large model for chat and tool calls.
- [00:09:18](https://www.youtube.com/watch?v=txosy9PzAKg&t=558s) **Managing Message State in React** - The speaker explains creating a controlled input component with useState, handling onChange to update the input value, and storing messages in an array state (including type imports) for sending to a large language model.
- [00:12:23](https://www.youtube.com/watch?v=txosy9PzAKg&t=743s) **Implementing Shadow Message History** - The speaker shows how to create a temporary message‑history array, send it to the language model, handle the response by updating both the shadow array and component state, and finally connect this logic to the send button’s onClick event.
- [00:15:33](https://www.youtube.com/watch?v=txosy9PzAKg&t=933s) **Role‑Based Message Rendering in React** - The speaker explains using a .map function to return user and assistant messages with proper parentheses, conditional rendering based on the message role, dynamic content from state or the LLM, and unique keys (role + index) for each element.
- [00:18:40](https://www.youtube.com/watch?v=txosy9PzAKg&t=1120s) **Implementing Loading State for Send** - The speaker outlines how to toggle a loading flag, disable the input, clear the message field, and display a loading indicator on the send button while a request is processed.
- [00:21:52](https://www.youtube.com/watch?v=txosy9PzAKg&t=1312s) **Implementing LLM Tool-Calling Logic** - The speaker outlines configuring a tools constant, setting tool choice options, detecting tool call suggestions in LLM responses, adding code to execute the appropriate tool, and testing this flow with an arithmetic query.
- [00:25:00](https://www.youtube.com/watch?v=txosy9PzAKg&t=1500s) **Recursive Message Loop for Tool Calls** - The speaker details how the system re‑calls the message function with an expanded message history (including prior messages, the LLM’s tool request, and the tool’s response) to allow the LLM to generate a final natural‑language answer, and introduces using community tool collections such as wxflows for complex mathematical operations.
- [00:28:11](https://www.youtube.com/watch?v=txosy9PzAKg&t=1691s) **Deploying wxflows Tools and Setup** - The speaker explains how to deploy wxflows-created tools to an endpoint, configure the required environment variables (API key and endpoint), and install the wxflows SDK for integration into a Watsonx chat application.
- [00:31:26](https://www.youtube.com/watch?v=txosy9PzAKg&t=1886s) **Refactoring Tool Calls with wxflows** - The speaker replaces custom if‑else tool‑call logic with the wxflows SDK, deletes unnecessary code, reruns the app, and demonstrates the new community math tool while advising on handling occasional tool errors.
- [00:34:33](https://www.youtube.com/watch?v=txosy9PzAKg&t=2073s) **watsonx.AI SDK Quick Demo** - The speaker demonstrates querying a chat application with the watsonx.AI SDK to compute a numeric answer (17.77), showcases additional features like streaming and image‑based chat, and points viewers to the code in the video description.

## Full Transcript
0:00 Let's build an AI application with the watsonx.AI SDK for JavaScript. 0:05 Building your own AI application might sound like a challenging task, but in this video 0:09 I'm going to break it down into simple steps. 0:11 We'll be using Next.js to build a React frontend application. 0:15 We'll be using the watsonx.ai SDK for JavaScript to inference with models. 0:20 Then we'll be using the same SDK to work with tools. 0:23 And finally we'll be importing some community tools from wxflows. 0:27 So let's dive into VS Code and get started. 0:30 In VS Code I set up a new project. 0:32 And here I'm going to use the Next.js CLI to bootstrap my application. 0:38 I'm going to run npm create next-app. 0:42 I'm going to make sure I'm using the latest version. 0:44 And then I'm going to define my project name, which will be watsonx-chat-app. 0:51 It will take a few moments to set up my boilerplate application, but first I need to answer some questions. 0:56 I'm just going to go with all the defaults, but when you build your own app you might want to use different settings. 1:22 Once finished it created a new directory called watsonx-chat-app, 1:26 and in here you can find all the boilerplate for a Next.js application. 1:30 I'm going to move into this directory using my terminal. 1:35 So I'm going to cd into watsonx-chat-app. 1:40 And in here I can start the application by running npm run dev. 1:44 And in my browser I should now be able to see the boilerplate application. 1:49 The boilerplate application says get started by editing src/app/page.tsx, 1:53 which is a .tsx file, meaning that we're using TypeScript. 1:59 And in this file I can add our chat application. 2:04 I'm going to go ahead and delete all the code that's already in here because I don't need any of that boilerplate. 2:10 Instead I'll be copying in new code which will show our chat application. 2:15 First I'm going to create the canvas, which is a simple div with some settings for the composition of the page.
2:24 As you can see I'm using Tailwind. 2:25 Tailwind is a great library if you don't want to write all the CSS by hand, 2:31 and in this div I can add the header. 2:33 I can add an input bar which will be used to type in our question, 2:37 and I also will be adding some components to render messages on the screen. 2:41 So first let me add the header, 2:43 and the header code will mention the title of the application. 2:47 Then in here I can also add the boilerplate for the input bar, 2:52 and the input bar is not a controlled component yet, but we'll be hooking it up to state later on, 2:57 and then finally I'm going to add some code in here to render some placeholder messages. 3:04 Once I've added the messages, 3:05 I'm going to format my code and then save this file. 3:09 I'm going to format so it's all nicely structured. 3:12 And then it's saved and we can find it in our browser. 3:16 And in the browser you can see that we have a simple header. 3:20 We have a few placeholder messages like "hey, how are you today?" 3:23 And then the AI will reply with "I'm okay, what about you?" 3:26 And then we have an input bar which we can use to type our question. 3:30 You can type or press or do whatever you want. 3:32 It's not hooked up to anything yet. 3:34 So that's what we'll be doing next. 3:38 But first I'm also going to delete some code from globals.css, which is CSS code that I won't be using. 3:45 So I'm going to save this and then I can close the globals.css file. 3:49 I'm also going to kill the process that's running in my terminal because I'm going to install the watsonx.ai SDK. 3:55 I can install this from npm by running the command npm install @ibm-cloud/watsonx-ai. 4:05 And this will install the library that I need to work with models available in watsonx.ai. 4:12 After installing I first need to set up a file with my environment variables. 4:17 These environment variables are needed in order to connect to the models that you have in your watsonx.ai account.
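Since a missing environment variable usually surfaces later as a confusing auth error, it can help to fail fast at startup. The sketch below is not part of the tutorial, and the variable names (WATSONX_AI_AUTH_TYPE, WATSONX_AI_APIKEY, WATSONX_AI_PROJECT_ID) are assumptions based on the setup described here, so check which names your SDK version actually reads:

```typescript
// Sketch: fail fast if required .env values are missing.
// The variable names below are assumptions; verify them against the
// watsonx.ai SDK documentation for your version.
function requireEnv(names: string[]): Record<string, string> {
  const values: Record<string, string> = {};
  const missing: string[] = [];
  for (const name of names) {
    const value = process.env[name];
    if (value) values[name] = value;
    else missing.push(name);
  }
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return values;
}

// Example usage (the values here are placeholders, not real credentials):
process.env.WATSONX_AI_AUTH_TYPE = "iam";
process.env.WATSONX_AI_APIKEY = "my-api-key";
process.env.WATSONX_AI_PROJECT_ID = "my-project-id";

const env = requireEnv([
  "WATSONX_AI_AUTH_TYPE",
  "WATSONX_AI_APIKEY",
  "WATSONX_AI_PROJECT_ID",
]);
```

Running this check once when the server starts gives a clear error message instead of a failed model call later.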
4:23 I create a new file which I call .env. 4:26 In here I need to set a couple of environment variables. 4:31 I need to set my API key, my project ID, and I also need to set the auth type, and I'll be using IAM for this. 4:39 To get your API key and your project ID you need to open up the watsonx.ai dashboard, 4:44 and in here you can find a developer access page which has all the details that you need. 4:50 So I'm going to save this file, but before saving make sure to substitute the API key and project ID with your own details. 5:08 I'm not going to be adding the connection to watsonx.ai in my page.tsx file. 5:13 Instead I'm going to create a new file and I'm going to call this file actions.ts. 5:19 This file will have all the functions that are running server-side, because with Next.js 5:23 you can build both client-side code, which is what you see in the browser, 5:27 but also server-side code which is running in the background. 5:31 So I'm going to make sure I have a server-side file by defining "use server" at the top, 5:36 and then in here I'm going to define a function. 5:39 And this function I'm going to call message. 5:42 This is the message function that I will be using to send a message to the large language model. 5:48 In here I can add the logic to connect to watsonx.ai. 5:52 So I'm going to import a connection instance from the SDK. 5:57 So I'm going to import watsonx.ai from the library I just installed and then I can use it inside my message function. 6:04 I also need to set a type because we are using TypeScript. 6:08 So I'm generating a message type. 6:11 I'm going to export this as well because I might be using it in different places. 6:17 This will have two fields: we have role, which is a string, 6:20 and the role field is being used to define who the message is coming from.
6:25 Whether it's you, whether it's the large language model, or whether it's a system prompt, 6:29 but I'll explain more about system prompts later on. 6:31 It also has a content field, which is the actual response from the large language model, 6:36 a tool call, or maybe the message that you're asking the LLM. 6:41 So this could be content. 6:43 It is optional though, because for example tool calls don't have a content field, 6:48 and this is also a string. 6:50 In my message function I can now set the input to be messages, and messages will be of type message, 6:57 but of course it's an array because you could have multiple messages. 7:02 I'm also going to import my environment variables. 7:05 So these are the environment variables that we created for the watsonx.ai SDK. 7:10 Those should be in your .env file which we created just a little bit ago. 7:18 So let me put in some code inside of the message function. 7:22 And this code is to connect with watsonx.ai. 7:26 You need to set a service URL and this includes your region. 7:29 So for me it's us-south, for you it might be something else. 7:32 I've also set max tokens. 7:34 So this is the maximum number of tokens the LLM returns to us. 7:40 I'm setting up a text chat function which connects to watsonx.ai with 7:45 the Mistral Large model, which is a very nice model if you're 7:48 working with data sets or if you're doing tool calls, which we'll be doing later on. 7:53 In here I'm also passing all the messages. 7:56 So these are the messages that you create in your front-end application. 7:59 And then finally I'm returning the LLM response. 8:03 So this is the response coming directly from the large language model. 8:07 Let me format this code and then save it. 8:11 Make sure you export this function because we're going to need it in our page.tsx file. 8:16 At the very top I need to define that this is a client-side component.
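The message type and chat parameters described above could be sketched like this. buildChatParams is a hypothetical helper, and the exact parameter field names and the model id string ("mistralai/mistral-large") are assumptions that should be verified against the @ibm-cloud/watsonx-ai documentation and your watsonx.ai model catalog:

```typescript
// The Message shape from the transcript: role says who the message is from
// ("system", "user", or "assistant"), and content is optional because
// tool-call messages may not carry any text.
export type Message = {
  role: string;
  content?: string;
};

// Sketch of the parameters handed to the SDK's chat call. The field names
// (modelId, projectId, maxTokens) and the model id string are assumptions;
// check them against the @ibm-cloud/watsonx-ai docs.
function buildChatParams(messages: Message[], projectId: string) {
  return {
    modelId: "mistralai/mistral-large", // model used in the tutorial
    projectId,                          // from your developer access page
    maxTokens: 200,                     // cap on tokens the LLM returns
    messages,
  };
}

const params = buildChatParams(
  [{ role: "user", content: "Hello" }],
  "my-project-id"
);
```

Keeping the parameter object in one small builder makes it easy to raise maxTokens later, which the transcript does when responses get truncated.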
8:20 So I need to define "use client" before I actually start to import all the different functions from my actions.ts file. 8:30 Before doing so I'm going to set up a state variable to keep track of our input message. 8:36 So this is the message that you type, 8:38 whether it's your question or maybe something else you want to ask the LLM. 8:42 So I'm going to import useState from React, which is the application framework that we're using. 8:50 And then I can start creating my local state variables. 8:52 I can create a local state which I call inputMessage. 8:57 So the first value is the actual value and the second value is the function to update the state. 9:03 And I'm going to call this setInputMessage. 9:07 By default the input message will be an empty string, 9:11 and I can use the state variable to turn my input bar, which is at the very bottom 9:16 of your application, into a controlled component. 9:19 Meaning that whenever we type something in there it's going to update the state. 9:23 And we can use this state for example to send a message to the large language model. 9:28 So I'm going to scroll down to my input field, 9:31 and in here I'm going to set the value to be inputMessage, which is a state variable. 9:38 And then I'm also going to set an onChange function which is taking the event, 9:46 which is the value inside the message box, 9:49 and it's going to use that value to update the input message. 9:54 And we're going to take the target value. 9:57 So I'm going to save this. 9:58 And now I should have a controlled component for the input box, 10:01 but of course I'm going to need more because I want to store this message into a message state. 10:08 So I'm going to create a second state variable which I'll be calling messages, 10:14 and then of course I need a function to update these messages. 10:18 Again I'm using the useState hook to create the state variables. 10:24 And now the value would be an array, because messages are a list of messages and not a single one.
10:30 I can also import the type definition for this, which I already created in my actions.ts file. 10:36 And I should also import the message function, as we're going to use it to send a message to the large language model. 10:42 So I'm going to import both the message function as well as the type, which we call message. 10:49 So message will be the type of this state variable as well. 10:53 I make sure to set it as an array of messages and not a single message. 11:00 We can also add a default message in here which will be present in each of our applications. 11:05 I need to set a role which is called system. 11:08 So the system prompt is a very important prompt, as you use it to give the LLM additional instructions. 11:14 So your question will be sent to the LLM together with your system prompt. 11:19 And in the system prompt you can ask for details or you can give the LLM a certain role. 11:24 Maybe you're building a chat application for an insurance company or a medical one. 11:28 You want to use the system prompt to make the LLM aware of the context of your question. 11:34 So we can set role to system. 11:37 We can then set content to be your actual system prompt. 11:41 And in here I can set something like "you are a helpful assistant and you'll be answering questions related to math". 11:55 Because we'll be using some tools later on which are related to mathematics. 12:00 So after setting the system prompt I can now start to create my function 12:05 which will be used to send a message to the large language model. 12:08 I'm going to create an async function because we need to await the LLM response. 12:15 And I'm going to call this sendMessage. 12:19 We don't need any input parameters because we can take the input message directly from state. 12:24 And in here we'll be setting a shadow message history. 12:29 So I'll be creating a const which I call messageHistory, which again is an array. 12:36 It includes all the previous messages we might have in state.
12:41 Including my system prompt or any other messages sent to the large language model. 12:46 And then also the most recent message, which is your question, which is coming directly from the state variable inputMessage. 12:54 And the role for this would be user, because you're sending the message. 12:58 And content would then be inputMessage. 13:01 So this is a shadow history because we're not using it to update the state. 13:05 We're using it to send a message to the large language model. 13:09 So in here we're going to take the response coming from the model by awaiting 13:14 the message function that we imported and created earlier on, 13:19 and this will take the shadow message history. 13:23 Once we get the response of course we want to update the state. 13:26 So if there is a response we actually want to update the messages, 13:33 but first we're going to update the shadow history. 13:36 So we're going to say messageHistory.push, 13:40 and the push will be our response. 13:44 I made a small typo there, so this should be response, 13:47 and now all the red lines are gone. 13:49 And finally if this is all done we're going to update our message history 13:55 to include the total shadow message history that we created here. 14:00 So once I save this we still need to hook up our function to the actual button. 14:04 So I can scroll down to our send button and I can add an onClick function here. 14:11 So whenever you click it it's going to send a message using the sendMessage function. 14:17 Format this code, and I make sure I start my application again, 14:21 because we killed the application process earlier on, 14:27 and now I should be able to go back to my browser and see the chat application. 14:34 You can see it still has the placeholder messages in there. 14:37 So what I'm going to do next, I'm going to make sure that all the messages that are being rendered 14:41 are actual messages that you sent or the LLM sent back to us.
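The sendMessage flow just described can be sketched as pure logic. Here callModel stands in for the server-side message() action, so the shadow-history behavior can be seen (and tested) without the SDK:

```typescript
type Message = { role: string; content?: string };

// A pure sketch of the sendMessage flow: build a shadow history, send it
// to the model, push the reply onto the shadow array, then hand the whole
// array back so the caller can do setMessages(messageHistory).
async function sendMessageFlow(
  current: Message[],
  inputMessage: string,
  callModel: (history: Message[]) => Promise<Message | undefined>
): Promise<Message[]> {
  // Shadow history: all previous messages plus the new user question.
  const messageHistory: Message[] = [
    ...current,
    { role: "user", content: inputMessage },
  ];
  const response = await callModel(messageHistory);
  if (response) {
    messageHistory.push(response); // update the shadow array first
  }
  return messageHistory; // caller updates React state with this array
}
```

The key point is that the shadow array, not React state, is what gets sent to the model, since state updates are asynchronous and would not yet include the new question.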
14:46 For this I'm going to this middle part, 14:49 and in here I need to check for the messages state. 14:52 I can put curly brackets around this and I can check that messages exists, 14:58 and also make sure messages.length is bigger than zero. 15:03 And once it's bigger than zero I can actually start to render these components. 15:10 Let me put this in the outer loop as well. 15:13 So I'm going to put this inside the div, which is the div we already had before. 15:19 And then inside this messages part I can start to map over all the messages that we have in history. 15:29 So in here I'm going to map over the different messages. 15:33 So I'm going to be calling a .map function, 15:39 and this .map function will handle the return. 15:45 Make sure all the parentheses are set up correctly, 15:47 so I have something like this. 15:51 So my return is here. 15:53 Let me break up the return into two returns as well, 15:56 because we have messages coming from the LLM and we have messages coming from ourselves. 16:04 Starting to look better. 16:05 Let me format this code. 16:06 And then the final bit we need to hook up here: 16:09 we need to check what the role is. 16:10 So the role of a message determines whether it's rendered on the left or on the right in our application, 16:16 and it also determines which label is being shown next to the message. 16:22 I can check for messages here. 16:23 So if role is equal to user I want to return this first message. 16:29 If role is equal to assistant, which is the large language model, 16:34 then I want to render the second type of message. 16:40 So these roles are pretty important in order to make sure that we return the correct message, 16:46 and then of course I want these to be dynamic. 16:48 I don't want the placeholder messages anymore. 16:50 Instead I want the actual content, either coming from the state or content coming from the large language model.
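The role-based mapping can be modeled as a pure function before wiring it into JSX. Which side each role renders on, the labels, and the skipping of system messages are assumptions for illustration; the role-plus-index key follows standard React practice for list rendering:

```typescript
type Message = { role: string; content?: string };

// A pure version of the .map in the JSX: turn each message into a render
// descriptor with a side, a label, and a unique key (role + index).
// System messages are not shown in the chat window (an assumption here).
function toRenderList(messages: Message[]) {
  return messages
    .map((message, index) => ({ message, index })) // keep the original index for keys
    .filter(({ message }) => message.role !== "system")
    .map(({ message, index }) => ({
      key: `${message.role}-${index}`,
      side: message.role === "user" ? "right" : "left",
      label: message.role === "user" ? "You" : "AI",
      content: message.content ?? "",
    }));
}
```

In the component, each descriptor would become one message bubble, with `key` passed to the element React returns from the map.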
17:02 As we're using React we need to make sure that each of the elements we return from a map function has its own key. 17:09 So we need to set a key here. 17:10 Otherwise you might be seeing errors in your developer console. 17:15 And for this we can use the role and the index, which is the index of the iteration over the different messages. 17:22 And we can use this same key construction for the different message types. 17:30 So now if I go back to the browser I should be able to see no messages, 17:33 and we can start typing and should see updates whenever we press the send message button. 17:43 So let me refresh the browser and our screen should be empty now. 17:47 I can start typing in this box. 17:48 I can ask the LLM: 17:50 "So hey, how are you doing?" 17:54 And then the LLM should respond with a message. 17:58 As you can see the LLM is pretty friendly. 18:00 It's taking the role of a helpful assistant. 18:03 And it's going to ask us how it can help with math questions. 18:07 Because we told the large language model it should be helping us with mathematical questions. 18:12 In order to work with mathematical questions we can set up tools later on, 18:16 but first I want to do some housekeeping. 18:18 I make sure that whenever we press this send button we're going to show a small loading 18:22 indicator so you know something's happening in the background. 18:27 For this I'm going to create a loading state. 18:31 I can call this isLoading. 18:33 And then I need a function of course to update the loading state, 18:36 so setIsLoading, 18:38 and I can default useState to false. 18:40 So by default my application isn't loading. 18:44 Whenever I press send message I want to set the loading state to true. 18:48 So I know the application is doing something in the background. 18:52 Make sure it is set to true. 18:54 And then whenever it finishes and has updated my message history I want to make sure that it's no longer in a loading state, 19:02 so I set this to false.
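The loading toggle can be wrapped in a small helper. The setter arguments below stand in for the React state setters, and the try/finally is an added safety measure (the transcript simply toggles the flag before and after the request); clearing the input afterwards matches the housekeeping described here:

```typescript
// Sketch of the loading-state housekeeping: flip isLoading on, run the
// send, clear the input, and guarantee the flag is reset even if the
// request fails. The setters stand in for React's useState setters.
async function withLoading(
  setIsLoading: (value: boolean) => void,
  setInputMessage: (value: string) => void,
  doSend: () => Promise<void>
): Promise<void> {
  setIsLoading(true); // disables the input and shows "loading..." on the button
  try {
    await doSend();
    setInputMessage(""); // empty the box for the next follow-up question
  } finally {
    setIsLoading(false); // never leave the UI stuck in a loading state
  }
}
```

Without the finally, a failed model call would leave the input permanently disabled.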
19:05 What else I want to do is, whenever I finish sending the message, 19:09 I want to make sure that my input message is being emptied out, 19:13 because this way we don't need to delete it every time we want to send a follow-up question. 19:18 Let me save this. 19:20 Let me also scroll down, because whenever I press the send message button I want the input field to be disabled. 19:27 So I can set a disabled tag here and make sure this is looking at isLoading. 19:33 And then on our button 19:34 we can also make sure we show a small loading indicator so you know something is happening in the background. 19:41 If the application is loading we want to set this to "loading" with three dots. 19:47 Otherwise the label for the button should still be "send". 19:52 I can do more things here as well. 19:54 For example I can also make sure that whenever I press enter it's going to send a question. 19:59 We're not going to be doing this for this application, but this 20:02 is something you can implement yourself by using the onKeyDown property. 20:09 So back in the browser we should now be able to see a small loading indicator whenever we ask a question. 20:17 I can ask another question: so what do you know about tool calling? 20:21 Which is what we will be implementing next. 20:27 As you can see, when I finish typing and press the button the 20:30 input field is now disabled and you can see a small loading state over here. 20:38 Looking at the LLM's response you can see it's also truncated. 20:41 Meaning that we need to increase the max token size that the LLM generates for us, 20:46 and you can do this in our actions.ts file, which we'll need to head over to anyway to add our tools. 20:58 In here you can see we are passing model parameters. 21:00 I have max tokens set to 200. 21:03 For example you can increase it to 400 and this way it shouldn't truncate any of the responses. 21:09 In the same file we're also going to set up our tools now.
21:12 To set up the tools you need to define a tool definition, 21:15 and the tool definition includes what your tool is and how the LLM should use it. 21:20 And this is really important, because otherwise the LLM might not pick the correct tool, 21:25 or it might not know it should use your tool to answer certain questions. 21:29 In order to do this I'm going to paste some code in here which defines a tool to add two numbers together. 21:39 So I have this tool which I call add, and the add tool adds the values of a and b to get a sum. 21:47 So it's going to take two input properties, which are a and b, and it's going to add these numbers together. 21:53 I'm defining tools as a constant here and I need to pass this to my chat function right there. 22:00 So I'm going to say messages, and then after messages comes tools. 22:06 I'm also going to set the tool choice option. 22:09 Meaning that I can force the LLM to use a certain tool, or I can just set it to auto, meaning that the LLM can use any tool. 22:19 I'm going to save this, and after saving this I still need to implement the logic 22:24 to call the correct tool when the LLM tells us a certain tool needs to be called. 22:29 When you work with tool calling, you, the application builder, still need to implement the logic to actually call the tools. 22:35 The LLM is only going to propose which tool you should call and how you should call that tool. 22:42 So instead of returning a message here, we need to look if the LLM is going to propose a tool call, 22:48 and if it does we need to execute that actual tool call. 22:51 For this we need a bit more code. 22:54 So instead of returning the message generated by the LLM, we're going to look for any tool calls in its response. 23:02 And we can actually console log this to see if it's going to propose a tool call 23:06 whenever we ask a question that's related to adding two numbers together. 23:12 So I'm going to console log this right here and then head over to my browser.
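A tool definition along the lines the transcript pastes in might look like this. The JSON-schema shape (type/function/parameters) follows the OpenAI-style tool format that chat APIs commonly accept; double-check the exact field names against the watsonx.ai documentation before relying on them:

```typescript
// Sketch of the "add" tool definition: the description and parameter
// schema are what the LLM reads to decide when and how to call the tool.
// The exact envelope field names are an assumption based on the common
// OpenAI-style tool format; verify against the watsonx.ai docs.
const tools = [
  {
    type: "function",
    function: {
      name: "add",
      description: "Adds the values of a and b to get a sum.",
      parameters: {
        type: "object",
        properties: {
          a: { type: "number", description: "First number to add" },
          b: { type: "number", description: "Second number to add" },
        },
        required: ["a", "b"],
      },
    },
  },
];
```

This constant is what gets passed to the chat call after the messages, alongside a tool-choice option such as "auto".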
23:20 Let me refresh the page so we have a fresh message history, and let me ask a question like: what is the outcome of 6 plus 6? 23:35 You can see it's not giving me any reply, because the LLM actually proposed to do a tool call. 23:39 If I go back to VS Code I can actually see what tool call it proposed to us. 23:46 If you look at the log here you can see we have a tool with a tool call ID. 23:51 We also have a function which is named add. 23:54 So this is the same as we have in our tool definition. 23:57 And then there are two arguments, which are a and b, and both are 6. 24:01 So we need to implement the logic right here to work with the add tool. 24:09 And this is what we're going to do next. 24:13 For this I'm first going to map over all the different tool calls, 24:16 and I'm going to do this by taking the tool call response from the LLM. 24:22 I'm going to look for its ID and then I'm also going to look for a function. 24:28 I'm going to check for the name add, because I already know I have a tool called add, 24:32 and then I'm going to look at the different arguments. 24:34 So the arguments are a and b, which in the previous case were both 6. 24:39 And I'm going to add these together in a final response, and this all together will be my tool response. 24:44 The large language model is going to look at my tool response and based on it it's going to 24:49 generate natural language that's going to be returned to you in the application. 24:59 Let me format this code. 25:01 Once we have the tool responses we should return a message history back 25:05 so the LLM can interpret the history and give us natural language in return. 25:11 For this I'm going to check the length of tool responses. 25:16 If it's bigger than zero I'm going to call the message function again. 25:19 So the message function will call itself, but now with an updated message history.
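The execution logic and the recursive re-call can be sketched together. The ToolCall shape below mirrors the logged response described in the video and should be treated as an assumption (in particular, arguments arriving as a JSON string), and callModel stands in for the real watsonx.ai request in actions.ts:

```typescript
// Shapes mirroring the console log shown in the video; the exact field
// names (id, function.name, function.arguments) are assumptions.
type ToolCall = {
  id: string;
  function: { name: string; arguments: string }; // arguments as a JSON string
};

type Message = {
  role: string;
  content?: string;
  tool_calls?: ToolCall[];
  tool_call_id?: string;
};

// Run the tools the LLM proposed. Only "add" exists in this sketch.
function executeToolCalls(toolCalls: ToolCall[]): Message[] {
  return toolCalls.map((toolCall) => {
    let result: number | undefined;
    if (toolCall.function.name === "add") {
      const { a, b } = JSON.parse(toolCall.function.arguments);
      result = a + b;
    }
    // Each result goes back as a role "tool" message carrying the id
    // of the call it answers.
    return { role: "tool", tool_call_id: toolCall.id, content: String(result) };
  });
}

// The recursive loop: if the response contains tool calls, append the
// LLM's tool request plus the tool responses to the history and call the
// model again, until it answers in natural language.
async function runWithTools(
  messages: Message[],
  callModel: (history: Message[]) => Promise<Message>
): Promise<Message> {
  const response = await callModel(messages);
  const toolResponses = response.tool_calls
    ? executeToolCalls(response.tool_calls)
    : [];
  if (toolResponses.length > 0) {
    return runWithTools([...messages, response, ...toolResponses], callModel);
  }
  return response; // the final natural-language answer
}
```

With a real model, the second pass sees the tool result in the history and replies with a sentence like "the outcome of 6 plus 6 is 12".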
25:25 It will include all the previous messages, it will include the tool call 25:29 the LLM proposed, and then it will include the response of the tool, 25:34 and then finally it's going to go through this loop again and in the 25:37 end it should return your final message, which is natural language. 25:42 So after saving this I can head back to the browser and I can ask the question again: what is the outcome of 6 plus 6? 25:51 And this time it should use the tool, it should look at the tool response, and finally show you that the outcome of 6 plus 6 is 12. 26:00 So you can easily see these tool calls get more complex over time, 26:03 and at some point you might see yourself creating tools for all the different mathematical options 26:08 like adding or multiplying or working with pi. 26:12 So instead of defining all your tools by hand you can also use community tools. 26:16 For example tools created by different frameworks, 26:19 and next we're going to use tools created in wxflows, which is a way to share 26:23 community tools, such as tools to do different mathematical calculations. 26:29 I'm going to kill the process running in my terminal 26:32 and I'm going to create a new directory which I call wxflows, which is where we'll be adding our tools. 26:39 We're going to run mkdir wxflows. 26:42 I'm going to move into this directory, and in here I need to use the wxflows CLI. 26:48 So if you install the wxflows CLI by looking at their documentation on GitHub, you can actually start to use it to import tools. 26:56 First let me check if I have the correct version installed, and this will validate that the CLI is installed without any issues, 27:05 and then I can use it to import a tool. 27:08 For example I can import a Wolfram Alpha tool which is used to do mathematical calculations. 27:15 Let me also create a project configuration, which I can do by running wxflows init. 27:22 And then it's going to ask me for a project name. 27:25 And I can call this api/watsonx-chat-app.
27:31 So the name of my endpoint or project will be the same as the project I created in here. 27:36 So this will make things easier for me in the future when I create multiple tool endpoints. 27:43 And now I can start importing a community tool. 27:45 The community tool I'm importing from GitHub is a tool to do mathematical calculations. 27:51 It might take a few seconds to import this tool, and then you can see within my 27:55 wxflows directory all sorts of files have been created. 28:00 The most important one is this tools.graphql file, where you can see we have a math tool. 28:06 It's able to perform calculations, data unit conversions and all sorts of formulas. 28:12 And you can also see it's using Wolfram Alpha under the covers, 28:15 which is a very nice way to work with different mathematical calculations. 28:20 I'm going to close this file and I'm going to run the wxflows deploy step. 28:25 So your tools created or imported using wxflows will be deployed to an endpoint. 28:30 And this endpoint is what we connect to in our chat application. 28:34 So I won't be defining all the tools locally, but instead I have them deployed to an endpoint 28:39 from which I can pull them in and where they will also be executed. 28:43 Make sure to remember this endpoint, because we need to add more environment variables to our file right here. 28:49 We need to add two additional environment variables, which are my wxflows API key and my wxflows endpoint. 29:01 You can get the endpoint by looking at your terminal, as it's being printed right there. 29:05 You can get your API key by running the command wxflows whoami --apikey. 29:12 So this should print your API key right here in the terminal, and after that you can paste it inside your environment file. 29:19 Make sure to save the environment file before continuing. 29:27 I'm going to move back into my project root, which is watsonx-chat-app.
Inside the actions.ts file I now need to start importing the wxflows SDK, which of course I need to install first. I can run the command npm install @wxflows/sdk, making sure to install the latest beta version, because this is a community project still in development.

After installing the SDK I can make some additions to my actions.ts file. In here I'm going to import wxflows from the library I just installed, and I also need to extend my message type to include a tool call id, which is used by the large language model to tell you which tool to call and how to call it. It is optional, because not every message will have a tool call included, and it will also be a string.

I can delete the tools I created previously, because we'll be using the tools we just deployed to wxflows, namely the mathematical tool that you just saw.

Inside my message function I can now instantiate the SDK. I'm going to do this right here, passing it my endpoint and API key, which are in the environment file we just set up. Using this tool client I can now retrieve tools: const tools = await toolClient.tools, with await because it takes some time to resolve. These tools are the tools I just deployed, which is my mathematical tool.

You don't need to change anything in your request yet, because it still knows there's a tool. The variable is still called tools, and we still want the LLM to decide for itself whether it's going to call one tool or another, or no tools at all.

We do need to make some changes in this if-else statement, where we're checking for all the potential tool calls and then executing them.
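The extended message type can be sketched as below. The field name `tool_call_id` follows the transcript; the wxflows client setup is shown in comments because the exact constructor shape of `@wxflows/sdk` may differ from this guess.

```typescript
// Message type extended with an optional tool_call_id string, as described above.
type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content?: string;      // optional: a tool-call message may carry no text
  tool_call_id?: string; // optional: only tool-related messages have one
};

// The tool client is then created from the environment variables set up earlier
// (constructor arguments are an assumption based on the transcript):
//
//   import wxflows from "@wxflows/sdk";
//   const toolClient = new wxflows(
//     process.env.WXFLOWS_ENDPOINT!,
//     process.env.WXFLOWS_APIKEY!,
//   );
//   const tools = await toolClient.tools; // the deployed math tool

const history: Message[] = [
  { role: "user", content: "What is the outcome of 6 plus 6?" },
  { role: "tool", content: "12", tool_call_id: "call_0" },
];
console.log(history.filter((m) => m.tool_call_id !== undefined).length);
```

Making `tool_call_id` optional keeps ordinary user and assistant messages unchanged while letting tool results link back to the call that produced them.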
By using wxflows and the community tools we don't need to set up any of this tool-call logic ourselves; instead, we let the wxflows SDK handle it for us. So I'm going to delete all this code and replace it with the following. I'm still creating a tool response as a constant, but this time it uses the tool client to execute the tools based on the entire chat history. Of course, make sure to delete all the previous code we no longer need here, so all the old tool response logic can go.

Make sure to save this and then run your application again, because we previously killed the process running our frontend app. This time it will be available in the browser again, where you should refresh the page so you don't have any old data in there.

You can ask another question, like what is the outcome of 6 plus 6, and it should render the response right here in the browser. But this time it's not using the add tool that we created ourselves; it's using the math community tools from wxflows.

You can see it's not actually producing the result, because there was an error while calling the tool. This sometimes happens, and my advice would be to refresh the browser and bear with the large language models, which can be a bit picky from day to day. So I'm going to refresh my browser and ask the same question again. This time you can see it's actually generating the outcome of 6 plus 6, which is 12. You can see there are also links in there: the Wolfram Alpha tool doesn't only give you the result, it also gives you more information on how it came to the conclusion.

This means you can also ask more complex things. So let's try another question, like: what is the square root of something more complex? Something we wouldn't be able to do with our add tool directly.
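The replacement of the hand-written if-else can be sketched as follows. The method name `executeTools` and its signature are assumptions implied by the transcript, so the tool client is stubbed here to make the control flow visible and runnable.

```typescript
type ChatMessage = { role: string; content?: string; tool_call_id?: string };

// Stub standing in for the wxflows tool client: given the entire chat
// history, it executes any proposed tool calls and returns tool-result
// messages (here it fakes the math tool for the 6 + 6 question).
const toolClient = {
  async executeTools(messages: ChatMessage[]): Promise<ChatMessage[]> {
    const wantsTool = messages.some((m) => m.content?.includes("6 plus 6"));
    return wantsTool ? [{ role: "tool", content: "12", tool_call_id: "call_0" }] : [];
  },
};

async function withToolResponses(messages: ChatMessage[]): Promise<ChatMessage[]> {
  // Instead of a hand-written branch per tool, delegate to the tool client:
  const toolResponses = await toolClient.executeTools(messages);
  return [...messages, ...toolResponses]; // history the LLM sees on the next pass
}

withToolResponses([{ role: "user", content: "What is the outcome of 6 plus 6?" }]).then(
  (h) => console.log(h.length),
);
```

The benefit is that adding a new community tool changes nothing in this code path; the deployed endpoint decides which tools exist and how they run.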
I'm also going to make sure I refresh the page, because I don't want old message history to complicate my response. So: what is the square root of the third decimal of pi times 10? This is quite a complex question, and I wouldn't expect the LLM to be able to answer it without doing any tool calls.

As you can see, it's not able to do it on the first go, so I always advise you to just try again. You can see it's generating the tool call, but somehow it doesn't seem to execute it directly. What I advise you to do in this scenario is go back to the application and run npm run dev again, because sometimes the models get too confused and we need to restart the entire application. We go back to our chat application, ask the same question again, and this time we should get the response we expect, the answer to what is the square root of the third decimal of pi times 10, and now you can see we're getting the answer, which is 17.77.

Of course there's much more you can do with the watsonx.ai SDK. For example, you can implement streaming, or you can have a chat with images. And that's how easy it is to build your AI applications using the watsonx.ai SDK. If you want to continue building, make sure to look at the code, which you can find in the description of this video.