Building Your First IBM LLM Agent

Key Points

  • IBM's Bee Agent Framework provides a TypeScript‑based, plug‑and‑play environment for building ReAct‑style LLM‑powered agents, with support for multiple LLM adapters, tools, memory, and logging.
  • You can stream responses from any supported model (e.g., Llama 3.1 70B Instruct via watsonx.ai) by configuring API keys, importing the appropriate LLM class, and consuming the `llm.stream` method with a simple prompt.
  • After setting up a Node project (using Yarn, tsx, and the IBM Generative AI Node SDK), the author demonstrates how to run a basic streaming script (`flow.ts`) to verify connectivity and output generation.
  • To evolve the script into a functional agent, the tutorial introduces the `BeeAgent` class, adds token‑based memory, and shows how function‑calling capabilities enable the agent to perform actions beyond static text generation.
  • The overall workflow is broken into three steps: initialize the project, generate streaming answers, and then extend the agent with memory and function calls to create a more interactive, real‑world‑ready LLM agent.
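Conceptually, the step-one streaming loop looks like the sketch below. To keep it runnable without watsonx.ai credentials, a stub async generator stands in for the model's streaming call; the `Chunk` shape, token values, and prompt are illustrative assumptions, not the framework's actual types.

```typescript
// Sketch of the step-one streaming loop, runnable without watsonx.ai
// credentials: a stub async generator stands in for the chat model's
// streaming method, which is consumed the same way (an async iterable).
interface Chunk {
  text: string;
}

// Stand-in for the LLM's streaming call: yields tokens one at a time.
async function* fakeStream(prompt: string): AsyncGenerator<Chunk> {
  for (const token of ["Nicholas ", "Renotte ", "is ", "a ", "developer."]) {
    yield { text: token };
  }
}

// Consume the stream chunk by chunk, as you would with a real
// streaming LLM response.
async function main(): Promise<string> {
  let answer = "";
  for await (const chunk of fakeStream("Who is Nicholas Renotte?")) {
    console.log(chunk.text); // each token arrives as its own chunk
    answer += chunk.text;
  }
  return answer;
}

main().then((answer) => console.log("Full answer:", answer));
```

The same `for await` pattern works for any adapter that returns an async iterable of chunks, which is what makes swapping LLM providers cheap.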

Full Transcript

# Building Your First IBM LLM Agent

**Source:** [https://www.youtube.com/watch?v=C-pZXA6Te_o](https://www.youtube.com/watch?v=C-pZXA6Te_o)
**Duration:** 00:04:19

## Sections

- [00:00:00](https://www.youtube.com/watch?v=C-pZXA6Te_o&t=0s) **Building an LLM Agent with IBM Framework** - The speaker walks through creating a TypeScript‑based, streaming LLM agent using IBM's Bee Agent Framework: setting up environment variables, selecting adapters, and configuring a Llama 3.1 70B model for tool use, memory, and logging.

## Full Transcript
This is how to build your first LLM-powered agent using the IBM framework. Our research labs have been cooking up a ReAct agent framework. It allows you to use a bunch of tools, work with different LLMs, and use memory and logging: pretty much everything needed to have a great agent. But it gets better: there's a bunch of features that are going to help you if you're trying to do this for real. I'll come back to these in a sec. I'm going to show you everything you need to know about it in a few minutes. You're probably thinking: why wouldn't I just use LangChain? Can I use open-source LLMs, or is it limited to IBM stuff? It's written in TypeScript, and I haven't touched the language since my startup crashed and burned, so this is going to be fun. I'm going to break it down in just three simple steps, and it begins with generating an answer using streaming.

Now, my code just wants to belong, so I'm going to create a new file called flow.ts to hold it. There are a number of LLM adapters in the Bee framework, including Groq, LangChain, Ollama, OpenAI, and watsonx.ai. I'm going to use the latter. I've already got my API key and project ID; I'll make them available to the process by using the dotenv config method, and while I'm at it I'll bring in the WatsonX chat LLM class so I can connect to an LLM on watsonx.ai. Now, rather than using any old model, I can specify the Llama 3.1 70B Instruct preset via the fromPreset method. Here I can also set parameters like the decoding method and the max tokens. My goal right now is just to generate output using a prompt. To do this, I'm going to use the llm.stream method. Given I'm coding in TypeScript, I need to import the BaseMessage and Role primitives to form a prompt. I can then throw together an asynchronous function, call the LLM stream method, and pass through my prompt: who is Nicholas Renotte? But, just like that time I ate a Carolina Reaper before a 16-hour flight, I've made a catastrophic error: I haven't initialized my project yet or installed any dependencies. Let's fix that. I'll initialize the Node project by running yarn init, and install tsx, the Bee Agent framework, and the IBM Generative AI Node SDK. I'm going to make one quick tweak to the package.json file and add a script called flow, which runs the flow.ts file using tsx. If I run yarn run flow, I get an okay result from the LLM, but it doesn't really know me, and it doesn't have access to the net to find out. So I've got the streaming working, but let's be real: you're watching this for agents, and so far I haven't quite delivered. This is all about to change in part two: building an agent with function calling.

Let's change it up. I can bring in the BeeAgent from the framework and begin creating a new instance of the LLM agent. The first parameter I'll pass is my existing LLM. Sometimes you'd rather not know who and what you texted after a big night out, but for our LLM, adding memory is going to provide a lot of context, so I'll import the TokenMemory class and add it to the agent. Now, tools: I can bring in the DuckDuckGo search tool to access the net and the Open-Meteo tool for all things weather. Again, I'll add these as a new parameter to the agent. Now, to bring it all together, we'll get rid of the function we wrote for the baseline generation and use the agent. There are two methods I need to run the agent: run and observe. To the run method I'll pass the prompt and execution parameters like the number of retries. Then I'll use the emitter; this allows me to see what's happening at each stage of the agentic workflow. Each time there's a completed action, I'll be able to see the status by observing the update action. In this case, I'll console log the update key and the update value. This will show things like the output from the function and the final response. And last but not least, I'll console log the final result text for good measure. I can ask when IBM was founded, and after searching the net using DuckDuckGo, we get the correct response. Likewise, if we want the agent to use the weather tool, I can ask what the weather is like in New York, and we get a valid response by leveraging Open-Meteo.

These tools are slick: we can search the net, call an API. But what if I wanted to write some code or execute some logic using a code interpreter? This brings me to part three: adding a Python code interpreter. First up, I need to bring in the Python tool and the local Python storage class. The Python tool will be used to execute code, and the storage component allows for code to be read and output locally. Now, to configure the Python tool, I'm setting up the code interpreter URL and the storage locations. This tells my agent where to run Python code and where to store any files it might create or read. Given we're running our agent in TypeScript, we need somewhere to execute Python code. The Bee Agent framework comes with a standalone code interpreter which can be run via Docker. I haven't done this yet, so let's go do that. The Dockerfile is available via the Bee Agent GitHub repo: first clone it, cd into it, and install any remaining dependencies. Then I can spin up the container using yarn run infra start code interpreter. Then, if I change the prompt to something like "is 3 a prime number", I can run it using yarn run flow, and finally we get the correct result.

Hey guys, editing Nick here. I hope you enjoyed the video. Let me know what you thought in the comments, and the code will be down below.
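The project setup and code-interpreter steps described in the transcript can be sketched as shell commands. The npm package names, repository URL, and infra script name are assumptions based on the transcript and the framework's public packaging; check the repo's README for the exact invocations.

```shell
# Initialize the Node project and add the dependencies mentioned in the
# video (package names are assumptions based on the transcript).
yarn init -y
yarn add tsx bee-agent-framework @ibm-generative-ai/node-sdk

# Add a "flow" script to package.json so `yarn run flow` executes flow.ts:
#   "scripts": { "flow": "tsx flow.ts" }
yarn run flow

# Part three: run the standalone code interpreter via Docker.
# Clone the framework repo, install dependencies, and start the container
# (repo URL and script name may differ from the current release).
git clone https://github.com/i-am-bee/bee-agent-framework.git
cd bee-agent-framework
yarn install
yarn run infra:start   # spins up the code-interpreter container
```

The interpreter URL printed by the infra script is what you would pass to the Python tool's configuration in flow.ts, along with local storage paths for files the agent creates or reads.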
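The run-and-observe pattern from part two can be sketched self-contained: a stub agent emits "update" events the way the transcript describes the real agent's emitter doing, so the observer wiring runs without any tools or credentials. The event keys and return shape here are illustrative assumptions, not the framework's actual API.

```typescript
// Self-contained sketch of the run-and-observe pattern from part two.
// A stub agent emits "update" events at each stage of a pretend ReAct
// loop so the observer wiring can run without tools or credentials.
type Update = { key: string; value: string };

class StubAgent {
  private listeners: Array<(u: Update) => void> = [];

  // Register a callback for every intermediate update, like the
  // emitter-based observe step described in the transcript.
  observe(listener: (u: Update) => void): this {
    this.listeners.push(listener);
    return this;
  }

  async run(prompt: string): Promise<{ result: { text: string } }> {
    const steps: Update[] = [
      { key: "thought", value: `Searching the web for: ${prompt}` },
      { key: "tool_output", value: "IBM was founded in 1911." },
      { key: "final_answer", value: "IBM was founded in 1911." },
    ];
    for (const update of steps) {
      for (const listener of this.listeners) listener(update);
    }
    return { result: { text: steps[steps.length - 1].value } };
  }
}

const agent = new StubAgent();
agent.observe((u) => console.log(`${u.key}: ${u.value}`)); // log each stage
agent.run("When was IBM founded?").then((r) => console.log(r.result.text));
```

Logging each update key/value pair like this is what surfaces the tool output and final response during the agentic workflow, mirroring the console logging the video adds before printing the final result text.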