# Building a Generative AI Pet Naming App

**Source:** [https://www.youtube.com/watch?v=2hB3XzfpGtI](https://www.youtube.com/watch?v=2hB3XzfpGtI)
**Duration:** 01:05:01

## Summary

- David Levy demonstrates building a full‑stack AI‑powered app with a React TypeScript UI, a TypeScript Express server, and a Python FastAPI backend to generate pet‑name suggestions.
- The app collects pet descriptions, sends them to a generative LLM, and returns a creative name with an explanation (e.g., “Lady Gobbledygawk”).
- He explains prompt engineering in watsonx.ai Prompt Lab, using clear instructions and few‑shot examples to shape LLM output, and shows how to adjust model parameters and view generated code.
- The tutorial walks through cloning the repository, creating a Python virtual environment, and installing the FastAPI dependencies to prepare the backend for integration.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=0s) **Building a Generative AI Pet Naming App** - IBM Technology Engineer David Levy demonstrates how to create a React TypeScript UI, a TypeScript Express server, and a Python FastAPI backend that leverage watsonx.ai prompt engineering to generate pet name suggestions with explanatory reasons.
- [00:03:15](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=195s) **Setting Up FastAPI with Watsonx** - The speaker walks through activating a virtual environment, installing dependencies, configuring a .env file with Watsonx API credentials, launching the FastAPI via Uvicorn, and verifying the health and summary endpoints on Swagger UI.
- [00:06:19](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=379s) **FastAPI Structure with Prompt Lab Integration** - The speaker explains how they organize a FastAPI app by mirroring watsonx.ai Prompt Lab configurations—models, parameters, and few‑shot examples—into data directories for rapid, iterative development.
- [00:09:32](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=572s) **Integrating Watsonx AI with LangChain** - The speaker explains how the `generate_text_response` utility builds a prompt, optionally includes few‑shot examples, retrieves a model via `ModelRequest.get_model` using the watsonx.ai SDK, wraps it in a `watsonxLLM` for LangChain compatibility, and assembles the full chain with an output parser.
- [00:12:39](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=759s) **Defining Typed JSON Responses with FastAPI** - The speaker demonstrates importing a Pydantic class, setting it as the `response_model` for a FastAPI route, and using it to enforce a specific JSON output shape (e.g., a `generated_text` string) for easier team coordination.
- [00:15:47](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=947s) **Integrating Pydantic JSON Response** - The speaker walks through importing the GeneratePetNameResponse schema, modifying the endpoint to use LangChain’s PydanticOutputParser with format instructions, and switching from a text parser to a JSON response parser for the generate_pet_name route.
- [00:19:01](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1141s) **Transferring Prompt Lab to Code** - The speaker explains how to extract a prompt’s instructions, few‑shot examples, and model parameters (via Curl) from Watsonx.ai Prompt Lab and recreate them as new JSON, example, and template files, swapping the IBM Granite model for Mixtral.
- [00:22:14](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1334s) **Integrating Format Instructions in API** - The speaker demonstrates adding a format_instructions field, embedding it with prompt examples and a PydanticOutputParser, returning a dictionary containing generated_text, and validating the response against Swagger documentation.
- [00:25:26](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1526s) **Setting Up Env Vars for Integration** - The speaker explains running a setup command to create example environment files that link the React UI, Express server, and FastAPI, enabling each component to communicate through defined endpoints.
- [00:28:31](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1711s) **Creating a New Pet Namer Route** - The speaker walks through adding a petNamerRoutes.ts file, importing Axios, loading the FastAPI API_URL from process.env, and defining an async POST handler (generate_pet_name) that forwards requests to the FastAPI with basic try/catch error handling.
- [00:31:49](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1909s) **Routing UI Data to FastAPI** - The speaker details how to capture dynamic UI input in an Express route, forward it as a POST request to a FastAPI endpoint using an environment‑based URL, and return the generated text (name and description) from the response.
- [00:35:09](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2109s) **Validating POST Endpoint via Postman** - The speaker explains sending UI data with Axios to an Express route that returns generated text, then uses Postman to test the health and pet‑naming endpoints before integrating them with a FastAPI backend.
- [00:38:14](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2294s) **Using Carbon React for UI** - The speaker demonstrates how to employ the Carbon design system in a React project—adding headings, combo boxes, checkboxes, and other components—while navigating the file structure and explaining the steps for a less‑experienced frontend developer.
- [00:41:19](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2479s) **Simple ID/Text List Filtering** - The speaker explains how to populate a Carbon UI component with an array of items containing IDs and text, enabling built‑in filtering and state management, typically sourced from an API.
- [00:44:29](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2669s) **Toggle Gender Checkbox and Tag Input** - The speaker explains a Boolean‑based gender toggle that disables during API loading and a custom Carbon‑styled input that creates descriptor tags when entered.
- [00:47:33](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2853s) **Enter-Key Input Handling and Tile Rendering** - The speaker explains adding an onKeyDown Enter listener to submit text, dynamically enabling/disabling a button based on input state, storing entries in a descriptor array, and rendering each entry as a Carbon Tile.
- [00:50:36](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3036s) **Implementing Carbon Button Set** - The speaker notes a personal liking for box‑shadow, then walks through adding a Carbon React button set with primary “Submit” and secondary “Clear” buttons, configuring their kinds and spacing, and wiring simple state‑reset functions for clearing and submitting.
- [00:53:41](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3221s) **Implementing Loading State with Accordion** - The speaker explains adding a loading state using accordion skeletons, constructing a comma‑separated descriptor string, and wiring the API call to display results.
- [00:57:00](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3420s) **Extracting API Response Data** - The speaker explains how to use TypeScript and Axios in a React UI to access and name the returned fields—generated text, name, description, and the original request—after a successful call to the Express server.
- [01:00:08](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3608s) **React to FastAPI Request Cycle** - The speaker explains how a React submit handler toggles loading, routes a request through an Express server to FastAPI, receives and displays the generated response and original payload, and outlines error handling for cases where the LLM does not return proper JSON.
- [01:03:21](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3801s) **Debugging Express Server Errors** - The speaker forces a 500 response to test error handling, fixes the Express server, and urges viewers to expand the full‑stack generative AI demo with new routes, prompts, and UI.

## Full Transcript
Hi, my name is David Levy.
I’m a Technology Engineer with IBM, and today we’re going to build an application on AI Applied.
Today I’m going to walk you through the process of creating a React TypeScript UI;
...setting up a TypeScript Express Server for handling UI logic;
...and integrating it with a Python FastAPI backend,
...that leverages Generative AI capabilities by creating a pet naming suggestion application.
Let’s get started.
So when we come here, we can see right away that this is our application.
It’s asking us to describe our future pet, and it’s going to give us a suggestion for its name and also a reason why it did it.
So I think I’m going to get a dog.
It’s going to be a girl.
And she’s going to be sweet; cute; cowardly; sleeps under my bed; and keeps me up at night by talking to me.
Very weird animal.
So let’s see what kind of name we get back,
...Lady Gobbledygawk, and then it gives us a reason why it named it such.
Now in order to get an LLM to respond in such a way, we’re going to have to work with a few different things.
One is prompt engineering, and the way we’re going to do that is we’re going to go to the watsonx.ai Prompt Lab.
So let’s start there.
First I’m going to show you a Summary Generation Prompt example.
So right now I’m asking it to summarize the history of bicycles.
And when I click generate, it gives me exactly what I want.
And the way I’ve accomplished this is first by giving a clear and concise instruction,
...and then also using something called few-shot examples.
So we give a long string that we want to be summarized, and then we give the example of what we’re expecting back.
So we do that a couple of times so when we ask it to do live generation, it provides the answer to us exactly the way we want it.
One of the things I want to show you within the watsonx.ai Prompt Lab is that we have options for model parameters,
...and also we have a view of the code, which is going to come in handy when we want to transpose this to our FastAPI.
So let’s clone our repo and we’ll start working on the FastAPI.
So when working with Python, it is a good idea to set up a virtual environment.
The way we could do that is with a couple of commands – I already have one set up, and we’re going to just activate it,
...and then we’re going to install the requirements file that’s located inside of the repo.
Perfect.
So now that we have the GitHub repo cloned on our machine, let’s first install the Python dependencies.
When dealing with Python, what is good practice to do is install dependencies of any kind of project into its own virtual environment.
And what this is going to do is it's going to create its own segmented environment with no packages installed at all,
...which we can then use to install all the unique packages that we have in this particular repo.
So now that we’ve created the virtual environment, Applied AI, let’s activate it.
And we can see right on the screen that we’re now within that virtual environment.
And the next thing we have to do is install the requirements.txt, which is just a frozen dependency requirement file.
So we’re just going to go ahead and install it into this particular virtual environment.
Now that we’ve installed the dependencies for our FastAPI, we’re going to copy the .env.example and create a new .env.
In the .env we’re going to want to grab the API key, the Project ID, and the URL that we can find in the watsonx.ai Prompt Lab.
So if we go to the code and we open it up, we can grab the Project ID right from the Curl.
And for the API key, we go back to our Identity & Access Management from our cloud.
We get there by the Manage dropdown, go to API keys, and create a new API key.
Never share your API key.
Don’t ever push it up to GitHub. You’ll get in trouble.
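The finished .env would look something like this (placeholder values; the exact variable names come from the repo’s .env.example, so check yours, and the URL is region‑specific):

```
WATSONX_APIKEY=your-api-key-here
WATSONX_PROJECT_ID=your-project-id-here
WATSONX_URL=https://us-south.ml.cloud.ibm.com
```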
And now that we have this, we can start up the application.
So from our API directory in the code repo, we can run the command `uvicorn server:app --reload` to get fast reloads.
This is very good for development.
Then we can go to our Swagger UI, and you can see that we have a couple of prebuilt routes in the repo we’ve provided for you.
We have a health route just to make sure that the FastAPI is up, and you can see that it’s up.
And then we also have that generate_summary endpoint.
And we have that there just to recreate exactly what I showed you from the Prompt Lab where you have instructions,
...you have a few-shot examples, you have the data, you send the data and you get a new summarization.
I wanted to show how to actually apply that in the FastAPI.
And with FastAPI you have these routes that you could test against, and you can see exactly what we’re looking for.
So we’re looking for a template model, we’re looking for a prompt template name, and then we’re looking for additional kwargs.
And I’m going to show you exactly how to use it from this Swagger UI.
So we’re going to send over the exact thing that we had, the history of the bicycle.
We’re going to send it over as data to our FastAPI and we’re going to ask it to summarize it.
And now it’s hitting our FastAPI and it provides us a nice summarization.
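The request body for that call looks roughly like this (field names taken from the route’s parameters as described in this walkthrough; the kwargs carrying a data field is an assumption, and the text is truncated):

```json
{
  "template_model": "generate_summary",
  "prompt_template_name": "generate_summary",
  "prompt_template_kwargs": {
    "data": "The history of the bicycle began in the 19th century ..."
  }
}
```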
How do we do that?
There’s a couple of components within the FastAPI that I want to show you.
So if you look at the way the application is structured, we have a data directory.
In the data directory, we have a couple of additional directories,
...one called examples, one called models, and one called prompt templates.
For each of these, I’ve used the Prompt Lab to dictate what I put in there.
So I’m basically just transposing what I’ve done in the watsonx.ai Prompt Lab into the application,
...and that makes this process so much easier.
So if we look at the model, we can see all the parameters;
...the ID for the model that we’re using, the parameters for decoding, and min new tokens, et cetera.
And if we look at our Prompt Lab and open up the view code, we can see all the same information.
So basically what I did was I copied what was successful for me in the Prompt Lab,
...transposed it into my application, into that model JSON, and used that as the model parameters for my endpoint call.
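A model file along these lines (the values here are illustrative; the real ones are copied from the Prompt Lab’s View code panel):

```json
{
  "model_id": "ibm/granite-13b-instruct-v2",
  "parameters": {
    "decoding_method": "greedy",
    "min_new_tokens": 30,
    "max_new_tokens": 300
  }
}
```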
Similarly, we have all of these examples.
We’re using few-shot to implement better responses from the LLM.
So the way we can do that is we similarly name it generate_summary as a text file.
And we have these examples, identical to the ones we have here.
And the reason I’m doing this is because we’re going to be building this iteratively.
Like we want to be able to go directly from watsonx.ai,
...into our code base and then integrate it with the UI and just be able to work really, really quickly.
It’s just a very nice pattern to use to get really great results when dealing with something like an LLM.
Lastly, we have the prompt template.
Now a prompt template is something that we can use to give instructions, use the examples that we’ve placed,
...and take the data that we’re sending via the API and send all of that to the LLM to return back the responses we’re looking for.
So if we look at that, we have the instruction, summarize the following text; we have the examples; we have the input,
...which is that data that we’re sending directly from that endpoint that we showed in the Swagger docs.
And that’s exactly how we implement it.
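Putting those three pieces together, the template file might be shaped like this (the placeholder names are illustrative, not necessarily the repo’s exact ones):

```
Summarize the following text.

{examples}

Input: {input}
Output:
```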
Now what we’re using it as is in the server.py file.
Here you could see the generate_summary endpoint.
We’re looking for three parameters; the template model, the prompt template name, and the prompt template kwargs.
Now the template model is defaulting to generate_summary,
...and that is because in the data directory those are the names of the files that we’re using.
If you look at the functionality in the repo when you get it, you’ll see that we’re looking at these directories.
We’re grabbing the names of those directories and using those,
...as the key to a dictionary that we’re going to return back to us when we hit it with the correct parameter.
So if we look for something that says generate_summary, it’s going to grab the examples generate_summary,
...it’s going to grab the model generate_summary, and it’s going to grab the prompt template.
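The lookup by name described above can be sketched with the standard library; the directory layout is assumed from the walkthrough, and the repo’s actual helper may differ:

```python
import json
from pathlib import Path

# Assumed layout: data/models, data/examples, data/prompt_templates,
# each holding files that share a name, e.g. generate_summary.
DATA_DIR = Path("data")

def load_prompt_assets(name: str) -> dict:
    """Gather the model config, optional few-shot examples, and prompt
    template that share the given name across the data directories."""
    model = json.loads((DATA_DIR / "models" / f"{name}.json").read_text())
    template = (DATA_DIR / "prompt_templates" / f"{name}.txt").read_text()
    examples_path = DATA_DIR / "examples" / f"{name}.txt"
    # Few-shot examples are optional; not every call needs them.
    examples = examples_path.read_text() if examples_path.exists() else ""
    return {"model": model, "template": template, "examples": examples}
```

Keying everything off one shared name is what lets a single parameter like `generate_summary` pull in the matching model, examples, and template.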
And from here we have a utility function called generate_text_response.
And from here there’s a lot of functionality going on that does exactly what I said.
It’s going to say, look at that template model that we’re sending in a parameter and grab that model information from that JSON.
Same with the prompt_template_request. And same with the examples.
A little bit different with the examples, though,
...because you’re not going to want to use examples for every API call or every watsonx.ai call that you make.
So we have the option just to leave that as blank.
So if you don’t have any few-shot examples, no problem.
And after this, we’re going to use this method called get_model that’s in a class called ModelRequest.
Now this is where we’re going to be working directly with the watsonx.ai SDK.
So you could look at exactly what we’re doing with the documentation on watsonx.ai,
...but very basically you have the model_id, which is the model name, your credentials, and your parameters and your project id.
Now a little bit different here is that we’re going to add this into a LangChain invocation,
...so we have to wrap that model inference within a watsonxLLM wrapper, which makes it runnable within the LangChain.
So if we go back to the server, we can see that we have this chain, which is starting with the prompt_template,
...grabbing that model, and then ending it with the output_parser.
So the output_parser is something that’s provided to us by LangChain, and it’s going to provide a string from the response.
And once that chain is assembled, we invoke it, and the function returns back something called generated_txt,
...which is just a string, which you can see in this response body.
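Stripped of the SDK specifics, the utility is a three-step pipeline: fill the template, call the model, parse the output. A plain-Python sketch of that flow, with `call_model` as a canned stand-in for the real WatsonxLLM invocation:

```python
def build_prompt(template: str, **kwargs) -> str:
    # Interpolate examples and data into the prompt template.
    return template.format(**kwargs)

def call_model(prompt: str) -> str:
    # Stand-in for the WatsonxLLM call; returns a canned reply so the
    # pipeline shape is visible without credentials.
    return f"SUMMARY OF: {prompt}"

def parse_output(raw: str) -> str:
    # Mirrors the role of LangChain's StrOutputParser: hand back a string.
    return raw.strip()

def generate_text_response(template: str, **kwargs) -> str:
    # prompt template -> model -> output parser, as in the chain above.
    return parse_output(call_model(build_prompt(template, **kwargs)))
```

In the real app the middle step is the LangChain-wrapped watsonx.ai model; everything else is just string plumbing.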
So the next thing we’re going to do is now create a new endpoint called pet namer.
Now the reason this is going to be a little bit different is that we’re going to want to coerce the model to return back JSON.
And if we know LLMs, they’re not the easiest to ensure that we always get the data structure we’re expecting.
So what we’re going to pull in is something called a PydanticOutputParser.
Now we have this here, it’s part of the LangChain package,
...but Pydantic is used throughout this application and I could even show you something really quickly.
So if we go to the generate_summary endpoint,
...and we want to add a new class to ensure that everything that comes back as a response from generate_summary is in this JSON format,
...we can add a new class using Pydantic.
So let’s try that out really quickly.
I’ll add a generate_summary_response.py.
And we’ll open up one that already exists and we’re going to rename it from json_response to generate_summary_response.
We’re going to only be using the base model.
And what we want to return back is an object that has a generated_text field that returns back a string.
Now what we could do here, let’s import it into our init.
So from generate_summary_response we’re going to import the generate_summary_response.
And if we go back to server, let’s import it as one of our schemas.
And if we go back to the generate_summary,
...we can say that every response we want from this route, we want it to be in that exact shape, that exact JSON shape.
And now this is good for obviously a mixed team,
...because if we know exactly what’s coming in and exactly what’s coming out, it makes coordinating our code much, much easier.
So what we’re going to do is we’re going to take that class and we’re going to add it as a response_model to our endpoint.
And when we go back to our FastAPI docs,
...we can see that now the expected output is going to be this JSON object with a generated_text field, guaranteeing a string output.
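That response shape can be sketched as a Pydantic class, assuming the field name generated_text from the walkthrough (the route decorator is shown as a comment because it needs the FastAPI app object):

```python
from pydantic import BaseModel

class GenerateSummaryResponse(BaseModel):
    # The only field the route promises to return.
    generated_text: str

# In server.py the class is attached to the route, so FastAPI both
# validates the return value and documents the shape in Swagger:
# @app.post("/generate_summary", response_model=GenerateSummaryResponse)
```

With response_model set, FastAPI will reject any return value that does not match this schema, which is exactly the guarantee being described.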
The way we can utilize this pydantic class pattern to coerce the model to return back JSON objects,
...is by using the same exact pattern, and instead of doing just the straight string output parser,
...we’re going to use a pydantic output parser to say, this is what we want the response to look like.
So let’s create that right now.
We’re just going to copy and paste our generate_summary route and rename it to generate_pet_name.
And if we look at the way that we have the GenerateSummaryResponse, we could take something that I’ve already written out,
...which is called the JsonResponseTemplate, and we’re going to add another class called GeneratePetNameResponse.
And we’re going to say generated_text equals this JSONResponse class.
So the output of this generate_pet_name is going to be almost identical to this object,
...which is going to give us a name and a description, but it’s going to be a field within an object that we’re returning.
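A sketch of the two classes just described, using the field names from the video (the repo’s actual definitions may differ slightly):

```python
from pydantic import BaseModel

class JsonResponseTemplate(BaseModel):
    # The inner object we want the LLM itself to produce.
    name: str
    description: str

class GeneratePetNameResponse(BaseModel):
    # The route's envelope: the LLM's JSON lands under generated_text.
    generated_text: JsonResponseTemplate
```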
So if we want to see that in action, we can see exactly what we’re going to get back.
Hold on, let me just rename this to generate_pet_name.
We’re going to reclassify this route as a generate pet name based on some descriptors.
And now we could look at our Swagger documentation,
...refresh it, we’re going to have a brand new route in here called generate_pet_name.
And if you look at the shape of the data that we’re supposed to get back – hold on, we actually have to import it first.
So first we just have to grab the GeneratePetNameResponse from the schema,
...and import it into our server.py to be used by our generate_pet_name endpoint.
So let’s go to our GeneratePetNameResponse and add it.
Everything looks good.
And now we should see exactly what we’re looking for.
We’re telling anyone who’s using this route that we’re going to receive something that looks exactly like this.
It’s going to be a JSON object.
And we’re going to have a name and a description, and the name is going to be a string and the description’s going to be a string.
So the last thing we have to do is let’s use that utility function that we used for the summary,
...and we’re just going to make one minor change.
So let’s change this from generate_text_response to generate_json_response.
And instead of using the StrOutputParser, we’re going to use the PydanticOutputParser,
...and we’re going to force it to use pydantic_object equals the JSONResponseTemplate.
And from there we’re going to grab – and I’ll show you exactly in the documentation where that is.
So if we look at exactly the Pydantic Parser documentation from LangChain,
...we’re going to grab something called the format_instructions.
And the way we get that is that when we define the PydanticOutputParser and we give the pydantic_object the class we’re using,
...and this time it was a JSON object,
...we can just grab the get_format_instructions from that parser, and that’s what we’re going to use.
So let’s call it format_instructions equals parser.get_format_instructions.
Perfect.
And this is going to be a string.
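Conceptually, get_format_instructions renders the Pydantic class’s JSON schema into a prompt snippet. A rough stdlib illustration of the idea (this is not LangChain’s actual wording, and the schema here is written out by hand):

```python
import json

def format_instructions_for(schema: dict) -> str:
    # Roughly what a Pydantic output parser does: turn a JSON schema
    # into prose the LLM can follow.
    return (
        "Your response should be a JSON object matching this schema:\n"
        + json.dumps(schema)
    )

pet_name_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "description": {"type": "string"},
    },
    "required": ["name", "description"],
}
instructions = format_instructions_for(pet_name_schema)
```

The resulting string is then interpolated into the prompt template, just like the examples and the data.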
So the next thing we have to do is now we just have to add the examples, the model, and the prompt template.
So now that we’ve used the PydanticOutputParser,
...to try to convince or coerce the LLM to return back a JSON object to us based on that class we just created,
...we have to update the way that we’re making prompt_template and the way we’re using the examples.
And of course, we have to grab the model parameters from the prompt we’ve worked on in order to get the correct responses.
So let’s go back to the watsonx.ai Prompt Lab.
And this is the prompt that I’m going to transpose from the Prompt Lab into our code base.
So you could see that we have the instructions, we have the few-shot examples,
...we have the parameters that we have set, we’re using sampling,
...and we also have this Curl, which is where we’re going to take all that information from.
Let’s go to our code base.
We’re going to create a new model file, so let’s name it pet_namer.json; we’re going to name a new examples file,
...pet_namer.txt; and we’re going to make a new prompt_template, you guessed it, pet_namer.txt.
So in the examples file, we’re going to grab the few-shot examples we have from here,
...and we’re going to take the model information from this Curl –
...so meaning like the parameters, what model we’re using, etcetera – and we’re going to create a new model file.
And if you look at the way – we’re just going to copy and paste from the generate_summary model parameters,
...and just create a new one and call it pet_namer.json.
But instead of using the IBM Granite model,
...and instead of using these parameters, we’re going to grab exactly what we’re using from the watsonx.ai platform.
So we’re using mixtral this time.
And that’s something that’s really very helpful about the watsonx.ai platform.
You could just use whatever model suits your best need for whatever you’re doing.
I find that very helpful in engagements.
And we’re going to replace all of these parameters with the ones that we’ve been using for that prompt that worked really well.
We are going to make one change from the GenerateSummaryPromptTemplate.
And this time what we’re going to do is we’re going to utilize that PydanticOutputParser.
And what that really is, the format instructions, if you look at it,
...it’s just a really well-crafted prompt to coerce the LLM to return back JSON.
And I find that very, very, very helpful.
So if you look at the way that we’ve structured this prompt as opposed to the generate summary prompt,
...we’re telling it now, your response should follow this format.
And this format is the format instructions we have extracted from the PydanticOutputParser,
...and so we’re going to add that to the prompt template,
...so when it’s grabbed, it’s going to grab the examples that we use, it’s going to grab the data that we’re sending it.
But before all of that, it’s going to grab the format instructions from the PydanticOutputParser, which is super, super great.
Makes it a lot easier.
And to be totally honest, when I was building out this application,
...I was trying to coerce it myself with my own prompts and a coworker of mine, Drew,
...showed me exactly how to use the PydanticOutputParser, and totally changed –
...you know what? Honestly, it changed my life, if I’m being totally honest.
Now we’re going to add another field, and we’re going to call it format_instructions.
And we’re going to use the format_instructions that I have here. It’s just going to be a string.
So now when we return this, you can see that we’re adding to our kwargs.
We have the examples, which is going to be within the prompt template.
The examples are going to be interpolated into that prompt,
...same with the format_instructions, and the data is what we’re going to be sending via the API call.
So let’s just make sure everything looks good.
We have the examples, we have the new formatting structures from the PydanticOutputParser,
...we have the chain, and we have the generated text.
The only thing that we're going to do differently is we're going to return back a dictionary –
...we're going to create a dictionary with a field called generated_text, and we're going to pass in generated_text.dict().
So now this is going to be a dictionary.
And let’s just ensure that what we’re returning back from the generated prompt – okay, perfect.
When we look at the generate_pet_name, we’re returning back this generated_pet_response.
Remember, if we look at our Swagger documentation, we’re expecting it to come back as generated text with that dictionary.
So if all goes well, we’ll be able to test this out.
Let’s restart our Swagger docs. Let’s try it out.
And let’s provide it that data.
And what we’re expecting is something like this example.
So now we’re going to test out the endpoint that we just created, the generate_pet_name,
...and it’s going to expect a data field with some description of an animal.
And we’re going to expect a response that is wrapped in a JSON with a name and a description.
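Those two shapes – the input with a data field, and the output wrapped in JSON with a name and a description – can be sketched as TypeScript types; the interface names here are my own, and only the field names come from the walkthrough.

```typescript
// Request body the generate_pet_name endpoint expects.
interface GeneratePetNameRequest {
  data: string; // e.g. "a male dog who is goofy and sweet"
}

// Shape of the JSON the LLM is coerced to return.
interface PetName {
  name: string;
  description: string;
}

// Response body wrapping that object under generated_text.
interface GeneratePetNameResponse {
  generated_text: PetName;
}

// A sample response shaped like what the endpoint returns.
const sample: GeneratePetNameResponse = {
  generated_text: {
    name: "Captain Sparklesbeak",
    description: "A bold name for a flashy bird.",
  },
};
```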
So let's see if it works.
Great. We have Captain Sparklesbeak and it gives us an explanation.
So now we have a working FastAPI endpoint.
We saw how we can coerce the LLM to return as JSON.
We figured out how to create a route in the FastAPI.
And the next thing we’re going to do is integrate it into our React UI and our Express TypeScript backend.
So now that we’ve finished our FastAPI endpoint, we have a generate_pet_name endpoint,
...that we could send the description of our potential pet, let’s integrate it into our frontend.
And so we’re going to be integrating it first into our Express Server,
...that’s going to direct any calls from the React UI to our FastAPI and then return the data from it.
So let’s get right into it.
So now that we have checked out into Step 02-express-server,
...what we want to do is we want to go into the UI directory and we’re going to install the route dependencies.
Let’s run the setup command.
So this setup command is going to do a couple of things.
It’s going to create envs from examples that we have in both the server and the client.
And these envs are going to be – when you’re working locally they are just going to be what they are in the examples.
But what these envs are going to do, it’s going to say, okay,
...the React UI is going to be able to talk to the Express Server, and we’re giving it that endpoint.
And for the Express Server, we have an env that’s going to say, okay, this is the endpoint for the FastAPI.
Because if you think about the flow of information that you saw on the application,
...we’re going to have the React UI, we’re going to fill out a form, and we’re going to hit a submit.
And that submit is going to send the data from the React UI to our Express Server;
...the Express Server is going to send that to our FastAPI; and the FastAPI is going to communicate with the watsonx.ai LLM;
...return back the response to the Express Server, which is going to return it back to the React UI.
That’s the flow of information.
So in order for that to work, we need these envs to tell the UI and the server and the FastAPI where to look and who to talk to.
So now that we have the dependencies installed for the React UI and the Express Server,
...we’re going to start running them in dev-mode.
So we’re going to go into the UI directory and we’re going to run this command,
...npm run dev server, and that’ll start up the Express Server.
And we’re going to run npm run dev client, and that’s going to start up our React UI.
And if you look at what the React UI was,
...what we’re going to be starting off here is going to be pretty much totally blank, we’re going to build it up, and show you how to do it.
But first, let’s get that Express Server working.
So let’s go into the server directory, open it up and let’s take a look at what we actually have in the server.
We have a boilerplate code in the index, which is just a basic Express Server.
We also added socket.io.
So having web sockets between the client and the Express Server is really very helpful and useful.
We could watch databases, do anything like that. It’s nice to have so we’ve added it.
And then we have this middleware that’s going to be using our routes.
So right now we only have a config route and a DB route.
Neither of them are going to be in use, but they’re there if you need to use it.
And we see the endpoints that we're able to hit, at /api and /api/db.
So now we can see that the UI is blank.
We’re going to build this all out.
First thing we’re going to do is create the new pet namer route.
So in your routes directory, let’s create a new route.
We’re going to name it petNamerRoutes.ts.
And let’s grab the configure route, which is perfectly fine as a point to start.
We’re going to import Axios because we’re going to be making a call to the FastAPI and I personally like the Axios package.
We're also going to import dotenv because we're going to be using that env to communicate with the FastAPI.
And we're just going to run dotenv.config().
And if you look at your env in your server, you’re going to have an API_URL,
...which is the exact endpoint that we use to see our Swagger docs and the FastAPI,
...so we just have to grab that and bring it into our petNamerRoute.
So let’s just call it API_URL.
We're going to grab it from process.env.API_URL, or it's going to be an empty string.
That'll just make sure that it's a string.
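That line can be sketched on its own; this assumes the standard Node `process.env` lookup, with the empty-string fallback so the value is always typed as a string and never undefined.

```typescript
// Read the FastAPI endpoint from the environment; fall back to an empty
// string so API_URL is always a string (never undefined).
const API_URL: string = process.env.API_URL ?? "";
```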
So the next thing we’re going to do is let’s create a POST.
We could copy this config.
We’ll get rid of this one.
We’re going to turn this into a POST.
We’re going to say generate_pet_name.
And because it's going to hit that FastAPI, we want it to be async.
We’re going to call – we’re going to have – within that function, we’re going to have the request object and the response object.
And what we're going to do is we're going to add a try/catch; just add some boilerplate here for now.
We’ll log the error.
And if we hit the error, we'll send back an internal server error, so that's status code 500, and send back the error.
One of the really useful things about setting those Pydantic classes inside the FastAPI,
...which really articulate exactly what kind of parameters it's expecting and what it's going to be returning, is that,
...let's say we have two people working on the same project.
We have an AI engineer working on the FastAPI and we have someone like me working on the UI.
I can just go directly to the FastAPI, I see exactly what we’re expecting,
...and exactly what we’re expecting to get back and write the route to fit that.
So we know that we’re going to receive something like this, so let’s just copy it and bring it here for reference.
We also know that this is what it’s going to expect.
So this is going to be the output, and this is going to be the input.
So if we’re going to be sending this back, let’s just copy it and bring it directly into our route.
We’ll call it body.
The data is obviously going to be dynamic, so we’ll just add a data field.
But before we can even get there, we have to –
...we know that we’re going to be sending the data from the UI to the Express Server so let’s just get that data first,
...and we’ll be getting it from the request body.
So we’re just – we got data, the body, and this is the body that we’re going to be sending back.
And so now let’s make the call to the FastAPI.
So we’re going to be making a POST, and it’s expecting this body, and we’re going to be hitting this endpoint.
So in order to hit that, first, we're going to use the API_URL that we have in our env, the localhost 8000,
...and then we’re going to make the endpoint the generate_pet_name.
And the backticks just mean – it's basically like an f-string in Python.
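To illustrate that backtick point, here is the interpolation on its own; `apiUrl` is a stand-in local value, not the real env setting.

```typescript
// A template literal interpolates values the way a Python f-string would.
const apiUrl = "http://localhost:8000";
const endpoint = `${apiUrl}/generate_pet_name`;
```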
We’re going use the body as the request body.
And then we’re going to – let’s see what we get back.
Well, we know it’s going to look like that.
We’re going to have generated text. We’re going to have a name and a description.
So let’s say res.status. So this is a successful call.
We're going to send back generated_text, and we're going to say that's result.data.generated_text, because we could see right here.
We’re getting this object back.
It’s going to be generated text and that object’s going to have name and description, which is what we’re going to use in the UI.
Now if we go and look at what the end result’s going to be, we have this part called data sent to the API,
...which is really useful just to see exactly what we’re sending back.
And if you look at the way this looks, this is going to be pretty much the same thing we’re sending here.
So we can just grab the body that we’re sending, and we could call it request or data_sent_to_API and we’re going to call that body.
Let’s just wrap these in parens. Perfect.
So what this means is, okay, we’ll make an async call to the FastAPI; we’re going to send it;
...we’re going to hit the generate_pet_name endpoint;
...we’re going to send this body with the template model pet namer, prompt template name pet namer.
In the kwargs, you’re going to have the data being the data we’re getting back from the UI.
So in the try/catch, we had this async call.
We say, okay, we’re using Axios package to make the POST request.
We’re sending the body.
And if it is successful, we’re going to send back a res.status 200,
...and we’re going to send back an object with two fields; the generated text and the data sent to API.
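The body the route forwards and the success payload it returns can be sketched together. The exact field names here (template_model, prompt_template_name, kwargs, data_sent_to_api) are assumptions based on what's said in the walkthrough, not copied from the repository.

```typescript
// What the Express route sends to FastAPI (assumed field names).
const petDescription = "a male dog who is goofy and sweet";

const body = {
  template_model: "pet_namer",
  prompt_template_name: "pet_namer",
  kwargs: { data: petDescription },
};

// On success, the route sends back both the generated text and the body
// it forwarded, so the UI can show exactly what went to the LLM.
const payload = {
  generated_text: { name: "Baron Snorbs", description: "A drooly, snoring dog." },
  data_sent_to_api: body,
};
```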
Now a really useful tool, when we’re working with APIs like this, is something called Postman.
First, let’s just make sure we have the health route up.
Just to show you how Postman works, we have this health, and we could just say, okay, Pet Route Up.
And before we do that, we should import it and bring it into our routes.
We’ll name this petNamer.
We have to actually add it to the route.
So now let’s see if we could actually hit it from our Postman.
We got it up; Pet Route Up.
So the next thing we’re going to do is just recreate what hitting the endpoint will look like from the UI.
So we know that the data field in the – when we send it from the React UI,
...it’s always going to be something like a male dog who is goofy and sweet and cowardly or whatever,
...and we want to make sure that we can send that data to the Express Server and for it to hit the FastAPI.
So let’s make sure that works.
So we have a postman route.
Say a male dog who’s clumsy, drooly, and snores loudly.
Let’s just make sure we could hit it.
I’m going to make sure we’re hitting it here.
Yep. You can see everything’s – we’re hitting the FastAPI and we get a Baron Snorbs.
I don’t know why it came up with that. I like the name. I would name my dog Baron Snorbs any day.
But now we know for a fact that we’re able to hit the Express Server from the UI.
We’re using the Postman as a tool to test that.
The next thing we’re going to do is now build out the UI to look like what we had in the beginning.
We have the Express Server hitting the FastAPI. We’re able to test it with Postman.
Now let’s integrate it with our React UI.
So if we open up our localhost 3000, we have everything running.
We have our FastAPI running in dev, we have our Express Server running in dev, and we also have our client running in dev.
If we look at what it looks like now, it’s blank, but the end state we want to get to is something like this.
So if a designer handed me this image and said, hey, can you please build this for us using React.
We need all this functionality and we need it to look like this.
The most daunting part for me would be, oh God, I have to work with SCSS.
I have to figure out the placement. I have to figure out how to create inputs that look like this or dropdowns that do this.
And for me that’s difficult because I’m not really a great frontend developer.
But what’s really helpful is using something like Carbon, for me at least.
So we know from the image we’re looking at, we have something like a heading,
...and we have something like a combo box, something that has a dropdown and that you could type in it and it filters it.
We have the checkbox form. We have inputs. We have tags.
The way I would do this, I would go to the Carbon React page and I’ll just look up stuff.
So I know the first thing we’re going to use is a heading.
So when we open up a heading, very simple.
The code is super-duper simple. You just have heading.
Let’s add it to our React UI.
So now from your directory, go to UI, and then open up, and then CD into client.
Now in this client directory we have a source directory, we have components, and we have a pet form.
I have left in all the imports and a lot of the actual functionality for like state management and stuff like that,
...because what I’m trying to show you is how I utilize the Carbon design system to build out something like this.
And then the functionality, I implore you to look at, it’s documented and you could figure out exactly what I’m doing.
But let’s start by looking at what we have.
We have two columns, and this is all from Carbon.
We have two columns; one for Pet Form and one for Results.
And what to do. We want to get to this end state.
So let’s start with the heading.
So we already have it imported. We know it’s going to be here.
So we could just add heading, and we could say – what did we name it – describe your future pet.
Boom. Headings are simple.
Like this stuff is okay, we could use an H1 or H2 or whatever, but having something that’s –
...using a design system that already looks good and you don’t have to worry about it, you don’t have to worry about the font,
...you don’t have to worry about the sizing, you can just use it is so, so helpful and it just expedites all of the work on the frontend.
The next thing we have to bring in is this combo box.
So let's look at the combo box that's available in Carbon.
And we can see it. They have it already here.
Example of what it looks like. You have all the documentation.
So let’s look at this.
So we have one that filters.
We know that’s what we’re going to look for.
So we could look at this exact code and we could just grab something like that and bring it into our application.
So for this one, we’ll just bring this.
Now we don’t have items yet, so let’s just see what it would look like.
We’ll have an ID with a one and a text with first item.
And then for the next one, we will have an ID with two and a text with the second item.
And so really it’s already built in.
You have this items prop, and you just fill it, and it’s looking for an array.
And if you look at, it actually has an explanation of what it’s doing.
It says, they’re trying to stay as generic as possible and we could have total control over it.
But for us, we want something pretty simple. We want an ID and we want a text.
So if we look at our application, once we save, we already have it.
It’s already built in. We have the first item and the second item.
It’s already filterable.
Like it’s just from the get go, you could select it.
We’re actually going to add the filtering in a second.
And that’s how easy it is to add like complex components that look good into a UI using Carbon.
I’m a big, big fan.
So now let’s just fill in all the functionality that we’re expecting.
And we have an onChange, and it's pretty simple.
It’s just looking at the selected item and it’s setting it to our state.
We have the ID.
The items that we're using usually – like the way I would use this in an engagement is that we would
...often have an API call to a database and store it as the state for the items.
So you could have like – especially if you have a ton of rows from a database and you want to quickly filter it down, you could do that.
What we did in this case is we just have a giant list of different animals that you could potentially have as a pet, like an alpaca.
So now that we have all of this, you could see what it’s going to be.
We have all the different animals that we have available to be a pet, like a tarantula.
And we can move on to the next item that we need, which is a form group with a couple of checkboxes.
So similarly, if we go to Carbon, we could look up checkbox.
We could show – we have skeleton, we have just regular checkbox, the way it looks.
So let’s grab the checkbox. We’ll put it in a form group.
And the form group is also – this is something that Carbon provides for us.
Something interesting about a form group, I think it needs a legend text so let’s add some legend text, and we’ll say pet gender.
And if we look at our application, we have our two checkboxes in the pet gender.
And if we look at what it’s supposed to look like, obviously we have to make it male and female,
...so let me add the functionality and readjust the label text.
So now that we’ve added all the functionality that’s included in the component, I’ll just go over like simply what it’s doing.
So the checked prop, the way we're setting it up, is just accepting a Boolean.
We're seeing if the gender-of-animal state equals male; if that's true, make it checked.
And the onChange similarly is just saying, okay, if it is already male, set it to nothing.
But if it isn't, set it to male; a really, really simple state change.
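That state change can be sketched as a pure function, stripped of the React state hook; the name `toggleGender` is mine.

```typescript
// Clicking the already-selected checkbox clears the gender;
// clicking the other one selects it.
function toggleGender(current: string, clicked: string): string {
  return current === clicked ? "" : clicked;
}
```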
We have it disabled while loading.
So this is important because we don’t want –
...because let’s say like an API call to the FastAPI takes like six seconds, which is pretty long for an API call.
You don’t want them making edits to the form while it’s happening.
So we disable all functionality within the form,
...and you’ll see this over and over again as we get through – as we keep on adding functionality to this component.
So now we have our pet gender.
We could choose the type of animal.
The next thing. Now this is an interesting component.
If you remember from the beginning of the video, how this works is this is an input field,
...where when you start typing the button on the right is activated.
When you hit enter or you hit the button, it adds it to a tile at the bottom, and it’s adding each individual descriptor in a little tag.
Now all of this is pretty custom, but we’re using the styling and the tags and the functionality is coming from Carbon.
So we’re going to need an input and we’re going to need a button.
So now I’m just going to paste in the functionality for this and explain exactly what we’re doing.
So if you look at our application, we have the descriptor; add cute, sweet, whatever your pet descriptor will be.
And you could see as we’re typing the button goes from disabled to not-disabled.
When you hit it, it clears the input field.
Now what you’re not seeing, obviously, is that we are adding all of this to a descriptors list,
...which we’re going to use to formulate that API call to the FastAPI.
And so just for clarity, I could go and I will add a use effect just to show you what is happening as we hit enter.
And we’re going to go look at the descriptor state.
And if you look at the descriptors, it’s just an array of strings.
If there are no descriptors, or descriptors.length is zero, return.
But if not, console.log it.
So we could watch what’s happening in here, in the console, as we’re adding it.
So let’s say we add cute, hit enter.
We could see that we have an array with one item, cute and sweet and nice.
Now we could imagine how we’re going to utilize this when we send that API call.
So let’s continue with the functionality.
So in the text input we added a functionality, onKeyDown.
And this is just like naturally what you do. And this is something I found.
Like whenever I’m in an input field, I expect something to happen when I hit enter.
And so we just added the on key down, really simple functionality, looking for a keyboard event.
If the key that is hit is enter, you run the function handle, add descriptor, which adds the input text into that descriptor array.
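The Enter-key behavior can be sketched as a pure function so it's shown without React state; the function name and return shape here are my own.

```typescript
// On Enter with non-empty input: append the text to the descriptor
// array and clear the input. Any other key leaves both unchanged.
function addDescriptorOnEnter(
  key: string,
  input: string,
  descriptors: string[]
): { descriptors: string[]; input: string } {
  if (key !== "Enter" || input.length === 0) {
    return { descriptors, input };
  }
  return { descriptors: [...descriptors, input], input: "" };
}
```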
And then obviously disabled while loading. Same with the button.
The disabled part is actually interesting.
It's just saying, okay, if the input value in state is nothing, like the input length is zero,
...just keep it disabled or disable while loading.
So that’s the way we’re able to dynamically disable and enable a button here.
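That dynamic disable rule is just a small Boolean expression; the function name is mine.

```typescript
// The add button is disabled when the input is empty
// or while an API call is in flight.
function isAddDisabled(inputValue: string, loading: boolean): boolean {
  return inputValue.length === 0 || loading;
}
```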
So we could say cute. The second the input length is no longer zero, we have a little button, click it.
Back to zero because it clears the input and adds it to the descriptor array.
So what’s the next functionality we need to add?
So now let's map over that descriptor array and place it in this tile.
I like the tile. It’s something that I got from Carbon.
It looks like a div with like just extra capabilities.
I just happen to like to have the ability to have more capabilities,
...such as like you could drop it down, you could add functionality to it, you can make it selectable.
So in our case, what we’re going to do is we’re going to add a tile.
And in that tile we’re going to say, okay, look at descriptors, map over them.
Look at what it is, descriptor index and return back something called a tag, which I really like.
Again, it looks good, it has functionality. We can add an icon to it, which is what I want it to do.
If you look at the final result, we have that little tag with the descriptor and a little x.
Now that filled-in x obviously gives you the impression that you could click on it and delete it.
And that’s what we want it to do because if you add something that you don’t mean to, you want to be able to delete it.
It’s just a nice functionality to have.
So if we look at how to build that out: inside the tile, you just have a tag with a class name and the content inside of it.
Basically all it’s doing, it takes a string, it looks at the descriptors, and we’re going to update whatever that list is,
as long as it doesn’t equal the one that you just clicked.
Really kind of simple. You’re just filtering it down.
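The filtering it describes, as a pure function (the name is mine):

```typescript
// Clicking a tag's x removes exactly that descriptor from the list.
function removeDescriptor(descriptors: string[], clicked: string): string[] {
  return descriptors.filter((d) => d !== clicked);
}
```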
So now that we’ve added that, let’s look at how our application handles the additional descriptors.
So cute, fast, and sweet.
And on click is going to – oh, I added box shadow.
Now I like box shadow.
I don’t know why the modern UI design does not like box shadow, but I happen to like box shadow.
I don’t know why. I think it makes it look 3D. I think it’s very cool.
So we could click on the cute, click on fast, click on sweet, and you could delete it.
And that’s the functionality we’re looking for.
The last thing we’re looking to do is have a button set.
So let’s add a button set to our component.
So in the stack, we could just add button set, and we could have button clear, and we could have button submit.
All of this is coming from Carbon,
...so it’s already automatically going to be sized correctly and the button set is going to have a little gap.
We have two different kinds of buttons, but if you look at the image, we have two different colored buttons.
And within the Carbon React documentation, just go to button and you could see something called kind.
So you could choose what kind of button you'd want.
So we want the submit to be primary and we want the clear to be secondary.
So let’s just update this; kind secondary and kind primary.
And there we go. We have the two buttons.
The functionality for the submit will be done relatively quickly.
So now that we have all the functionality there, we have the two functions – so the handle clear function’s pretty simple.
We’re looking at all the state that we have in the component. We’re just setting it to its original empty form.
The clear button is disabled when nothing is filled out and while loading;
...we have some functionality in useEffects that's just looking at whether any of the state is filled – if nothing is filled,
...you shouldn't have the ability to clear, and you shouldn't have the ability to clear while you're making the API call.
Similarly with submit.
Instead of disable when nothing is filled out, we have to disable the button when not everything is filled out.
So you can’t make a request if you don’t have an animal, you don’t have a descriptor, you don’t have a gender.
We need all of those in order to make that submit.
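The submit rule described above – everything must be filled in and no call in flight – can be sketched as one predicate; the function name is mine.

```typescript
// Submit is allowed only when an animal, a gender, and at least one
// descriptor are present, and no API call is currently loading.
function canSubmit(
  animal: string,
  gender: string,
  descriptors: string[],
  loading: boolean
): boolean {
  return (
    !loading && animal.length > 0 && gender.length > 0 && descriptors.length > 0
  );
}
```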
So now that we have everything in the pet form, at least visually completed, let’s get everything in the result form also completed.
We’ll handle the functionality in just a second.
Obviously we need to add a heading for the result, which we’ll just say result.
What else do we need? And then we need these accordions, which I like.
I like the functionality of accordion.
They’re dynamic. I think they look good.
You could fill it in. You have the title, and then you have the content within it.
So it’s something that I thought would look kind of nice when we’re displaying our results.
Also they have this skeleton state.
And I also like the way that looks so I just put it in, in that way.
It isn't really necessary – you could render the response any way that looks good to you,
...but in my opinion, I liked the accordion.
So now that I’ve just added all the functionality, we’re looking at a loading state.
Obviously when you’re making that call, you’re making that API call you would set the loading state to true.
And what that does is it alerts the application and the component to switch between loading state and not loading state.
And our loading state happens to be just these accordion skeletons that I just showed you here.
So let’s complete that.
And now we have whatever we’re looking for.
We just have to wait to add the rendered results.
And that’s going to be the response from the API.
And in order to get that functionality working, let’s go to our handle submit, and we could see exactly what we need to do.
So the first thing we’re going to do is we’re going to look at that list of descriptors.
And so we’re just going to grab the strings inside that list and just create a comma-separated string by it.
Pretty simple.
We’re going to say descriptor list, and we’re going to say descriptors.join, comma separated.
And now we're going to have just a string that has all the descriptors as a comma-separated string.
So really what you’re looking for, if we look at the FastAPI example, is just this input.
We’re looking to recreate that in our API call.
So let’s use this as reference.
So now we have to interpolate all the data we collected from the form into a single string.
So that should be pretty easy, right?
So const stringToSendToAPI equals – we'll use backticks.
We have the gendered animal state, so that would be either male or female.
And then we’ll say the type of animal.text.
So that could be a male dog who is, and then we just have the descriptor list. Great.
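The steps above – joining the descriptors and interpolating the form state – look like this in isolation; the sentence shape ("a \<gender\> \<animal\> who is \<descriptors\>") follows the examples in the video, and the sample values are mine.

```typescript
// Sample form state standing in for the React hooks.
const gender = "female";
const animal = "rabbit";
const descriptors = ["cute", "sweet", "fast"];

// Join the descriptors into one comma-separated string,
// then interpolate everything into the string sent to the API.
const descriptorList = descriptors.join(", ");
const stringToSendToAPI = `a ${gender} ${animal} who is ${descriptorList}`;
```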
Now all we have to do from here is make that API call.
So let’s set the loading to true.
And all these state variables – the state has all been prepopulated in the application, so all you're doing is running the hooks.
So let’s set loading to true and let’s make the API call.
Similar to the way we did it in the Express API,
...but we know we’re going to, if we look here, we’re going to API/pet namer/generate pet name.
And we are going to send data string to send to API, making it a POST.
So now that we have crafted the API call that we’re making to our Express Server,
...we now have to, on success, extract the relevant data from the response.
So let’s just go look at what we’re expecting as a response on success.
We're going to get a generated_text and a request_sent_to_llm.
If you remember from what we’re showing, what we want the end state to be,
...the request sent to LLM is going to be this API call, like this object.
And then we’re also expecting a name and description in that generated text field.
So let’s look at our code in the React UI and anticipate what we’re going to get.
Something I really like about TypeScript is that it knows that the next result from Axios is going to be .data.
I just find it very, very helpful.
So it infers that this is an Axios response and it gives us at least the first property on the data response that we could possibly use.
The next one, though, and it might make sense just to copy this so I could just reference it.
Let’s copy it.
So we know that we're expecting a result.data.generated_text.
So let’s call that object.
And then we have the data sent to API, which equals result.data.request_sent_to_llm.
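Pulling those two pieces out of the response can be sketched with a mock object standing in for the real Axios result; the field names generated_text and request_sent_to_llm follow what's described here, but the exact spellings in the repository may differ.

```typescript
// Mock of the Express response body (assumed field names).
const result = {
  data: {
    generated_text: { name: "Luna Bun-Bun", description: "A sweet rabbit." },
    request_sent_to_llm: { kwargs: { data: "a female rabbit who is cute" } },
  },
};

// What gets rendered as the result, and what gets rendered as the request.
const object = result.data.generated_text;
const dataSentToAPI = result.data.request_sent_to_llm;
```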
If there’s anything I could do better, it’s variable naming. I’m really terrible at it.
So now we have the object that we want to render for the name and the description.
And the next thing we have is that data sent to API,
...which is we want to render just as information of how of what was sent to the FastAPI.
So we have both of these.
And now we want to set the response name and description to state.
So if we look at the accordions that we have placed in our response, let’s see what we’re expecting.
So rendered results.name for the name,
...rendered results.description for the description, and then we’re going to stringify the rendered request.
So all of these are state – all these have been included in state.
So let’s start with object.
We’ll set rendered result to object.
And then for the rendered request, we’ll set rendered request to data sent to API.
And then we’ll set the accordion to open.
Perfect. We’ll set that to true.
And then we’ll set loading to false finally.
So once it tries to do all this, it sets the loading to true.
It makes the API call through the Express Server to the FastAPI.
It returns back the generated text from that FastAPI to the Express and to the React handle submit function.
We set the rendered result to the object that we’ve received.
And then we also set like the body that we sent to FastAPI, just so we could see what it looks like.
We set accordion open so it’s going to open up that accordion automatically.
We’re going to log an error and we’re going to set loading to false.
So let’s actually try the application now.
We’re going to choose a rabbit; female; cute; sweet; fast and cowardly; and hides under my bed; and eats my books.
Let’s submit it and see what kind of response we get back.
Luna Bun-Bun.
And it gives you a description and it shows us exactly what we sent.
We sent this body to the endpoint.
We’re sending the data; a female rabbit who is cute, sweet, cowardly, and hides under my bed.
But what happens if we have an error?
Now I’ve discussed that when we were making the FastAPI that when you’re trying to coerce –
...like we’re coercing the LLM to send back a JSON format and we’re doing that through that PydanticOutputParser.
And it’s just like a really well-crafted prompt,
...but occasionally the LLM is just not going to send you back something that is actually JSON and you’ll get an error.
So we want to kind of handle that, and there’s a really simple way to do it. It’s not particularly elegant, but it is simple.
So let’s create a new state and we’ll just call it error.
Set it to false to start with.
Let’s go to handle submit.
So we have this catch for this error, so we could console log it, but we could also set error to true.
And then finally, set loading to false.
The way I’m thinking this working is that when we retry it,
...we’ll set the error to false so we don’t continue to render what we’re going to render.
So let’s set error to false.
So let’s go to that button set and let’s add a conditional rendering, error.
And let’s add a new button.
Instead of it being – it’ll be the exact same thing as submit, but we’ll change the kind.
We’ll make it danger and we’ll change this to retry.
Let’s just test this functionality.
Instead of sending back a good response, let's send back a return res.status(500),
...and let’s attempt it and just make sure that it functions the way we expect it to function because occasionally it will fail.
Alpaca.
So hopefully we get back a failure and it populates a – perfect.
And what we want to do is when we click it again,
...it’s going to take it off – it’s going to stop rendering it and it’s going to do another submit.
That’s perfect.
So let’s fix our Express Server.
Make sure everything is running.
Let’s try to rerun it.
And now this time with the Express Server actually returning back an accurate response.
Now Snuggles the Gentle.
The name embodies the alpaca’s undeniable cuteness and its sweet nature.
That is a very nice name for an alpaca.
So that was it.
Now that you’ve seen how to build a full stack Generative AI application, why don’t you take what we did and make it better?
Use what we’ve shown you today to create a new route, use new prompts and new examples, different model parameters,
...and come up with something cool, something with like interesting functionality.
Integrate it with a new UI to make it look awesome and tell us what you’re building in the comments.
Honestly, thank you for watching.
And if you like this video, be sure to like and subscribe.