How to stream agent data to the client
This guide will show how to stream agent data to the client using LangChain.js and the AI SDK.
This guide assumes familiarity with the following concepts:

- LangChain Expression Language (LCEL)
- Chat models
- Tool calling
This doc breaks each component down into separate sections. For the complete code, see the server file and the client file in the demo application.
Setup
First, install the necessary LangChain & AI SDK packages:
```bash
# npm
npm i @langchain/openai @langchain/core ai zod zod-to-json-schema

# yarn
yarn add @langchain/openai @langchain/core ai zod zod-to-json-schema

# pnpm
pnpm add @langchain/openai @langchain/core ai zod zod-to-json-schema
```
Next, we'll create our server file. This will contain all the logic for making tool calls and sending the data back to the client.
Start by adding the necessary imports and the `"use server"` directive:
"use server";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStreamableValue } from "ai/rsc";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";
After that, we'll define our tool schema. For this example we'll use a simple demo weather schema:
```typescript
const Weather = z
  .object({
    city: z.string().describe("City to search for weather"),
    state: z.string().describe("State abbreviation to search for weather"),
  })
  .describe("Weather search parameters");
```
Once our schema is defined, we can implement our `executeTool` function. This function takes a single `string` input and contains all the logic for our tool and for streaming data back to the client:
```typescript
export async function executeTool(input: string) {
  "use server";

  const stream = createStreamableValue();
```
The `createStreamableValue` function is important: it's what we'll use to actually stream data back to the client.
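As a rough sketch of the streamable value's lifecycle (the method names come from `ai/rsc`; the values here are illustrative and not part of this guide's handler):

```typescript
// Illustrative only, not part of executeTool:
const demo = createStreamableValue();
demo.update({ status: "searching" }); // push a value to the client
demo.update({ status: "done" }); // each update replaces the previous value
demo.done(); // close the stream so the client stops waiting for more values
// `demo.value` is the serializable handle you hand back to the client.
```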
We'll wrap the main logic in an immediately-invoked async function, so that `executeTool` can return the streamable value right away while the chain continues running in the background. Start by defining our prompt and chat model:
```typescript
  (async () => {
    const prompt = ChatPromptTemplate.fromMessages([
      [
        "system",
        `You are a helpful assistant. Use the tools provided to best assist the user.`,
      ],
      ["human", "{input}"],
    ]);

    const llm = new ChatOpenAI({
      model: "gpt-4o-2024-05-13",
      temperature: 0,
    });
```
After defining our chat model, we'll construct our runnable chain using LCEL. We start by binding the `Weather` tool we defined earlier to the model:
```typescript
    const modelWithTools = llm.bind({
      tools: [
        {
          type: "function" as const,
          function: {
            name: "get_weather",
            description: Weather.description,
            parameters: zodToJsonSchema(Weather),
          },
        },
      ],
    });
```
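As an aside, you can sanity-check the binding by invoking the bound model directly. This one-off call is purely illustrative and not part of `executeTool`; on recent `@langchain/core` versions, the returned `AIMessage` exposes parsed tool calls on its `tool_calls` field:

```typescript
// Illustrative one-off check, not part of the final handler:
const testMessage = await modelWithTools.invoke(
  "What's the weather in San Francisco, CA?"
);
// Roughly:
// testMessage.tool_calls ≈ [
//   { name: "get_weather", args: { city: "San Francisco", state: "CA" } },
// ]
```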
Next, we'll use LCEL to pipe each component together, starting with the prompt, then the model with tools, and finally the output parser:
```typescript
    const chain = prompt.pipe(modelWithTools).pipe(
      new JsonOutputKeyToolsParser<z.infer<typeof Weather>>({
        keyName: "get_weather",
        zodSchema: Weather,
      })
    );
```
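Because `JsonOutputKeyToolsParser` parses the tool-call arguments as they stream, each chunk it emits is a progressively more complete version of the parsed object. Exact chunk boundaries depend on the model's token stream, but the sequence looks roughly like this:

```typescript
// Illustrative parser output over the course of one stream:
// {}
// { city: "San" }
// { city: "San Francisco" }
// { city: "San Francisco", state: "CA" }
```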
Finally, we'll call `.stream` on our chain and, as in the streaming agent example, iterate over the stream, round-tripping each chunk through `JSON.stringify` and `JSON.parse` so that a plain, serializable object is what updates the stream value:
```typescript
    const streamResult = await chain.stream({
      input,
    });

    for await (const item of streamResult) {
      // Round-trip through JSON to send a plain, serializable object.
      stream.update(JSON.parse(JSON.stringify(item, null, 2)));
    }

    stream.done();
  })();

  return { streamData: stream.value };
}
```
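On the client, the AI SDK's `readStreamableValue` helper turns the returned `streamData` handle into an async iterable. Below is a minimal sketch of a consuming client component; it assumes a Next.js App Router setup, and the import path and component name are illustrative rather than taken from the demo:

```tsx
"use client";

import { useState } from "react";
import { readStreamableValue } from "ai/rsc";
// Hypothetical import path; point this at the server file defined above.
import { executeTool } from "./action";

export function WeatherClient() {
  const [data, setData] = useState<{ city?: string; state?: string }>({});

  async function run() {
    const { streamData } = await executeTool(
      "What's the weather in San Francisco, CA?"
    );
    for await (const partial of readStreamableValue(streamData)) {
      // Each value is the latest object passed to stream.update on the server.
      if (partial) setData(partial as { city?: string; state?: string });
    }
  }

  return (
    <div>
      <button onClick={run}>Get weather</button>
      <pre>{JSON.stringify(data, null, 2)}</pre>
    </div>
  );
}
```

Each iteration of the `for await` loop receives the latest value passed to `stream.update` on the server, so the UI re-renders as the tool-call arguments fill in.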