LangChain + Next.js Starter

Description

Starter template and example use-cases for LangChain projects in Next.js, including chat, agents, and retrieval.

npx boilerapp langchain-starter

Documentation

🦜️🔗 LangChain + Next.js Starter Template


This template scaffolds a LangChain.js + Next.js starter app. It showcases how to use and combine LangChain modules for several use cases: simple chat, returning structured output from a model, answering complex questions with agents, and retrieval over documents (with and without an agent).

Most of them use Vercel's AI SDK to stream tokens to the client and display the incoming messages.

The agents use LangGraph.js, LangChain's framework for building agentic workflows. They use preconfigured helper functions to minimize boilerplate, but you can replace them with custom graphs as desired.

Demo video: https://github.com/user-attachments/assets/e389e4e4-4fb9-4223-a4c2-dc002c8f20d3

It's free-tier friendly too! Check out the bundle size stats below.

You can check out a hosted version of this repo here: https://langchain-nextjs-template.vercel.app/

🚀 Getting Started

First, clone this repo and download it locally.

Next, you'll need to set up environment variables in your repo's .env.local file. Copy the .env.example file to .env.local. To start with the basic examples, you'll just need to add your OpenAI API key.

Because this app is made to run in serverless Edge functions, if you are using LangSmith tracing, make sure the LANGCHAIN_CALLBACKS_BACKGROUND environment variable is set to false so that tracing finishes before the function returns.
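For reference, a minimal .env.local for the basic examples might look like the following; the LangSmith-related line only matters if tracing is enabled, and you should check .env.example for the full list of variables:

```bash
# Minimal .env.local for the basic examples (see .env.example for the full list).
OPENAI_API_KEY="sk-..."

# Only needed with LangSmith tracing on Edge functions, so traces
# finish flushing before the function returns.
LANGCHAIN_CALLBACKS_BACKGROUND=false
```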

Next, install the required packages using your preferred package manager (e.g. yarn).
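For example:

```bash
yarn
# or: npm install / pnpm install
```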

Now you're ready to run the development server:

```bash
yarn dev
```

Open http://localhost:3000 with your browser to see the result! Ask the bot something and you'll see a streamed response:

A streaming conversation between the user and the AI

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.

Backend logic lives in app/api/chat/route.ts. From here, you can change the prompt and model, or add other modules and logic.
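For orientation, here is a rough sketch of what a minimal streaming chat route along these lines can look like; the prompt text and parameters are illustrative, not the template's exact code:

```typescript
// app/api/chat/route.ts: a minimal sketch, not the template's exact code.
// Assumes @langchain/openai and langchain are installed and OPENAI_API_KEY is set.
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const input = messages[messages.length - 1].content;

  // Compose prompt -> model -> byte-stream parser with LCEL's .pipe().
  const prompt = PromptTemplate.fromTemplate(
    "You are a helpful assistant.\n\nUser: {input}\nAI:",
  );
  const model = new ChatOpenAI({ temperature: 0.8 });
  const chain = prompt.pipe(model).pipe(new HttpResponseOutputParser());

  // HttpResponseOutputParser emits encoded bytes, so the stream can be
  // returned directly as the HTTP response body.
  const stream = await chain.stream({ input });
  return new Response(stream);
}
```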

🧱 Structured Output

The second example shows how to have a model return output according to a specific schema using OpenAI Functions. Click the Structured Output link in the navbar to try it out:

A streaming conversation between the user and an AI agent

The chain in this example uses a popular library called Zod to construct a schema, then formats it in the way OpenAI expects. It then passes that schema as a function into OpenAI and passes a function_call parameter to force OpenAI to return arguments in the specified format.
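A minimal sketch of that flow, with an illustrative schema and function name rather than the template's exact code:

```typescript
// A sketch of the structured output approach described above.
// Assumes zod and zod-to-json-schema are installed.
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";

// 1. Describe the desired output shape with Zod.
const schema = z.object({
  tone: z.enum(["positive", "negative", "neutral"]).describe("The tone of the input"),
  entity: z.string().describe("The main entity mentioned in the input"),
});

// 2. Convert it to JSON Schema and bind it as a function; forcing the call
//    via function_call guarantees arguments that match the schema.
const model = new ChatOpenAI({ temperature: 0 }).bind({
  functions: [
    {
      name: "output_formatter",
      description: "Format the output to match the schema",
      parameters: zodToJsonSchema(schema),
    },
  ],
  function_call: { name: "output_formatter" },
});

// 3. Parse the function-call arguments back into a plain object.
const chain = model.pipe(new JsonOutputFunctionsParser());
const result = await chain.invoke("LangChain is awesome!");
```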

For more details, check out this documentation page.

🦜 Agents

To try out the agent example, you'll need to give the agent access to the internet by populating SERPAPI_API_KEY in .env.local. Head over to the SerpAPI website and get an API key if you don't already have one.

You can then click the Agent example and try asking it more complex questions:

A streaming conversation between the user and an AI agent

This example uses a prebuilt LangGraph agent, but you can customize your own as well.
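For reference, a minimal sketch of a prebuilt LangGraph ReAct agent wired to SerpAPI; the question and model settings are illustrative, not the template's exact code:

```typescript
// A sketch of a prebuilt LangGraph ReAct agent with SerpAPI search.
// Assumes @langchain/langgraph and @langchain/community are installed.
import { ChatOpenAI } from "@langchain/openai";
import { SerpAPI } from "@langchain/community/tools/serpapi";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { HumanMessage } from "@langchain/core/messages";

// SerpAPI reads SERPAPI_API_KEY from the environment by default.
const agent = createReactAgent({
  llm: new ChatOpenAI({ temperature: 0 }),
  tools: [new SerpAPI()],
});

// The graph loops between the model and its tools until a final answer.
const result = await agent.invoke({
  messages: [new HumanMessage("What is the current population of Paris?")],
});
console.log(result.messages[result.messages.length - 1].content);
```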

🐶 Retrieval

The retrieval examples both use Supabase as a vector store. However, you can swap in another supported vector store if you prefer by changing the code under app/api/retrieval/ingest/route.ts, app/api/chat/retrieval/route.ts, and app/api/chat/retrieval_agents/route.ts.

For Supabase, follow these instructions to set up your database, then get your database URL and private key and paste them into .env.local.

You can then switch to the Retrieval and Retrieval Agent examples. The default document text is pulled from the LangChain.js retrieval use case docs, but you can change it to whatever text you'd like.

For a given text, you'll only need to press Upload once. Pressing it again will re-ingest the docs, resulting in duplicates. You can clear your Supabase vector store by navigating to the console and running DELETE FROM documents;.

After splitting, embedding, and uploading some text, you're ready to ask questions!
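A minimal sketch of that ingestion flow, assuming the documents table and match_documents query from the Supabase setup instructions; the env variable names are assumptions, so check .env.example:

```typescript
// A sketch of splitting, embedding, and uploading text to Supabase;
// not the template's exact code. Env variable names are assumptions.
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!,
);

// Split the raw text into overlapping chunks before embedding.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 256,
  chunkOverlap: 20,
});
const docs = await splitter.createDocuments(["...your document text..."]);

// Embed each chunk and insert it into the `documents` table.
const vectorStore = await SupabaseVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  { client, tableName: "documents", queryName: "match_documents" },
);
```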

For more info on retrieval chains, see this page. The specific variant of the conversational retrieval chain used here is composed using LangChain Expression Language, which you can read more about here. This chain example will also return cited sources via a header in addition to the streaming response.
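To illustrate the LCEL composition, here is a simplified sketch that reuses the vectorStore from the ingestion sketch above; it omits chat history and the cited-sources header, so it is not the template's exact chain:

```typescript
// A minimal LCEL retrieval chain; illustrative only.
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const retriever = vectorStore.asRetriever();

const prompt = PromptTemplate.fromTemplate(
  "Answer using only this context:\n{context}\n\nQuestion: {question}",
);

// Fetch context for the question, pass the question through unchanged,
// then prompt the model and parse its reply to a string.
const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: (input: string) => input,
  },
  prompt,
  new ChatOpenAI({ temperature: 0 }),
  new StringOutputParser(),
]);

const answer = await chain.invoke("What is a document loader?");
```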

For more info on retrieval agents, see this page.

📦 Bundle size

The bundle size for LangChain itself is quite small. After compression and chunk splitting, the RAG use case uses 37.32 KB of code space (as of @langchain/core 0.1.15), which is less than 4% of the total Vercel free tier edge function allotment of 1 MB.

This package has @next/bundle-analyzer set up by default - you can explore the bundle size interactively by running:

```bash
ANALYZE=true yarn build
```

📚 Learn More

The example chains in the app/api/chat/route.ts and app/api/chat/retrieval/route.ts files use LangChain Expression Language to compose different LangChain.js modules together. You can integrate other retrievers, agents, preconfigured chains, and more too, though keep in mind HttpResponseOutputParser is meant to be used directly with model output.

To learn more about what you can do with LangChain.js, check out the documentation.

▲ Deploy on Vercel

When ready, you can deploy your app on the Vercel Platform.

Check out the Next.js deployment documentation for more details.

Thank You!

Thanks for reading! If you have any questions or comments, reach out to us on Twitter @LangChainAI, or click here to join our Discord server.

Price

Free
