
Agent Chat using LangChain, Part 1 – Tools and Memory
There’s something really satisfying about building an AI agent that can actually take action in your application. In this post, I’ll walk you through how I integrated LangChain tools into my NestJS backend to create an agent capable of searching contacts, creating events, sending messages, and handling a bunch of other tasks.
The Tech Stack
I used LangChain for defining structured tools with Zod schemas, Anthropic’s Claude as the LLM for the agent’s reasoning, NestJS as the backend framework (love the dependency injection), and PostgreSQL for persisting conversations with JSONB message storage.
The Challenge
My goal was to let users interact with the Member Management system using natural language. Rather than forcing them to click through forms and menus, they could just say something like “Create a contact for John Smith with email john@example.com” and have the system handle it. The breakthrough for me was realizing that LangChain’s DynamicStructuredTool gives you exactly what you need: strongly-typed tool definitions that the LLM can reliably understand and call.

Building the Tools
Everything revolves around a createAgentTools function. Each tool wraps an existing service method and attaches a schema that describes the expected parameters. Here’s an example of how I set up the contact search tool.
To let the agent query the database, we define a DynamicStructuredTool that wraps the existing contact service. A Zod schema enforces type safety on the searchTerm and limit parameters. Crucially, the function returns a simplified JSON string (ID, name, and email) rather than the raw database record, which keeps the context window clean and reduces token usage.
The Zod schema serves two purposes: it validates the inputs at runtime and gets converted into JSON Schema so Claude knows exactly which parameters it can pass.
In the end, I built about 32 tools that cover contacts, events, event instances, messages, users, and reporting—pretty much everything users can do through the regular UI.
Conversation Memory
For memory, I went with a straightforward approach that has worked really well: store the entire conversation history as a JSONB array in PostgreSQL. Each conversation is its own entity with the messages stored right there.
To maintain context across multiple turns, we define an AgentConversation entity using TypeORM, storing the messages array in a PostgreSQL jsonb column. This keeps the complex, nested conversation history (including tool inputs and outputs) in a single database row, so rebuilding the context window for the LLM is one cheap query.
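A sketch of what that entity might look like; the column names (userId, timestamps) and the StoredMessage shape are assumptions for illustration, not the exact schema from the app.

```typescript
import {
  Entity,
  PrimaryGeneratedColumn,
  Column,
  CreateDateColumn,
  UpdateDateColumn,
} from "typeorm";

// Assumed shape of a stored message; tool inputs/outputs nest in `content`.
export interface StoredMessage {
  role: "user" | "assistant" | "tool";
  content: unknown;
}

@Entity()
export class AgentConversation {
  @PrimaryGeneratedColumn("uuid")
  id!: string;

  @Column()
  userId!: string;

  // The entire conversation history lives in one jsonb column,
  // loaded and saved as a unit.
  @Column({ type: "jsonb", default: () => "'[]'" })
  messages!: StoredMessage[];

  @CreateDateColumn()
  createdAt!: Date;

  @UpdateDateColumn()
  updatedAt!: Date;
}
```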
When a user sends a new message, I load the full history, append the new message, send the whole list to Claude, and then save the agent’s response. This way, the agent always has complete context. For very long conversations, I could add summarization later, but in practice, most chats are short enough that keeping the full history is fine.
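The per-message flow can be sketched as a small function; `repo` and `runAgent` here are illustrative stand-ins for the TypeORM repository and the agent call, not real APIs.

```typescript
// Hedged sketch of the load → append → call → save cycle.
async function handleUserMessage(
  repo: {
    load(id: string): Promise<{ role: string; content: string }[]>;
    save(id: string, msgs: { role: string; content: string }[]): Promise<void>;
  },
  runAgent: (msgs: { role: string; content: string }[]) => Promise<string>,
  conversationId: string,
  text: string,
): Promise<string> {
  const history = await repo.load(conversationId); // full history
  history.push({ role: "user", content: text }); // new user message
  const reply = await runAgent(history); // complete context goes to the LLM
  history.push({ role: "assistant", content: reply });
  await repo.save(conversationId, history); // persist both turns
  return reply;
}
```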
The Agentic Loop
The interesting part is the agent loop itself. Claude might need to call several tools to complete a request—for example, first searching for a contact and then pulling up their message history. I keep looping, sending the tool results back to Claude as new messages, until it finally returns a plain-text response with no further tool requests.
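The loop itself can be sketched in a few lines. Everything here is a simplified stand-in: `callModel` abstracts the Anthropic API call and `runTool` abstracts tool dispatch, so the structure, not the exact types, is the point.

```typescript
type ToolCall = { name: string; input: unknown };
type ModelReply =
  | { type: "text"; text: string }
  | { type: "tool_calls"; calls: ToolCall[] };

// Keep calling the model, feeding tool results back as new messages,
// until it answers in plain text with no further tool requests.
async function runAgentLoop(
  messages: { role: string; content: string }[],
  callModel: (msgs: { role: string; content: string }[]) => Promise<ModelReply>,
  runTool: (call: ToolCall) => Promise<string>,
  maxIterations = 10, // safety valve against infinite tool loops
): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callModel(messages);
    if (reply.type === "text") return reply.text; // done: no more tools
    for (const call of reply.calls) {
      const result = await runTool(call);
      messages.push({ role: "tool", content: `${call.name}: ${result}` });
    }
  }
  throw new Error("Agent exceeded max iterations");
}
```

The iteration cap is worth having: a confused model can otherwise bounce between tools indefinitely.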
How I Integrated with Angular
On the frontend, the Angular UI is a pretty standard chat layout: messages are stacked vertically, user messages aligned to the right, and agent messages aligned to the left. I used PrimeNG components to maintain consistent styling with the rest of the app.
The layout breaks down into three main parts: a toolbar at the top showing connection status, a scrollable area for the messages, and a fixed input box at the bottom.
We implement the UI as an Angular component that serves as the visual layer for the agent’s interaction. The template leverages PrimeNG for structure and uses *ngFor to render the message history, dynamically applying classes to distinguish between the user and the agent. Critically, we bind the message content to [innerHTML], which allows our formatMessage function to render the agent’s Markdown output (like lists or code blocks) as rich HTML directly in the browser.
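A trimmed sketch of that template; the class names and component fields (messages, msg.role, draft, onKeydown) are illustrative assumptions, not the app’s actual code.

```html
<!-- Connection status toolbar -->
<p-toolbar>
  <span>{{ connected ? 'Connected' : 'Disconnected' }}</span>
</p-toolbar>

<!-- Scrollable message list; Angular sanitizes [innerHTML] bindings by default -->
<div class="messages" #scrollContainer>
  <div *ngFor="let msg of messages"
       [class.message-user]="msg.role === 'user'"
       [class.message-agent]="msg.role !== 'user'"
       [innerHTML]="formatMessage(msg.content)">
  </div>
</div>

<!-- Fixed input box at the bottom -->
<textarea pInputTextarea rows="2"
          [(ngModel)]="draft"
          (keydown)="onKeydown($event)"></textarea>
```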
When the agent calls tools behind the scenes, I show small tags under the message listing which tools were used. It helps users understand what’s happening—”Ah, it searched contacts and then checked the event history.”
AI agents can often feel like “black boxes” to end-users. To solve this, we explicitly render the specific tools the agent utilized during its reasoning chain. This snippet uses an Angular *ngIf guard to check for the presence of a toolsUsed array and then iterates through them to display PrimeNG tags (with ‘info’ severity). This adds a layer of explainability, allowing the user to verify that the agent actually performed the requested action (e.g., “search_contacts”) rather than just hallucinating a response.
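The snippet in question might look like this; `msg.toolsUsed` is assumed to be a string array attached to agent messages.

```html
<!-- Explainability: show which tools the agent actually called -->
<div *ngIf="msg.toolsUsed?.length" class="tools-used">
  <p-tag *ngFor="let tool of msg.toolsUsed"
         severity="info"
         [value]="tool"></p-tag>
</div>
```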
The input supports the usual behavior: Enter sends the message, Shift+Enter adds a new line. I also make sure the view auto-scrolls to the latest message, which needed a small setTimeout trick to give Angular time to render everything first.
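Both behaviors reduce to small functions. This is a sketch: the event parameter mirrors the relevant KeyboardEvent fields, and the function names are my own, not the app’s.

```typescript
// Enter sends; Shift+Enter falls through so the textarea inserts a newline.
function shouldSendMessage(event: { key: string; shiftKey: boolean }): boolean {
  return event.key === "Enter" && !event.shiftKey;
}

// Deferred scroll: setTimeout(…, 0) lets Angular finish rendering the
// new message before we jump to the bottom of the container.
function scrollToBottom(el: { scrollTop: number; scrollHeight: number }): void {
  setTimeout(() => {
    el.scrollTop = el.scrollHeight;
  }, 0);
}
```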
For agent responses, I added a lightweight formatMessage method that handles basic markdown-like formatting—code blocks, bold text, line breaks—so things look clean when the agent returns lists or snippets.
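A minimal sketch of such a formatter, assuming only the three features mentioned; the regexes are illustrative, and real agent output would likely deserve a proper Markdown parser.

```typescript
function formatMessage(text: string): string {
  // Escape HTML first so model output can't inject markup.
  let html = text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  // Fenced code blocks -> <pre><code>. Note: in this naive version,
  // newlines inside a code block still get turned into <br> below.
  html = html.replace(
    /```([\s\S]*?)```/g,
    (_m, code) => `<pre><code>${code.trim()}</code></pre>`,
  );
  // Inline code
  html = html.replace(/`([^`]+)`/g, "<code>$1</code>");
  // Bold
  html = html.replace(/\*\*([^*]+)\*\*/g, "<strong>$1</strong>");
  // Line breaks
  html = html.replace(/\n/g, "<br>");
  return html;
}
```

Escaping before any replacement matters: it keeps the [innerHTML] binding from rendering arbitrary HTML the model might produce.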
Summary
A few lessons I learned along the way:
- Good tool descriptions are everything. Claude leans heavily on the description text when deciding which tool to call, so I spent real time making them clear and precise.
- Always return structured JSON from tools. It makes it much easier for Claude to parse results and keep reasoning.
- Include navigation links when possible. When a tool creates or finds a resource, adding a link in the response lets the frontend offer “View Contact” or “Open Event” buttons, which nicely connect the conversational flow back to the regular UI.
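As an illustration of that last point, a tool result can carry the route alongside the data; the `link` field name here is an assumption.

```typescript
// Illustrative: a tool result the frontend can turn into a
// "View Contact" button via the embedded link.
function contactToolResult(contact: { id: string; name: string }): string {
  return JSON.stringify({
    ...contact,
    link: `/contacts/${contact.id}`,
  });
}
```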
In Part 2, I’ll talk about adding real-time token streaming over WebSockets to make the experience feel a lot snappier.


