
Building Agents with LangChain
In previous posts, we have explored various ways to interact with LLMs. Today, I want to walk through creating agent_demo.py, a small but capable multi-agent application.
Why is this important?
Most people stop at simple prompts. But real power comes when we chain these interactions together to solve complex workflows. We aren’t just asking a question; we are building a system that can go out, do work, and report back.
In this post, we will build a system that:
- Downloads website content (Agent 1)
- Summarizes the content to Markdown (Agent 2)
- Orchestrates the workflow and writes a file (Agent 3)
This demo includes simple CRM helpers to show agent interaction patterns. We’ll use Python, LangChain, and Ollama with the llama3.1:8b model.
Step 1: Configure Ollama
First, we need to get our environment set up. We are using Ollama here, which allows us to run these models locally.
Here is the configuration setup:
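Since the exact setup isn't shown here, this is a minimal sketch of what the configuration might look like, assuming the langchain-ollama integration package and a local Ollama server:

```python
# Minimal Ollama configuration sketch -- assumes the langchain-ollama
# package (pip install langchain-ollama) and Ollama running locally.
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3.1:8b",                # the local model used throughout this post
    base_url="http://localhost:11434",  # Ollama's default endpoint
    temperature=0,                      # keep summaries deterministic
)
```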
Step 2: Agent 1 (The Website Downloader)
The first agent has a specific job: getting the raw data.
This agent extracts readable text from the page’s main content tags. If it can’t find those, it falls back to the full page. It also cleans and truncates the text so we don’t blow up our context window.
We optionally expose this as a full LangChain agent using a Tool so other agents can call it.
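As a rough sketch of the cleanup step (the helper name and character limit here are assumptions, not the exact code in agent_demo.py):

```python
import re

# Assumed cap; keeps the extracted text well inside the model's context window.
MAX_CHARS = 8000

def clean_and_truncate(text: str, limit: int = MAX_CHARS) -> str:
    """Collapse runs of whitespace and cut the text off at a safe length."""
    cleaned = re.sub(r"\s+", " ", text).strip()
    return cleaned[:limit]
```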
Step 3: Agent 2 (The Summarizer)
The second agent’s sole responsibility is to take the raw text provided by Agent 1 and summarize it into clean Markdown.
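A sketch of the prompt Agent 2 might use (the function name and wording are assumptions); the resulting string is what gets sent to the llama3.1:8b model:

```python
def build_summary_prompt(title: str, content: str) -> str:
    """Ask the model for clean Markdown output, with no surrounding code fences."""
    return (
        f"You are a summarizer agent. Summarize the page titled '{title}' "
        "as concise Markdown with a short heading and bullet points. "
        "Do not wrap the output in code fences.\n\n"
        f"{content}"
    )
```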
Step 4: Agent 3 (The Orchestrator)
This is where the magic happens.
The Orchestrator coordinates Agent 1 and Agent 2. It uses Agent 1 to get the HTML and Agent 2 to summarize the text.
Since this is an agent, we don’t write procedural code; we give it a goal, and it goes and solves the problem.
We define the persona and goals clearly:
- “You are an orchestrator agent.”
- “Goal: Download a website, summarize it, and write a Markdown file.”
- “Use download_website to get {url,title,content}.”
- “Summarize the content concisely in Markdown (no code fences).”
- “Call write_markdown with title, url, summary, and optional outfile.”
- “Return the final markdown path.”
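The bullet points above translate directly into a system prompt, roughly like this (a sketch; agent_demo.py may word this differently):

```python
# The orchestrator's persona, assembled from the goals listed above.
ORCHESTRATOR_RULES = [
    "You are an orchestrator agent.",
    "Goal: Download a website, summarize it, and write a Markdown file.",
    "Use download_website to get {url,title,content}.",
    "Summarize the content concisely in Markdown (no code fences).",
    "Call write_markdown with title, url, summary, and optional outfile.",
    "Return the final markdown path.",
]

SYSTEM_PROMPT = "\n".join(ORCHESTRATOR_RULES)
```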
Step 5: The CLI Runner
Finally, we need a way to trigger this. We set up a CLI runner where you pass a URL (and an optional output filename). Without arguments, the demo uses a default URL.
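That runner might look something like this (the argument names and default URL are assumptions for illustration):

```python
import argparse

DEFAULT_URL = "https://dontpaniclabs.com"  # assumed default for the demo

def parse_args(argv=None):
    """Parse a URL and an optional output filename from the command line."""
    parser = argparse.ArgumentParser(
        description="Download a page, summarize it, and write a Markdown file."
    )
    parser.add_argument("url", nargs="?", default=DEFAULT_URL,
                        help="page to summarize (defaults to the demo URL)")
    parser.add_argument("outfile", nargs="?", default=None,
                        help="optional output .md path")
    return parser.parse_args(argv)
```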
Does It Actually Work?
If you are wondering if this works, yes it does.
I ran this against our own website, https://dontpaniclabs.com. The agent successfully navigated the site, extracted the core value propositions, and generated a clean markdown summary file.
You can find the full code here:
Building agents like this forces us to think differently about how we construct software. We aren’t just writing functions; we are defining roles and goals.
Give this a shot with your own URLs. If you have questions about setting up your own agent swarms, reach out to me on X (@chadmichel).


