Building Agents with LangChain

In previous posts, we have explored various ways to interact with LLMs. Today, I want to walk through creating agent_demo.py, a small but capable multi-agent application.

Why is this important?

Most people stop at simple prompts. But real power comes when we chain these interactions together to solve complex workflows. We aren’t just asking a question; we are building a system that can go out, do work, and report back.

In this post, we will build a system that:

  • Downloads website content (Agent 1)
  • Summarizes the content to Markdown (Agent 2)
  • Orchestrates the workflow and writes a file (Agent 3)

This demo includes simple CRM helpers to show agent interaction patterns. We’ll use Python, LangChain, and Ollama with the llama3.1:8b model.

Step 1: Configure llama

First, we need to get our environment set up. We are using Ollama here, which allows us to run these models locally.

Here is the configuration setup:

Download the raw source code
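As a rough sketch of what that setup looks like (assuming the `langchain-ollama` package; the variable names here are illustrative, not the demo's exact code):

```python
# Assumes Ollama is running locally (`ollama serve`) and the model is pulled:
#   ollama pull llama3.1:8b
from langchain_ollama import ChatOllama

# A single local model handle shared by all three agents.
llm = ChatOllama(
    model="llama3.1:8b",
    temperature=0,  # keep summaries stable and repeatable
)
```

Temperature 0 is a deliberate choice for this kind of pipeline: we want the same URL to produce roughly the same summary on every run.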

Step 2: Agent 1 (The Website Downloader)

The first agent has a specific job: getting the raw data.

This agent extracts readable text from the page's main content tags (such as `<main>` or `<article>`). If it can't find those, it falls back to the full page. It also cleans and truncates the text to ensure we don't blow up our context window.

Download the raw source code

We can optionally expose this function as a LangChain Tool so other agents can call it.

Download the raw source code

Step 3: Agent 2 (The Summarizer)

The second agent has a single goal: take the raw text provided by Agent 1 and summarize it into clean Markdown.

Step 4: Agent 3 (The Orchestrator)

This is where the magic happens.

The Orchestrator coordinates Agent 1 and Agent 2. It uses Agent 1 to get the HTML and Agent 2 to summarize the text.

Since this is an agent, we don’t write procedural code; we give it a goal, and it goes and solves the problem.

We define the persona and goals clearly:

  1. “You are an orchestrator agent.”
  2. “Goal: Download a website, summarize it, and write a Markdown file.”
  3. “Use download_website to get {url,title,content}.”
  4. “Summarize the content concisely in Markdown (no code fences).”
  5. “Call write_markdown with title, url, summary, and optional outfile.”
  6. “Return the final markdown path.”

Download the raw source code

Step 5: The CLI Runner

Finally, we need a way to trigger this. We set up a CLI runner where you pass a URL (and an optional output filename). Without arguments, the demo uses a default URL.

Download the raw source code
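The runner itself can be a few lines of argparse. A sketch, assuming a `run_orchestrator` entry point and a placeholder default URL (both are my naming, not the demo's):

```python
import argparse

DEFAULT_URL = "https://dontpaniclabs.com"  # assumed default; the demo's may differ

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Download a page, summarize it, and save it as Markdown."
    )
    parser.add_argument("url", nargs="?", default=DEFAULT_URL,
                        help="Page to summarize (defaults to the demo URL)")
    parser.add_argument("outfile", nargs="?", default=None,
                        help="Optional output .md path")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    # run_orchestrator(args.url, args.outfile)  # hypothetical Agent 3 entry point
```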

Does It Actually Work?

If you are wondering if this works, yes it does.

I ran this against our own website, https://dontpaniclabs.com. The agent successfully navigated the site, extracted the core value propositions, and generated a clean markdown summary file.

Download the raw source code

You can find the full code here:

Building agents like this forces us to think differently about how we construct software. We aren’t just writing functions; we are defining roles and goals.

Give this a shot with your own URLs. If you have questions about setting up your own agent swarms, reach out to me on X (@chadmichel).

Chad Michel, Chief Technology Officer
Chad is a lifelong Nebraskan. He grew up in rural Nebraska and now lives in Lincoln. Chad and his wife have a son and daughter.
