The LLM Is the New Runtime

Last week, I was playing with Pencil, a design tool that lives inside your IDE. You describe what you want, and it generates UI designs on a canvas that compile into clean code. It’s a solid tool. But the thing that caught my attention had nothing to do with design.

Pencil doesn’t ship its own AI. It doesn’t have an LLM subscription you need to buy. It doesn’t bundle a model or run inference on its own servers. Instead, when you install Pencil, it connects to your existing Claude Code CLI. All the AI generation happens through your Anthropic account, on your existing subscription. Pencil provides the canvas. You provide the intelligence.
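To make the delegation concrete, here is a minimal sketch of what "bring your own Claude Code" can look like from a tool's side. This is an illustration, not Pencil's actual integration (which isn't public): the function names are hypothetical, and it assumes the Claude Code CLI's non-interactive print mode (`claude -p "<prompt>"`), which prints a single response and exits.

```python
import shutil
import subprocess

def build_claude_command(prompt: str) -> list[str]:
    # -p runs Claude Code in non-interactive "print" mode:
    # one prompt in, one response out on stdout.
    return ["claude", "-p", prompt]

def generate(prompt: str) -> str:
    """Delegate generation to the user's existing Claude Code install."""
    if shutil.which("claude") is None:
        # The tool ships no model of its own; the user brings the intelligence.
        raise RuntimeError("Claude Code CLI not found on PATH")
    result = subprocess.run(
        build_claude_command(prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The key property is what's absent: no API key management, no billing logic, no model selection. All of that lives in the user's existing subscription.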

Screenshot of a design editor showing a humorous landing page titled “Jack Really Smells,” with sections for evidence, testimonials, a stink scale chart, and call-to-action buttons.

In the screenshot above, you can see what this looks like in practice. On the left, Claude Opus 4.6 is doing the work of generating and reviewing sections of a page design. On the right, Pencil renders the result on its visual canvas. Pencil owns the experience. Claude owns the reasoning. Neither one is trying to do the other’s job.

This is a pattern worth paying attention to.

We’ve Seen This Before

If you’ve been building software long enough, this pattern feels familiar. Fifteen years ago, it was common for applications to bundle and manage their own databases. Today, almost no one does that. Your application connects to a database that the customer (or an infrastructure team) already operates. The database became assumed infrastructure.

The same thing happened with authentication. We went from every app rolling its own user/password system to delegating authentication to identity providers like AWS Cognito or Auth0. The identity provider became assumed infrastructure.

LLMs are following the same trajectory. A year ago, every AI-powered tool either bundled its own model or required you to sign up for yet another API key. Now, a new class of tools is emerging that assumes you already have an LLM subscription and builds on top of it. The LLM is becoming infrastructure, not a feature.

MCP Makes This Practical

The reason this pattern is accelerating right now is the Model Context Protocol (MCP). MCP is an open standard that gives AI tools a consistent way to connect to external systems. Think of it as the protocol layer that makes composition possible.

Before MCP, if you wanted your design tool to talk to an LLM, you had to build custom integrations for each provider. You had to manage API keys, handle authentication, deal with model-specific quirks. The friction was high enough that most tool builders just bundled their own AI layer.

MCP changes the equation. A tool like Pencil can expose its design canvas as an MCP server with structured interfaces: “read the canvas,” “get the selected frame,” “apply a style guide.” Any MCP-compatible AI assistant can call those interfaces. The tool doesn’t need to know which model you’re using or how you authenticated. It just needs to speak the protocol.
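The interfaces above can be sketched schematically. This is not the real MCP SDK (the official SDKs handle JSON-RPC framing, transports, and capability negotiation); it's a toy dispatcher showing the shape of the idea: the tool publishes named operations with input schemas, and any compliant caller can invoke them without knowing which model sits on the other end. The tool names come from the article's examples; the canvas structure is invented for illustration.

```python
# Hypothetical canvas state; a real tool would back this with its document model.
CANVAS = {"frames": [{"id": "hero", "selected": True, "style": "default"}]}

# MCP-style tool catalog: each entry pairs a description with a JSON Schema
# for its inputs, so the caller can discover what it may invoke.
TOOLS = {
    "read_canvas": {
        "description": "Return the full canvas document",
        "inputSchema": {"type": "object", "properties": {}},
    },
    "get_selected_frame": {
        "description": "Return the currently selected frame",
        "inputSchema": {"type": "object", "properties": {}},
    },
    "apply_style_guide": {
        "description": "Apply a named style guide to a frame",
        "inputSchema": {
            "type": "object",
            "properties": {"frame_id": {"type": "string"},
                           "guide": {"type": "string"}},
            "required": ["frame_id", "guide"],
        },
    },
}

def call_tool(name: str, arguments: dict):
    """Dispatch one tool call, mirroring MCP's tools/call request shape."""
    if name == "read_canvas":
        return CANVAS
    if name == "get_selected_frame":
        return next(f for f in CANVAS["frames"] if f["selected"])
    if name == "apply_style_guide":
        frame = next(f for f in CANVAS["frames"] if f["id"] == arguments["frame_id"])
        frame["style"] = arguments["guide"]
        return frame
    raise ValueError(f"unknown tool: {name}")
```

Notice that nothing in the catalog or the dispatcher mentions a model, a provider, or an API key. The tool only defines what can be done to its canvas; who does it is someone else's concern.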

This is the same separation that made REST APIs so effective for web services. You define the interface. You let the caller bring their own capabilities.

What the Tool Actually Needs to Be Good At

This pattern forces an interesting question: if your tool doesn’t own the intelligence, what does it own?

The answer is the workflow, the interface, and the domain translation.

Pencil is good at being a visual design canvas. It knows how to render components, manage layers, handle design tokens, and export to code. It translates between the visual world (what things look like on a canvas) and the code world (React components, CSS variables, Tailwind classes). That translation layer is where its value lives.
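A tiny, self-contained example of what a translation layer can look like, assuming a nested design-token dictionary as input (the token names and structure here are invented for illustration, not Pencil's format):

```python
def tokens_to_css_variables(tokens: dict, prefix: str = "--") -> str:
    """Flatten a nested design-token dict into CSS custom properties."""
    lines = []

    def walk(node: dict, path: list[str]) -> None:
        for key, value in node.items():
            if isinstance(value, dict):
                walk(value, path + [key])  # recurse into token groups
            else:
                lines.append(f"  {prefix}{'-'.join(path + [key])}: {value};")

    walk(tokens, [])
    return ":root {\n" + "\n".join(lines) + "\n}"

tokens = {"color": {"primary": "#1a73e8", "surface": "#ffffff"},
          "radius": {"md": "8px"}}
print(tokens_to_css_variables(tokens))
# :root {
#   --color-primary: #1a73e8;
#   --color-surface: #ffffff;
#   --radius-md: 8px;
# }
```

This is trivial on purpose. The point is that this kind of domain translation, from a designer-facing vocabulary to a code-facing one, is the tool's job, and no LLM is needed to do it well.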

The LLM provides general-purpose reasoning. Pencil provides domain-specific experience. When Pencil asks Claude to generate a section of a page, Claude does the thinking and Pencil does the rendering. Each component focuses on what it does best.

This is separation of concerns applied to the tool ecosystem. And it mirrors a principle we’ve been teaching in software architecture for decades: components should have a single responsibility.

What This Means for Tool Builders

If you’re building developer tools today, this pattern has a few implications worth considering.

Your pricing model gets simpler. When you’re not reselling API calls, you can charge for the value your tool provides without trying to predict and mark up token usage. Pencil is currently free while in early access, but the economics of its approach are clear: the tool’s cost is independent of how much AI the user consumes.

Your users stay in control. Developers are increasingly consolidating their AI usage under a single provider. They pick their model, they manage their spend, they understand the privacy implications. When your tool connects to their existing subscription, you’re working with that preference instead of against it.

What This Means for Teams Evaluating Tools

When you evaluate a new developer tool that includes AI features, it’s worth asking where the intelligence comes from.

If the tool bundles its own model, you’re adopting a dependency on that vendor’s AI capabilities. When the underlying model improves or degrades, you’re along for the ride. You’re also paying for AI access twice if you already have a subscription to a provider like Anthropic or OpenAI.

If the tool connects to your existing LLM subscription, you keep control of the intelligence layer. You can upgrade models on your schedule. Your spend is consolidated and visible. And if you decide the tool’s workflow isn’t for you, switching costs are lower because the intelligence was never locked inside it.

Neither approach is universally better. But the trend is moving toward tools that compose with your existing AI infrastructure rather than replacing it.

This Is Just the Beginning

We are still early in this shift. MCP was released as an open standard in late 2024, and adoption is growing fast. The ecosystem of tools that assume you already have an LLM is expanding every month.

I think we’ll look back on this period the way we look back on the early days of SaaS infrastructure. There was a time when every application bundled its own email server, its own authentication system, its own database. Then those capabilities got pulled out into shared services, and applications got simpler, more focused, and better at the things that made them unique.

The same thing is happening with AI. The LLM is becoming the runtime. The interesting question is no longer “does your tool have AI?” It’s “what does your tool do with the AI I already have?”

Chad Michel, Chief Technology Officer
Chad is a lifelong Nebraskan. He grew up in rural Nebraska and now lives in Lincoln. Chad and his wife have a son and daughter.
