How the marriage of protocol and code execution is reshaping what agencies — and their clients — can actually do.
There's a moment in every technology cycle where the discourse gets ahead of the reality. Right now, that moment is happening with the Model Context Protocol.
Six months ago, MCP was the hottest acronym in AI. Everyone was shipping MCP servers, wrapping APIs in tool definitions, and watching agents pull data from external sources. Then the backlash arrived: MCP is dead. Just call the API directly. Use a CLI. Write code. The protocol is bloated, the context windows fill up, the tool calls are slow.
The debate rests on two arguments: CLIs and direct API calls save tokens, and MCP is supposedly bloated and complex. Some vendors, including Perplexity, have moved away from the protocol entirely.
But here's what the "MCP is dead" crowd is missing: they're arguing against a version of MCP that's already evolving past them.

From Reading to Doing
When we first started building with MCP at Kworq, we thought of it the way most people did — as an aggregation layer. Connect your CMS. Connect your calendar. Let the model read from multiple sources and synthesize. It was useful. It was tidy. It was also just the beginning.
The real shift happened when we stopped thinking of MCP servers as data pipes and started thinking of them as action surfaces. An MCP server isn't just a way to tell a model what exists — it's a way to let a model do things. Create a page. Update a component. Publish a story. Manage a workflow.
When you pair that capability with a harness like Claude that can reason about multi-step tasks, the whole dynamic changes. The agent isn't just informed — it's empowered.
That's the leap we made building StoryPress, our CMS platform. Our MCP server doesn't just expose content for reading. It lets an AI collaborator navigate a site's component tree, insert blocks, update fields, and publish — all through structured protocol calls that respect the system's schema and permissions. The protocol gives it guardrails. The model gives it judgment.
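To make "guardrails plus judgment" concrete, here's a minimal sketch of an action surface in TypeScript. It is not StoryPress's actual API — the tool name `insert_block`, the roles, and the block types are invented for illustration — but it shows the key property: the server, not the model, enforces schema and permissions before any action runs.

```typescript
// Hypothetical action surface: a tool the agent can call, with the
// schema and permission checks living on the server side.
type Role = "viewer" | "editor" | "publisher";

interface Tool<I, O> {
  name: string;
  requires: Role;                  // permission guardrail
  validate: (input: I) => boolean; // schema guardrail
  run: (input: I) => O;
}

interface Block { type: string; text: string }
const page: Block[] = [{ type: "heading", text: "Hello" }];

const insertBlock: Tool<{ index: number; block: Block }, Block[]> = {
  name: "insert_block",
  requires: "editor",
  validate: ({ index, block }) =>
    index >= 0 &&
    index <= page.length &&
    ["heading", "paragraph"].includes(block.type),
  run: ({ index, block }) => {
    page.splice(index, 0, block);
    return page;
  },
};

// The server decides whether a call is allowed — the model never bypasses this.
function callTool<I, O>(tool: Tool<I, O>, role: Role, input: I): O {
  const rank: Record<Role, number> = { viewer: 0, editor: 1, publisher: 2 };
  if (rank[role] < rank[tool.requires]) {
    throw new Error(`${tool.name}: forbidden for role ${role}`);
  }
  if (!tool.validate(input)) {
    throw new Error(`${tool.name}: invalid input`);
  }
  return tool.run(input);
}
```

An editor calling `callTool(insertBlock, "editor", { index: 1, block: { type: "paragraph", text: "Body" } })` succeeds; the same call as a viewer is rejected before the handler ever runs.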

The Marriage, Not the Funeral
The people declaring MCP dead are often making a false binary. They're not wrong that CLI tools and direct code execution can save tokens — but they overlook that enterprise and organizational use cases demand the structure, telemetry, security, and observability that MCP provides.
The real insight isn't "MCP vs. code execution." It's MCP with code execution.
Cloudflare has been articulating this better than anyone with what they call "Code Mode" — the idea that agents should perform tasks by writing code that calls APIs, rather than by making tool calls one at a time. They've demonstrated that converting an MCP server into a TypeScript API can reduce token usage by 81% compared to traditional tool-calling patterns.
Think about that architecture: you still have MCP as the structured interface — the contract between agent and service — but instead of the model firing off dozens of individual tool calls, it writes a single script that chains those calls together, executes it in a sandbox, and returns just the result. The protocol provides the schema, the guardrails, and the discoverability. The code execution provides the efficiency.
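A toy simulation makes the difference visible. The `api` object below stands in for a typed interface generated from an MCP server's schema (the method names are invented); in tool-calling mode each call would be a separate model round trip, while in code mode the agent emits one script and only its return value crosses back into context.

```typescript
// Stand-in for a typed API generated from an MCP server's schema.
// Method names are illustrative, not a real service.
const wordCounts: Record<string, number> = { home: 120, about: 80, blog: 540 };

const api = {
  listPages: () => Object.keys(wordCounts),
  getWordCount: (page: string) => wordCounts[page] ?? 0,
};

// The script an agent might write: it chains several calls inside the
// sandbox and returns only a small result, so the intermediate data
// never enters the model's context window.
function agentScript(): { longest: string; words: number } {
  let longest = "";
  let words = -1;
  for (const page of api.listPages()) {   // call 1
    const n = api.getWordCount(page);     // calls 2..N, all local
    if (n > words) {
      longest = page;
      words = n;
    }
  }
  return { longest, words };              // only this crosses the boundary
}
```

Four API calls happen here, but the model pays tokens for exactly one script and one small return value — that's the whole trade.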
Read more: Code Mode: the better way to use MCP — Cloudflare Blog

Cloudflare as Infrastructure for the Next Wave
This is where the infrastructure story matters. We chose Cloudflare as the foundation for StoryPress, and the bet has continued to pay off in ways we didn't fully anticipate.
Cloudflare recently released Dynamic Workers into open beta — V8 isolate-based sandboxes that start in milliseconds, use megabytes of memory, and are roughly 100x faster and up to 100x more memory-efficient than containers. These aren't theoretical advantages. When your agent needs to spin up a secure execution environment for every user request, on demand, the difference between milliseconds and seconds is the difference between a product that feels alive and one that feels like it's buffering.
Their own Cloudflare MCP server now exposes the entire Cloudflare API — over 2,500 endpoints — through just two tools, search and execute, consuming roughly 1,000 tokens. An equivalent traditional MCP server would consume over a million tokens. That's not optimization — that's a fundamentally different architecture.
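The shape of that two-tool design can be sketched in a few lines. This is a guess at the pattern, not Cloudflare's implementation — the catalog entries are invented — but it shows why token cost stays flat: the full endpoint list lives on the server, and the model only ever pulls in the slices it searches for.

```typescript
// Hypothetical two-tool surface: a large endpoint catalog exposed
// through just `search` and `execute`. Entries are invented examples.
interface Endpoint {
  name: string;
  description: string;
  handler: (params: Record<string, string>) => unknown;
}

const catalog: Endpoint[] = [
  {
    name: "zones.list",
    description: "List zones on the account",
    handler: () => ["example.com"],
  },
  {
    name: "dns.records.create",
    description: "Create a DNS record in a zone",
    handler: (p) => ({ created: p.type, zone: p.zone }),
  },
  // ...thousands more in a real catalog, none of them in the model's context
];

// Tool 1: the model discovers relevant endpoints on demand.
function search(query: string): { name: string; description: string }[] {
  const q = query.toLowerCase();
  return catalog
    .filter((e) => (e.name + " " + e.description).toLowerCase().includes(q))
    .map(({ name, description }) => ({ name, description }));
}

// Tool 2: the model invokes an endpoint it found, by name.
function execute(name: string, params: Record<string, string> = {}): unknown {
  const ep = catalog.find((e) => e.name === name);
  if (!ep) throw new Error(`unknown endpoint: ${name}`);
  return ep.handler(params);
}
```

Whether the catalog holds two endpoints or 2,500, the model's contract is the same two tools — discovery scales on the server, not in the prompt.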
Cloudflare didn't just see the need for faster compute. They saw that the future of agentic AI demands a marriage: structured protocols for discoverability and governance, code execution for efficiency and expressiveness, and lightweight sandboxes to make it all safe. They built the foundational layers for exactly this.
Read more: Sandboxing AI agents, 100x faster — Cloudflare Blog

What This Means for Agencies and Their Clients
At Kworq, this isn't abstract infrastructure talk. It changes how we work with clients every day.
Instead of building bespoke integrations for every workflow, we build MCP-powered surfaces that AI collaborators can use directly. A client's content team doesn't need to learn a new tool — they work with an AI that already understands their CMS schema, their brand components, their publishing workflow. The agency doesn't disappear; it designs the systems that make AI collaboration possible, useful, and safe.
The old agency model was: client has a problem, agency builds a thing, delivers it, moves on. The new model is: agency builds living systems that clients and AI operate together, continuously. MCP is the connective tissue that makes that possible — not because it's a perfect protocol, but because it's the one the ecosystem has converged on, and because it's evolving fast enough to keep up.
The Road Ahead
The 2026 MCP roadmap has shifted from release milestones to priority areas: transport scalability, agent communication, governance maturation, and enterprise readiness. The project's maintainers are focused on what needs to be fixed before MCP can hold up in real production use.
The numbers tell the story: 97 million monthly SDK downloads, over 10,000 active servers, and first-class client support across Claude, ChatGPT, Cursor, Gemini, and VS Code. In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, with OpenAI and Block as co-founders.
MCP isn't dying. It's graduating. And the agencies, platforms, and builders who understand that the protocol's value lies not in aggregation but in action — in the marriage of structured interfaces and dynamic execution — are the ones building what comes next.
We're one of them.

P.S. — This post was built the way it describes.
This entire blog post was drafted, researched, written, and published into StoryPress by Claude — through the same MCP server we've been talking about. Every image placement, every section edit, every YouTube embed was executed via MCP tool calls from a conversation on a phone.
And here's the kicker: midway through building the post, Claude switched from individual tool calls to code execution — writing JavaScript that ran inside a sandboxed V8 isolate on Cloudflare, reading the full story JSON in sandbox memory, mutating it, streaming the preview, and saving, all in a single round trip. The full story payload never entered the model's context window. Only the return value came back.
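For readers who want the shape of that round trip, here's an illustrative reconstruction with a toy payload. `runInSandbox` stands in for the isolate boundary (the real one runs on Cloudflare's infrastructure, not in-process like this); the point is the data flow: the full JSON is parsed and mutated inside the sandbox, and only a small summary is returned.

```typescript
// Toy model of the sandbox round trip. The real boundary is a V8
// isolate; here a plain function call stands in to show the data flow.
interface Story {
  title: string;
  blocks: { type: string; text: string }[];
}

function runInSandbox<T>(script: (story: Story) => T, storyJson: string): T {
  // The full payload is parsed inside the "sandbox"...
  const story: Story = JSON.parse(storyJson);
  // ...and only the script's return value leaves it.
  return script(story);
}

const fullPayload = JSON.stringify({
  title: "MCP post",
  blocks: [{ type: "paragraph", text: "draft" }],
});

const summary = runInSandbox((story) => {
  story.blocks.push({ type: "paragraph", text: "P.S. section" }); // mutate in place
  return { title: story.title, blockCount: story.blocks.length };  // tiny result
}, fullPayload);
```

However large `fullPayload` grows, `summary` stays two fields — which is exactly why the story JSON never had to enter the model's context.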
The estimated token savings: 60–70% compared to traditional sequential tool calling.
That's not a benchmark from a whitepaper. That's what happened in this conversation, building this post, in real time. The marriage of structured MCP interfaces and dynamic code execution isn't a thesis we're arguing — it's the method we used to publish the argument.
We didn't plan this meta moment. But we're not going to pretend it didn't happen.

