I've been in technology for forty years. Long enough to have watched several operating models arrive, get declared revolutionary, and then become invisible infrastructure nobody thinks about anymore.

The terminal gave way to the desktop. The desktop gave way to the web. The web gave way to mobile. Each time, the people who saw it coming early had an uncomfortable few years, then a very good decade.

We're at one of those moments again. And this one is moving faster than the others.


The forty-year contract we never noticed

In 1984, the Macintosh brought windows, icons, menus, and a pointer to the mass market -- WIMP, an interface Xerox PARC had pioneered a decade earlier. The idea was elegant: make computers legible to people who don't write code. Click this, drag that, navigate here, fill in this form.

It worked so well that we never questioned it again.

For four decades, every piece of software ever built assumed you would operate it. You opened it. You navigated it. You pasted data from one place to another. You waited for results. Then you did the next step. The computer was a tool. You were the operator.

Nobody called it a contract. But it was one.

That contract just ended.


What actually happened on a Tuesday

Last week I needed a competitive analysis. Not a quick scan -- a real one. Pricing trends, recent moves from three competitors, gaps in their messaging, notes from conversations I'd had over the past month, cross-referenced against a pitch I had on my calendar for Thursday.

I didn't open a browser. I didn't open a spreadsheet. I typed a single sentence describing what I needed.

Four minutes later, a structured document was sitting in the right folder. Under a dollar.

Here's what happened in those four minutes. The AI read my local files -- notes, documents, saved references -- without me pointing it to any of them. It checked my calendar context. It pulled from my message threads for relevant fragments I'd flagged. It synthesized. It formatted. It saved to the right place.

Nobody operated anything. The system knew the task, understood the context, and acted.

That's not a productivity gain. That's a different job description for the software.


The protocol that made it possible

This kind of thing requires AI agents to reach into systems they didn't build. Your calendar. Your file system. Your CRM. Your message history. Without a shared protocol, every integration is a custom project.

MCP -- the Model Context Protocol -- is the answer to that. It's an open standard that lets AI agents communicate with external tools and data sources through a consistent interface. Any vendor, any stack, any agent can plug in.

The numbers tell the adoption story: 12 million daily downloads of the Python SDK alone.

I want to be clear about what that means for business leaders. You no longer need an AI product purpose-built to connect with your particular tools. You need an AI agent that speaks MCP, and tools that expose MCP servers. The rest is configuration. The wall between "what AI can do" and "what my organization actually has" is coming down.
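To make "the rest is configuration" concrete, here is a toy sketch of the shape of an MCP exchange. MCP itself is JSON-RPC 2.0 carried over stdio or HTTP, and the method names tools/list and tools/call come from the spec; everything else below -- the tool names, the dispatcher, the payloads -- is a hypothetical stand-in, not the real SDK.

```python
import json

# Illustrative only: two fake tools a server might expose to an agent.
# Real MCP servers would wrap a calendar, a file index, a CRM, etc.
TOOLS = {
    "calendar.lookup": lambda args: {"events": ["Thursday pitch, 2pm"]},
    "files.search": lambda args: {"hits": [f"notes/{args['query']}.md"]},
}

def handle(request: str) -> str:
    """Dispatch one JSON-RPC request to a registered tool and return the reply."""
    req = json.loads(request)
    if req["method"] == "tools/list":
        # Discovery: the agent asks what this server can do.
        result = {"tools": sorted(TOOLS)}
    elif req["method"] == "tools/call":
        # Invocation: the agent calls a tool by name with arguments.
        params = req["params"]
        result = TOOLS[params["name"]](params.get("arguments", {}))
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# An agent first discovers the tools, then calls one -- no vendor-specific glue.
print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
```

The point of the sketch is the interface, not the internals: because discovery and invocation are standardized, any agent that speaks the protocol can use any server that exposes it.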

This is what open protocols do. They commoditize the integration layer and shift competition up the stack. It happened with HTTP. It happened with REST. It's happening again.


Why the public version is not enough

The public AI services are good. Sometimes they're excellent. For drafting, for research on public information, for generic reasoning tasks -- fine, use them.

But the moment your competitive advantage lives in proprietary data, you have a problem.

Your customer history. Your pricing models. Your internal processes. Your deal flow. Your operational knowledge built over years. None of that should leave your walls. Not because the public services are untrustworthy -- but because the moment data moves off-premises, you've introduced a dependency you can't fully control, and you've handed your context to infrastructure someone else operates.

Private AI in 2026 is not a chatbot behind a firewall. That mental model is ten years old. What I mean by private AI is an operational layer -- agents that understand your organization's actual context, act on it, and keep every bit of it inside your infrastructure.

The value compounds in a specific way. Every interaction teaches the system more about how your organization works. Every task completed adds to the context the next task can use. That accumulation doesn't happen with a public service you prompt fresh each time.
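The accumulation mechanism is simple enough to sketch: each completed task writes back into a shared memory that the next task reads, so every run starts richer than the last. This is a toy illustration of the structure, not a real product; all names here are hypothetical.

```python
from collections import defaultdict

# Organizational memory: notes accumulated per topic across tasks.
memory: dict[str, list[str]] = defaultdict(list)

def run_task(topic: str, prior: list[str]) -> str:
    """Stand-in for doing the work; a real agent would feed `prior` into its context."""
    return f"{topic} (informed by {len(prior)} prior notes)"

def complete(topic: str, note: str) -> str:
    """Run a task with existing context, then write the new learning back."""
    result = run_task(topic, memory[topic])
    memory[topic].append(note)  # accumulation: the next run starts richer
    return result

print(complete("competitor-pricing", "Q1 price cut observed"))
print(complete("competitor-pricing", "new tier announced"))
```

Contrast this with a public service you prompt fresh each time: the second call there starts from zero, while here it starts from everything the first call learned.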

Data that never leaves builds trust that compounds. Internally with your team, who see results based on real organizational memory. And externally, when clients understand their information never touched a third-party cloud.

This is where the real separation happens between companies that use AI and companies that are built on it.


What this means for the software you're paying for

I'll be direct about something the SaaS industry would prefer I didn't say.

Single-purpose applications -- the ones that charge $15 or $50 a month to do one specific workflow -- are the most exposed category in software right now. Not because they're badly built. Because the thing they built is a workflow. And workflows are exactly what AI agents are good at.

The apps that connect two things, automate one process, pull from one source and push to another -- those are the first to go. Not dramatically. Quietly. The renewal comes up and someone asks whether they still need it.

The SaaS companies that survive this are the ones sitting on data moats, or building the infrastructure layer itself. The ones in the middle -- pure workflow, no defensible data, no protocol position -- are in real trouble.

This isn't an opinion about which tools are good. It's an observation about what AI agents now do by default.


The layer being built right now

The companies worth paying attention to aren't replacing their tools. They're building something above them.

An intelligence layer that connects to everything they already have. That understands their context. That acts without waiting for a click. That gets more useful over time because it accumulates organizational memory.

The architecture question isn't complex once you see it clearly. You're not choosing between old software and new software. You're choosing between a model where software waits for humans to operate it, and a model where software acts on behalf of humans within defined boundaries.
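"Within defined boundaries" is the load-bearing phrase in that model, and it can be sketched in a few lines: the agent may act autonomously, but only through actions the organization has explicitly allowed, with sensitive actions held for a human. The action names and policy sets below are hypothetical, chosen only to show the pattern.

```python
# Boundary policy: what the agent may do on its own, and what needs a human.
ALLOWED_ACTIONS = {"read_files", "draft_document", "save_to_folder"}
REQUIRES_APPROVAL = {"send_email"}

def execute(plan: list[str]) -> list[str]:
    """Run each planned action, enforcing the boundary rules."""
    log = []
    for action in plan:
        if action in ALLOWED_ACTIONS:
            log.append(f"done: {action}")          # autonomous within bounds
        elif action in REQUIRES_APPROVAL:
            log.append(f"held for human approval: {action}")
        else:
            log.append(f"blocked: {action}")       # outside the boundary entirely
    return log

print(execute(["read_files", "draft_document", "send_email", "wire_funds"]))
```

The design choice worth noticing: the boundary lives in policy, not in the agent's judgment, so widening or narrowing what the software may do on your behalf is a configuration change, not a rebuild.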

I've watched that kind of shift happen four times in forty years. The pattern is consistent: the transition feels slow, then it feels obvious in retrospect, and the people who waited for certainty before moving always say the same thing afterward.


The question isn't whether this is coming. The question is whether your next architecture decision treats it as arrived.

Are you building for that? Or still optimizing for the contract that already ended?