Most organizations are doing AI backward.

They start with the model. They connect it to their data. They get outputs. They call it AI transformation.

Then six months later, the outputs are inconsistent. Reports from two different teams, both generated by the same system, don't agree. The AI tells the finance team one thing about a customer and the product team something different. Nobody can trace why. Trust erodes. The project stalls.

This isn't a model problem. It's a knowledge problem.


What's Actually Breaking

There's a concept called semantic drift -- and it's quietly accumulating inside every organization that has deployed AI without first structuring its knowledge.

Here's how it happens:

You have data. You add AI. The AI generates answers. But your data has no shared vocabulary. "Customer" means something slightly different in your CRM, your billing system, your support platform, and your data warehouse. "Revenue" is calculated differently by finance, sales, and operations. Nobody documented this. Everyone assumed everyone else meant the same thing.
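
Here's a toy sketch of that mismatch (the systems, fields, and figures are hypothetical): two functions answer "what is this customer's revenue?" from the same records and disagree, because each encodes its own undocumented definition.

```python
# Toy illustration of semantic drift: two systems, one question, two answers.
# The entities, fields, and business rules are hypothetical.

invoices = [
    {"customer": "ACME-001", "amount": 1200, "status": "paid"},
    {"customer": "ACME-001", "amount": 800,  "status": "issued"},
    {"customer": "ACME-001", "amount": 300,  "status": "refunded"},
]

def revenue_finance_view(customer):
    """Finance: only paid invoices count, net of refunds."""
    paid = sum(i["amount"] for i in invoices
               if i["customer"] == customer and i["status"] == "paid")
    refunded = sum(i["amount"] for i in invoices
                   if i["customer"] == customer and i["status"] == "refunded")
    return paid - refunded

def revenue_sales_view(customer):
    """Sales: everything invoiced counts, refunds included."""
    return sum(i["amount"] for i in invoices if i["customer"] == customer)

print(revenue_finance_view("ACME-001"))  # 900
print(revenue_sales_view("ACME-001"))    # 2300
# Neither function is wrong; each faithfully reports its own
# undocumented definition of "revenue" for the same customer.
```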

In the pre-AI world, these inconsistencies were manageable. Humans navigated them through tribal knowledge and manual reconciliation. The friction was real but contained.

AI removes that friction -- and amplifies the inconsistency.

Now you get confident, well-written, fast outputs that are wrong in ways that are hard to detect and harder to trace. The model isn't hallucinating. It's correctly reporting what the data says. The problem is that the data means different things in different contexts.


The Semantic Winter Risk

Pierre Bonnet, who writes on sustainable AI systems, calls the endpoint of this trajectory a "semantic winter."

Ontologies proliferate without a shared foundation. Interpretations diverge. Meaning becomes unreliable. Trust collapses. Value stops materializing at scale.

It looks like an AI problem. It's actually a knowledge architecture problem.

The organizations that avoid this aren't necessarily using better models. They're doing something harder and less glamorous: investing in a shared conceptual layer that sits between their raw data and their AI systems. A structured representation of what things mean -- not just what they are called.


What This Means for Private AI

This is where Private AI becomes either a solution or a faster path to the same failure.

An AI appliance running inside your organization has access to everything: your files, your processes, your communications, your databases. If the knowledge underneath it is unstructured and semantically inconsistent, the appliance will be faster and more confident about being wrong.

The semantic layer isn't optional. It's the foundation.

Getting this right means building a knowledge graph -- a structured representation of your organization's entities, relationships, and vocabulary -- before you layer AI on top. That graph becomes the source of truth that all AI outputs are anchored to.

Not another database. Not another data lake. A graph that models what things mean and how they relate.
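
As a rough sketch of what that can look like (the vocabulary, relationships, and identifiers below are illustrative assumptions, not a prescribed schema): entities and their relationships held as typed triples, alongside definitions that every downstream system and every AI tool resolves against.

```python
from dataclasses import dataclass

# Minimal knowledge-graph sketch: typed triples plus a shared vocabulary.
# The terms, relationships, and identifiers are illustrative only.

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# The vocabulary: what a term means, agreed once, referenced everywhere.
vocabulary = {
    "Customer": "A legal entity with at least one signed contract.",
    "Revenue":  "Sum of paid invoice amounts, net of refunds.",
}

# The graph: what things are and how they relate.
graph = {
    Triple("ACME-001", "is_a",         "Customer"),
    Triple("ACME-001", "has_contract", "CTR-42"),
    Triple("CTR-42",   "billed_via",   "INV-7"),
    Triple("INV-7",    "has_status",   "paid"),
}

def related(subject, predicate):
    """Follow one relationship outward from a node."""
    return [t.obj for t in graph
            if t.subject == subject and t.predicate == predicate]

# An AI answer about ACME-001 can be anchored here, to one agreed meaning,
# instead of to whichever source system happened to be queried first.
print(related("ACME-001", "has_contract"))  # ['CTR-42']
print(vocabulary["Revenue"])                # the single shared definition
```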


The Practical Implication

If you're evaluating AI tools or planning a deployment, the questions that matter aren't about the model. They're about the knowledge layer:

  • Do you have a consistent vocabulary for your core entities -- customers, products, contracts, transactions?
  • Can your different systems agree on what those entities mean?
  • Do you have a way to represent relationships between them that all your AI tools can use as a shared reference?

If not, you're not building AI infrastructure. You're building a faster way to generate confidently wrong answers.

The AI summer is real. The semantic winter is coming for organizations that skip this step.


Inspired by Pierre Bonnet's framing of semantic climate instability in AI systems.