Most AI products are built for people who already understand AI.

The dashboards. The APIs. The model selection. The prompt engineering. All of it assumes someone technical is driving. And when that person doesn't exist — which is most businesses — the technology just sits there, impressive and useless.

I've been thinking about a different kind of product. Not a platform. Not a service. A physical appliance. A box you plug in that does what cloud AI promises, without the cloud and without the complexity.

Here's what that actually looks like.


The Box

A small, quiet device. About the size of a thick book. It arrives at your office. You plug it into power, connect it to your network, open a browser.

That's the setup.

No terminal. No command line. No Docker. No cloud account. You type your company name, set a password, and you're in.

Inside, it runs a language model built for business tasks — reading documents, answering questions, summarizing, extracting data. The model lives entirely on the device. Nothing goes to the cloud. Nothing ever will.


What Happens on Day One

The first thing it asks is simple: where are your documents?

You point it at whatever your business already uses. A shared drive. A synced folder. A pile of contracts, invoices, policies, manuals. It starts reading.

Within hours, anyone in the office can ask questions in plain language:

  • "What are the payment terms in our contract with Meridian?"
  • "Summarize last month's incident reports."
  • "What's our remote work policy?"
  • "Pull all invoices from Q4 over $10,000."

No search syntax. No filters. No digging through folder structures. Just a question and an answer, with the source document attached so you can verify it yourself.

This isn't about replacing anyone's judgment. It's about eliminating the hours spent finding, reading, and cross-referencing things that are already written down somewhere.


How It Works

When you ask a question, the system doesn't pull the answer from the model's memory. It searches everything it has indexed, finds the most relevant passages, and uses those to construct a response.

Two things matter about this.

First, the answers come from your documents, not from the model's general training data. Second, when you add new documents, the system indexes them automatically. No retraining. No waiting.
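To make the retrieve-then-answer flow concrete, here's a minimal sketch in Python. The names, the passage splitting, and the term-overlap scoring are all illustrative simplifications, not the appliance's actual internals; a real system would chunk documents more carefully and score with embeddings rather than word overlap.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

class PassageIndex:
    """Holds document passages; new documents can be added at any time,
    with no retraining step."""

    def __init__(self):
        self.passages = []  # (source document, passage text, token counts)

    def add_document(self, name, text):
        # Split into rough passages on blank lines; real systems chunk
        # more carefully (by section, page, or sliding window).
        for chunk in text.split("\n\n"):
            if chunk.strip():
                self.passages.append((name, chunk, Counter(tokenize(chunk))))

    def search(self, question, top_k=2):
        q = Counter(tokenize(question))
        scored = []
        for name, text, counts in self.passages:
            # Simple term-overlap score; production systems use embeddings.
            score = sum(min(q[t], counts[t]) for t in q)
            scored.append((score, name, text))
        scored.sort(reverse=True)
        return [(name, text) for score, name, text in scored[:top_k] if score > 0]

index = PassageIndex()
index.add_document("meridian_contract.pdf",
    "Payment terms: net 45 days from invoice date.\n\n"
    "Termination requires 90 days notice.")
index.add_document("hr_policy.docx",
    "Remote work is permitted up to three days per week.")

# The retrieved passages, with their source file attached, are what the
# model answers from; this is also how the answer stays verifiable.
hits = index.search("What are the payment terms with Meridian?")
```

Because the answer is assembled from retrieved passages rather than recalled from training, the source document can always be shown alongside it.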

The model runs locally on a GPU built into the appliance. A few years ago, this would have required a server room. Today, models capable of genuine reasoning run on hardware that fits under a desk and costs less than a used car.

This isn't experimental anymore. It works.


Your Data Stays in Your Building

No outbound connection to any AI provider. No telemetry. No usage analytics sent home. No model updates quietly phoning a server somewhere.

Your data stays inside your network. Full stop.

For a marketing agency, that's convenient. For a law firm handling privileged communications, it's mandatory. For a healthcare provider bound by privacy legislation, it's the difference between using AI at all and not being able to.

That's why some businesses still rely on fax machines. Not because fax is good technology. Because it's the only channel they trust.

The compliance story writes itself when the architecture is simple: data doesn't cross a boundary you don't control.


A Typical Day

Morning. The office manager asks for a summary of overnight emails and anything that needs immediate attention. The appliance processes the inbox locally. Answer in 30 seconds.

Mid-morning. A project manager needs a clause from a vendor contract signed in 2023. Instead of opening every PDF in the contracts folder, she asks. The system returns the exact clause, the page number, and a link to the document.

Lunch. The HR lead drafts a new policy and asks the appliance to check it against existing ones for contradictions. It finds two.

Afternoon. A client calls with a question about their account. The receptionist asks the appliance while the client is still on the line. Answer in 8 seconds.

None of these tasks required a developer. None required training. None required anyone to understand what a language model is.


What It Doesn't Do

Honesty matters here.

This isn't a general-purpose AI. It won't make strategic decisions. It won't replace your accountant. It won't automate everything.

What it does — and does well — is eliminate the time your team spends searching, reading, cross-checking, and summarizing internal documents. That work doesn't show up on org charts, but it eats hours every day.

For a 40-person company where several people spend half their day looking things up, removing that friction isn't glamorous. But it's real.


The Cost

The hardware sits in the $8,000–$15,000 CAD range. One-time.

Compare that to:

  • Cloud AI at $50–$100/user/month for 40 users: $24,000–$48,000/year
  • A junior IT hire to manage cloud AI tools: $55,000+/year
  • A consultant to build a custom solution: $80,000–$200,000 as a project

The appliance pays for itself in months, not years. And it keeps working without a recurring bill or a dependency on someone else's pricing decisions.
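A back-of-envelope check on the "months, not years" claim, using midpoints of the ranges quoted above. The specific figures chosen here are assumptions for illustration:

```python
# All figures in CAD, taken as midpoints of the ranges above.
appliance_cost = 12_000        # one-time, mid-range of $8,000-$15,000
users = 40
cloud_per_user_month = 75      # mid-range of $50-$100 per user per month

monthly_cloud_cost = users * cloud_per_user_month
payback_months = appliance_cost / monthly_cloud_cost
print(round(payback_months))
```

At these midpoints the appliance matches the cloud bill in about four months, before counting the IT hire or consultant costs it also avoids.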


Who This Is For

Not everyone. Let me be specific.

Good fit: Professional services firms. Law, accounting, consulting. Healthcare clinics. Manufacturing companies with technical documentation. Property management. Insurance brokerages. Logistics. Any business with a lot of documents and not a lot of IT.

Not a fit yet: Real-time systems. Cutting-edge research. Companies that already have sophisticated data infrastructure and engineering teams.

The sweet spot is the business that knows it should be using AI, but has no one on staff to figure it out — and no appetite to become a tech company in order to use technology.


What Comes Next

The components exist. The models are ready. The hardware is available and affordable.

What's been missing is the product thinking — turning powerful open-source tools into something a non-technical person can actually use, reliably, inside real operational and regulatory constraints.

That's the hard part. Not the AI. The experience.

I've spent decades building enterprise systems. The pattern is always the same: technology arrives first, then tools for experts, and only later — sometimes years later — the product for everyone else.

We're at that transition point with AI. The expert tools are here. The everyone-else product is overdue.

That's what I'm building.