Did you sign an NDA with your AI?
Most people didn't. Most don't think about it.
You paste client briefs. Internal memos. Pricing strategies. HR decisions. You ask the model to summarize your contracts, draft your proposals, analyze your competitive situation. You share context freely -- because that's how you get better answers.
But the model isn't bound to silence.
The platform has terms of service, not loyalty.
What You're Actually Agreeing To
When you use a cloud AI -- any of the major ones -- you're operating under their terms of service. Those terms vary by provider, by tier, by jurisdiction, and they change.
Depending on what you've paid for and what you've configured, your inputs may be used to improve the model. They may be logged, reviewed by humans, retained for a period of time you didn't choose.
Most enterprise tiers have stronger protections. Most free and standard tiers have fewer. Most users never read closely enough to know the difference.
This isn't a conspiracy. It's just what the terms say, in plain language, if you look.
The problem isn't that the AI is malicious. The problem is that the relationship isn't confidential by default. You assumed it was. You were wrong.
The Chain Nobody Follows
Here's where it gets complicated.
Your clients shared information with you under your confidentiality obligations. That's the contract -- explicit or implied. When you engage a professional, you expect that what you share stays in the room.
When you paste that information into a cloud model, you've extended the chain. You've added a third party your client didn't agree to. You made a decision on their behalf -- and most of them don't know you made it.
A law firm that feeds a client's merger details into a cloud AI to draft a summary isn't just taking a privacy risk. Disclosing privileged material to a third party can waive solicitor-client privilege. The analysis doesn't have to leak for the violation to be real.
An accounting firm that uploads financial statements to generate a narrative for a board presentation has potentially exposed confidential filings to a third-party system. In Canada, that touches PIPEDA. If the client is in a regulated industry, it touches their sector rules too.
An HR director who pastes performance review notes into an AI to help write a termination letter has moved employee data outside the organization's control. That data may be retained. It may be used to train future models. The employee has rights over it -- rights that weren't considered.
The legal exposure is real and it sits with you, not with the AI provider. The terms of service typically make that clear. You accepted the risk when you clicked agree.
The Canadian Context
PIPEDA -- Canada's federal private-sector privacy law -- requires organizations to take reasonable steps to protect personal information, including when working with third-party processors. If you're using a US-based cloud AI, you've just moved Canadian personal data across the border under a data processing agreement you probably didn't read.
Quebec's Law 25 goes further. It requires organizations to disclose when personal information is sent outside the province and to complete a privacy impact assessment before the transfer happens.
Most businesses using AI tools haven't done any of this. Not because they're negligent -- because the tooling is frictionless and the legal surface isn't obvious.
"I used ChatGPT to write a draft" doesn't feel like a cross-border data transfer. But if that draft was based on client information, it is.
Where the Context Lives
I've been building private AI systems for Canadian businesses for the past two years. The question I get asked most isn't "will it work?" -- it's "where does the data go?"
That question is the right one.
Cloud AI answers from your data -- then keeps it. The context you provided doesn't disappear when you close the browser tab.
Local inference answers from your data -- then forgets it. The model runs on hardware you control. The data never leaves. There's no upstream provider, no terms of service to misread.
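If "local" sounds abstract, here is roughly what it looks like in practice. This is a minimal sketch, not a production setup: it assumes an Ollama server (https://ollama.com) is already running on your own machine with a model pulled, and the model name and prompt are illustrative.

    import json
    import urllib.request

    def summarize_locally(text: str, model: str = "llama3.1") -> str:
        """Summarize a document with a locally hosted model.
        The only network hop is to localhost: no upstream provider,
        no third party added to the confidentiality chain."""
        payload = json.dumps({
            "model": model,
            "prompt": "Summarize the following document:\n\n" + text,
            "stream": False,  # wait for one complete response
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's local endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

The document text in that call never crosses your network edge. That, not model quality, is the property you're buying.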
For most casual use, cloud AI is fine. For client data, for anything under a confidentiality obligation, for anything regulated -- the question of where the context lives matters.
A private AI appliance that costs $12,000 upfront is expensive. A breach investigation, a regulatory filing, and a client relationship ended over a data incident cost more.
Not an Argument Against AI
I use AI every day. I'm not suggesting anyone stop.
I'm suggesting that before you paste something into a cloud model, you ask a simple question: does this belong to me?
If it belongs to a client, to an employee, to a regulated category -- the tool you reach for should be one that earns silence, not one that operates under terms of service.
It's less about whether to trust AI. It's more about whether you've read the terms of the relationship.