Why Your NDA Might Not Cover AI & How to Fix It

Jon Nordmark, CEO and Co-Founder
August 20, 2025

Introduction: The NDA Blind Spot

Non-disclosure agreements (NDAs) are designed to keep corporate secrets safe. But in today’s AI-driven world, most NDAs don’t address a new reality: What happens if someone uploads confidential strategy into a public LLM like ChatGPT?

The answer could leave companies dangerously exposed.

When Helpful AI Creates Risk

Innovation leader Gee Mann recently shared a story that illustrates this gap.

During a partner workshop, a participant used ChatGPT to summarize ideas in real time. The AI made the session more engaging, but it also meant feeding private strategy details into an AI system the company didn't own or control.

The problem? Their NDA only covered traditional channels like email and printouts. It never mentioned AI.

Their lawyer spotted the gap and quickly drafted an update:

“No Confidential Information may be uploaded to, processed by, or disclosed to any publicly available AI/ML system without prior written consent.”

A simple clause with big implications.

The Case for Private AI

Colleen Shannon responded to Gee’s post with an important point: don’t just ban AI—use the right kind of AI.

At Iterate.ai, we call this Private AI. It means:

  • Deploying LLMs on infrastructure you control.

  • Training models to reflect your brand’s unique voice.

  • Operating inside governed, secure environments.

This approach allows organizations to innovate with AI while ensuring sensitive information stays private.
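To make "infrastructure you control" concrete, here is a minimal sketch of the pattern. It is illustrative only, not Iterate.ai's actual stack: it assumes an OpenAI-compatible inference server (such as vLLM or llama.cpp) running inside your own network, and the endpoint, API key, and model name are placeholders.

```python
# Minimal sketch: send confidential text to a self-hosted LLM endpoint
# inside your own network, instead of a public service.
from openai import OpenAI

# Placeholder values -- point the client at YOUR internal endpoint.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # self-hosted server
    api_key="internal-key",  # credential managed by your own infra
)

response = client.chat.completions.create(
    model="your-private-model",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize these workshop notes: ..."},
    ],
)
print(response.choices[0].message.content)
```

The design point is that prompts and completions never leave your network, so confidentiality is enforced by architecture rather than by policy alone.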

The Bigger Picture: AI Governance Wake-Up Calls

This NDA story reflects a broader issue. Public LLMs introduce risks companies may not fully understand or control.

As I’ve written in my series Three Wake-Up Calls of Agentic AI, the concerns are mounting:

  • Part 1.0: 7 Hidden Risks of ChatGPT Agents

  • Part 1.1: The Legal Risks of Using Public LLMs and Agents

  • Part 1.2: 70,000 ChatGPT Conversations Just Leaked — A Governance Wake-Up Call

The reality? Even unintentional use of public AI can create exposure. Worse, companies have little visibility into how those models are retrained, tuned, or shared.

And many risks remain unknown.

Questions Every Company Should Ask

  • Has your NDA been updated to address AI?

  • Have you set policies banning (or strictly governing) public LLM use?

  • Do you have a Private AI strategy in place?

Why the Future is Private AI

Careful, well-governed companies will migrate toward Private AI because it keeps secrets secret.

At Iterate.ai, we design Private LLMs that are:

  • Tailored to your brand voice (yes—even with emojis 😉).

  • Scalable from Nano-LMs to enterprise deployments.

  • Deployable anywhere: private cloud, on-prem appliances, AI PCs, or offline handhelds.

For developers, our AgentOne coding assistant goes beyond autocomplete. It can code, test, and containerize autonomously—running entirely inside your private environment.

Conclusion

AI is evolving faster than contracts, compliance policies, and risk frameworks. Companies that update NDAs, govern public AI use, and adopt Private AI will protect their competitive edge.

Because at the end of the day: control your AI, or in time, it could control you.

Read the original news story here.