AI Is Not Blocked by Regulation. It Is Blocked by Leadership
Many companies are still treating enterprise AI as a legal exception instead of a leadership decision, and that delay is creating more risk, not less.

Mollick recently made an observation that many operators have now seen firsthand: two companies in the same industry can face the same regulatory environment, the same market pressure, and the same model landscape, yet one has already deployed enterprise AI broadly while the other still routes every use case through a committee and worries that current models automatically train on company data.
His conclusion was simple: this is a leadership question.
I think that is exactly right.
Too many organizations still describe AI hesitation as an IT problem, a legal problem, or a compliance problem. In practice, those functions are often reacting rationally to a vacuum above them. When leadership does not define acceptable risk, measurable upside, and clear operating boundaries, the default response inside the organization is delay.
That delay is usually framed as prudence. More often, it is unmanaged indecision.
The Training Myth Should Not Still Be Running the Discussion
One reason this debate keeps dragging on is that many executives are still operating on outdated assumptions about how enterprise AI products handle customer data.
Today, major enterprise offerings publicly state that customer business data is not used to train their foundation models by default; that includes the enterprise tiers of ChatGPT, Claude, and Gemini.
That does not mean every tool is safe for every workload. It does mean the conversation should start from current facts, not 2023-era fears.
If leadership still allows the organization to debate AI as though every prompt is automatically becoming public training data, then the real blocker is not law. It is executive failure to update the operating assumptions.
Blocking Official Tools Does Not Eliminate AI Use
This is the part many companies still refuse to confront: when employees are under pressure to move faster, blanket restrictions do not remove demand. They displace it.
Microsoft's 2024 Work Trend Index reported that 78% of AI users were bringing their own AI tools to work. The same report found that many people were reluctant to admit they were using AI for important tasks.
That is the predictable outcome of unsanctioned demand.
When the approved path is too slow, too narrow, or too politically risky, people do not stop experimenting. They move outside formal governance.
So the company ends up with the exact opposite of what the blockers intended:
- less visibility,
- weaker policy enforcement,
- fragmented tool use,
- and sensitive work happening in personal accounts or unreviewed workflows.
The stated goal is risk reduction. The actual result is shadow AI.
The Real Issue Is Not Access. It Is Operating Design
Most risk and legal teams are not irrationally anti-AI. They are doing what institutions always do when accountability is unclear: they narrow permissions.
The harder question is not, "Should we allow ChatGPT, Claude, or Gemini?"
The harder question is: "What exactly are we authorizing people to do, with which data, under which controls, and who owns the consequences when something goes wrong?"
That is not a procurement question. It is not a vendor FAQ question. It is an operating model question.
Leadership teams that move faster usually do three things earlier than everyone else:
- They separate low-risk use cases from high-risk ones.
- They define what data can and cannot be used with sanctioned tools.
- They assign clear ownership for adoption, incidents, policy, and enablement.
That is why the real gap is usually not between regulated and unregulated industries. It is between organizations that made the leadership call and those that postponed it.
The Market Has Already Moved Past "Can We Use AI?"
The external signal is also getting harder to ignore.
Deloitte's 2026 State of AI in the Enterprise report finds that only 34% of companies are genuinely reimagining their business around AI, and only one in five has a mature governance model for autonomous AI agents.
That combination matters.
It means many companies are still stuck in the most dangerous middle state:
- they know AI matters,
- they are increasing investment,
- but they have not redesigned workflows, permissions, or governance deeply enough to scale it with confidence.
That is why access alone will not save them. Buying licenses without changing execution is just an expensive form of symbolic progress.
What Serious Leadership Looks Like Now
If leaders want to move past fear without being reckless, the answer is not "open everything." It is to define bounded execution.
That starts with a few concrete moves:
1) Sanction a Default Toolset
Do not force the company to improvise. Pick the approved tools, document why they are approved, and give teams a legitimate path to use them.
2) Publish a Data Boundary Model
Make it explicit which categories of data are:
- allowed,
- allowed with controls,
- escalated,
- or prohibited.
Most confusion in AI governance is really classification confusion.
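To make that less abstract, here is a minimal sketch of what a written data boundary model can look like. The category names, tiers, and controls are illustrative assumptions for a hypothetical organization, not a canonical taxonomy.

```typescript
// Illustrative sketch only: the categories, tiers, and controls below are
// assumptions, not a prescribed taxonomy. The point is that the boundaries
// are written down and explicit rather than implied.
type DataTier = "allowed" | "allowed_with_controls" | "escalate" | "prohibited";

interface DataBoundaryRule {
  category: string;    // business-language name people actually use
  tier: DataTier;      // what sanctioned tools may do with this data
  controls?: string[]; // required safeguards when the tier is conditional
}

const dataBoundaryModel: DataBoundaryRule[] = [
  { category: "Public marketing content", tier: "allowed" },
  {
    category: "Internal drafts and meeting notes",
    tier: "allowed_with_controls",
    controls: ["enterprise tenant only", "no personal accounts"],
  },
  {
    category: "Customer contracts",
    tier: "escalate",
    controls: ["legal review before use"],
  },
  { category: "Regulated personal data", tier: "prohibited" },
];
```

Even a short table like this removes most of the ambiguity that otherwise gets relitigated in every approval meeting.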
3) Replace Case-by-Case Approval with Policy Tiers
If every workflow needs committee review, you do not have governance. You have queue management.
Routine, low-risk use cases should be pre-authorized. Sensitive or externally consequential workflows should have tighter review and stronger evidence requirements.
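A rough sketch of what those tiers can look like once they are written down is below. The tier names, evidence requirements, and the small `reviewFor` helper are hypothetical, meant only to show the shape of pre-authorization versus escalated review, not a specific product feature.

```typescript
// Illustrative sketch: tier definitions are assumptions, chosen to show how
// pre-authorization removes routine cases from the review queue.
type ReviewPath = "pre_authorized" | "standard_review" | "enhanced_review";

interface PolicyTier {
  name: string;
  reviewPath: ReviewPath;
  evidenceRequired: string[]; // what must exist before or during use
}

const policyTiers: Record<string, PolicyTier> = {
  routine: {
    name: "Routine knowledge work",
    reviewPath: "pre_authorized",
    evidenceRequired: ["sanctioned tool", "allowed data tier"],
  },
  sensitive: {
    name: "Sensitive internal workflows",
    reviewPath: "standard_review",
    evidenceRequired: ["data classification check", "named owner"],
  },
  external: {
    name: "Externally consequential output",
    reviewPath: "enhanced_review",
    evidenceRequired: ["human sign-off", "audit trail"],
  },
};

// Hypothetical helper: route a use case to its review path by tier.
function reviewFor(tier: keyof typeof policyTiers): ReviewPath {
  return policyTiers[tier].reviewPath;
}
```

The design choice that matters is that the decision is made once, in policy, instead of being rediscovered in every committee meeting.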
4) Train People on Good Usage, Not Just Safe Usage
A sanctioned subscription without enablement produces weak outcomes and skepticism. People need examples, patterns, and task-level guidance that show where AI actually creates value.
5) Measure Delay as a Risk
Most organizations track model risk. Very few track delay risk.
They should.
If the company takes six months to approve common knowledge-work use cases while competitors compress cycle times immediately, that is not neutral. That is a strategic cost.
The Leadership Test
Mollick's observation is useful because it cuts through the theater.
If two companies in the same industry face the same constraints and only one is still frozen, then the issue is rarely that regulation made action impossible. The issue is that one leadership team decided to take responsibility for a controlled deployment model and the other did not.
That distinction matters because it reframes the executive job.
The job is not to eliminate all uncertainty before adoption. That standard was never available.
The job is to define where AI can be used safely now, where it cannot, what evidence is required, and how the organization will learn its way forward.
That is what real governance looks like. Not fear. Not slogans. Not another review committee.
Just accountable decisions, clear boundaries, and the willingness to move.
At this point, many companies are no longer being held back by the technology. They are being held back by leaders who still want certainty in a world that now rewards disciplined iteration.
That is the real bottleneck.
Where Gaia Fits
Gaia is relevant once a team decides to operationalize this posture instead of debating it in the abstract. The useful role of the platform is not "AI access" by itself, but structured execution: sanctioned tooling, explicit workflow boundaries, observable evidence, and faster learning loops that make governance real instead of rhetorical.
For readers who want to turn this into practice, the most relevant Gaia resources are the , the , and the . The related opinion post on is also worth reading alongside this one because the two arguments are really about the same operating failure from different angles.
About the author
Kostas Karolemeas
Product and Technology Lead of Gaia, two-time founder, and software product executive with more than three decades of experience building and scaling products across healthcare, architectural and mechanical engineering software, logistics and supply chain, financial services and banking, enterprise resource planning (ERP), and visual effects (VFX) for television.