Regulation Will Follow Capital, Not Code

AI regulation will be driven not by code or model architecture, but by capital, liability, and risk concentration. Here is why leaders misunderstand what regulation actually follows.

Viktorija Isic | Systems & Strategy | March 10, 2026


Introduction: Why the Regulation Debate Is Misframed

When leaders talk about AI regulation, the conversation usually starts in the wrong place.

They debate:

  • Model architecture

  • Training data

  • Explainability techniques

  • Technical safeguards

These discussions matter—but they are not what ultimately drives regulation.

Historically, regulation does not emerge because technology exists. It emerges when capital, risk, and harm concentrate faster than institutions can absorb them.

AI will be no different.

A Brief Lesson From Regulatory History

Consider how major regulatory regimes actually formed:

  • Banking regulation followed systemic financial collapse

  • Securities regulation followed market manipulation and capital loss

  • Product safety regulation followed consumer harm at scale

  • Data protection regulation followed monetization of personal data

In each case, lawmakers responded not to innovation itself, but to economic exposure and political pressure created by failure.

AI regulation will follow the same path—not toward code, but toward who controls capital, who bears risk, and who profits from deployment.

Why Code Is the Wrong Regulatory Target

Code changes too quickly to regulate directly.

Models iterate. Architectures evolve. Capabilities compound.

Capital, by contrast:

  • Is traceable

  • Is taxable

  • Is insurable

  • Leaves audit trails

McKinsey Global Institute notes that the most material risks from AI adoption increasingly appear as balance-sheet exposure, legal liability, and reputational damage—not technical malfunction alone (McKinsey Global Institute, 2023).

Regulators regulate what they can see, measure, and enforce. That is capital—not code.

AI Risk Is Becoming Financial Risk

AI systems increasingly make or influence decisions tied to:

  • Credit approval

  • Pricing

  • Hiring and termination

  • Compliance enforcement

  • Resource allocation

Risk migrates from technical domains into financial ones.

The Stanford AI Index documents growing deployment of AI in high-stakes economic contexts, where small system errors can scale into material financial consequences (Stanford HAI, 2024).

Once AI decisions affect revenue, access, or valuation, regulation is no longer optional—it is inevitable.

Liability Is the Bridge Between AI and Regulation

Regulation almost always follows liability.

When harm occurs, questions surface quickly:

  • Who approved this system?

  • Who benefited financially?

  • Who failed to intervene?

The OECD has emphasized that accountability for AI outcomes must be human and enforceable, especially when systems affect rights, livelihoods, or access to capital (OECD, 2019).

As lawsuits, insurance claims, and enforcement actions accumulate, regulators will follow the trail of financial responsibility—not the trail of algorithms.

Why Boards and CFOs Will Become Central

AI regulation will increasingly land at the level of:

  • Enterprise risk management

  • Financial disclosures

  • Internal controls

  • Board oversight

MIT Sloan Management Review notes that AI governance matures only when embedded into enterprise-wide risk and financial control frameworks, rather than siloed within innovation or IT teams (MIT Sloan Management Review, 2023).

This is why AI accountability is shifting toward:

  • Boards

  • CFOs

  • Chief Risk Officers

  • Audit committees

Not because they write code—but because they oversee capital.

What Leaders Should Be Preparing For Now

Organizations waiting for “clear regulation” are already behind.

Preparation requires:

  • Treating AI as enterprise risk, not innovation theater

  • Mapping AI systems to financial exposure

  • Assigning executive accountability

  • Integrating AI into disclosure and audit processes

Explainability will support investigations. Ethics frameworks will shape norms. But financial responsibility will determine enforcement.

The Strategic Mistake Leaders Keep Making

Many leaders assume regulation will:

  • Target model builders only

  • Focus on technical compliance

  • Exempt downstream users

This assumption is fragile.

History shows that downstream deployers—those who profit from use—are often regulated just as heavily as creators.

Capital attracts scrutiny.

Scale attracts enforcement.

Silence attracts suspicion.

Conclusion: Follow the Money

AI regulation will not be decided in engineering meetings.

It will be shaped by:

  • Losses

  • Lawsuits

  • Insurance markets

  • Financial disclosures

  • Political pressure

Leaders who understand this will design governance accordingly. Those who do not will find themselves reacting to regulation rather than shaping it.

When it comes to AI accountability, always ask one question:

Where does the capital flow?

Because regulation always follows it.

If you are responsible for AI strategy, risk, or governance and want to prepare for where regulation is actually heading, subscribe to the ViktorijaIsic.com newsletter for systems-level insight on AI, capital, and accountability.

Explore the Systems & Strategy and AI & Ethics sections for leadership frameworks designed for financial and regulatory reality.

References

  • McKinsey Global Institute. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company.

  • MIT Sloan Management Review. (2023). Governing AI responsibly: Practical frameworks for organizations. MIT Sloan Management Review.

  • Organisation for Economic Co-operation and Development. (2019). Artificial intelligence and accountability: Who is responsible when AI goes wrong? OECD Publishing. https://doi.org/10.1787/5e5c1d6c-en

  • Stanford Institute for Human-Centered Artificial Intelligence. (2024). AI index report 2024. Stanford University. https://aiindex.stanford.edu
