

The Governance Gap: Why AI Regulation Is Always Going to Lag Behind


Innovation evolves at machine speed, while governance moves at human speed. As AI adoption grows exponentially, regulation is lagging behind, which is fairly typical when it comes to technology. Worldwide, governments and other entities are scrambling to regulate AI, but fragmented and uneven approaches abound.

Part of the challenge is that there’s no such thing as apolitical tech design. There are a number of regulations and proposals, from the European Union’s AI Act to the U.S. regulatory sandboxes, each with its own philosophy. While AI governance inherently trails innovation, the real challenge is managing security and policy responsibly within that lag.

The nature of the gap: Innovation first, oversight later

Regulatory lag is an inevitable byproduct of technological progress. For instance, Henry Ford wasn’t developing the Model T with a primary focus on highway safety and road rules. Regulatory patterns historically follow innovation; recent examples include data privacy, blockchain and social media. AI’s rapid evolution outpaces policy formation and enforcement. In other words, the cart has been before the horse for a while.

Compounding the problem, policymakers often react to harm rather than anticipate risk, which creates cycles of reactive governance. The issue isn’t the lag itself but rather the lack of adaptive mechanisms to keep pace with emerging threat models, and the lack of will to compromise a competitive edge for the sake of safety. It’s a “race to the bottom” scenario: we’re eroding our own collective safety for localized competitive gains.

Global patchwork of AI governance represents fragmented philosophies

The major AI governance approaches in place around the world vary greatly.

In the EU, the AI Act introduced last year is very much ethics- and risk-based. AI uses are assessed by risk level, with some deemed to pose unacceptable risk and therefore prohibited. The U.S., by contrast, has taken more of a regulatory sandbox model that emphasizes innovation flexibility. Some might describe it as a carve-out for innovation, while critics may call it a blank check.

There’s also the G7’s Hiroshima AI Process, which signals global coordination intent but has seen limited follow-through; each G7 nation is still focused on domestic AI dominance.

In the U.S., the matter has largely been left up to the states, which all but guarantees a lack of effective regulation; the federal government sometimes defers to the states precisely because of how ineffective state-level oversight can be. States are creating new sandboxes to lure tech companies and investment, but it’s unlikely there will be any meaningful regulation at the state level, only exceptions granted.

The UK has been in a domestic and international struggle to establish itself as fiercely independent following Brexit. Given the deregulatory push and the government’s “Levelling Up” agenda, the introduction of regulatory sandboxes is no surprise. The government wants the UK to be a dominant AI superpower for both internal and external political advantage and stability.

The EU is focused more on consumer safety, but also on the strength of its shared market. This makes sense, given the EU’s history with patchwork regulation: shared compliance, norms and cross-border commerce are key to making the EU what it is. It still embraces regulatory sandboxes, but requires that each member state have one operational by a common deadline.

These are just a few such regulations, but arguably the most prominent. The key point is that these disjointed frameworks lack shared definitions, enforcement mechanisms and cross-border interoperability, leaving gaps for attackers to exploit.

The political nature of protocols

No AI regulation can ever be truly neutral; every design choice, guardrail and regulation reflects underlying government or corporate interests. AI regulation has become a geopolitical tool; nations use it to secure economic or strategic advantage. Chip export controls are a current example; they serve as indirect AI governance.

So far, the only regulation introduced to real effect has been designed to intentionally hinder a market. The global race for AI supremacy keeps governance a mechanism for competition rather than collaborative safety.

Security without borders, but governance with them

The thorniest problem here is that AI-enabled threats transcend borders while regulation remains confined within them. Today’s rapidly evolving threats include both attacks on AI systems and attacks that use AI systems. These threats cross jurisdictions, but regulation remains siloed; security gets sequestered in one corner while threats traverse the whole internet.

We’re already starting to see legitimate AI tools abused by global threat actors exploiting weak safety controls. For example, malicious activity has been observed with AI site-creation tools that function more like site cloners and can easily be abused to spin up phishing infrastructure. These tools have been used to impersonate login pages for everything from popular social media services to national police agencies.

Until governance frameworks reflect AI’s borderless structure, defenders will remain constrained by fragmented laws.

From reactive regulation to proactive defense

Regulatory lag is inevitable, but stagnation isn’t. We need adaptive, predictive governance with frameworks that evolve with the technology; it’s a matter of moving from reactive regulation to proactive defense. Ideally, this would look like:

  • Development of shared international standards for AI risk classification.
  • Broadened participation in standards-setting beyond major governments and corporations. Internet governance has sought (with mixed success) to use a multistakeholder model over a multilateral one. Though imperfect, that model has gone a long way toward making the internet a tool for everyone and minimizing censorship and politically motivated shutdowns.
  • Fostering diversity of thought in governance.
  • A mechanism for incident reporting and transparency. A lack of regulations will often also mean a lack of reporting requirements. It’s unlikely that there will be a requirement to inform the public of damage from mistakes or design choices within regulatory sandboxes in the near future.

While the governance gap will never disappear, collaborative, transparent and inclusive frameworks can prevent it from becoming a permanent vulnerability in global security.

Ginny Spicer is a Cyber Threat Analyst at Netcraft, where she tracks emerging threat actor tactics and campaigns. Her background is in network analysis and nation-state threat research. She’s the 2026 president of the HTCIA’s Silicon Valley chapter, a board member for the Deep Packet Inspection Consortium, and one of the Internet Society’s 2025 Youth Ambassadors.