

Most U.S. Tech Executives Want AI Regulation, But Who Should Lead it?


Just as the 1990s had the commercialization of the Internet, the 2000s the smartphone, and the 2010s the rise of social media, the 2020s belong to AI. Like those earlier technological breakthroughs, AI is also seemingly expanding faster than regulators and lawmakers can keep pace with. 

The dotcom boom began in 1993 with the launch of the Mosaic browser, yet it was not until 1996 that the U.S. Congress passed the Telecommunications Act, the first law to explicitly address the Internet. Similarly, Apple unveiled the iPhone in 2007, but legislators did not pass the 21st Century Communications and Video Accessibility Act, which required smartphones to include accessibility features, until 2010.

And although the “social media decade” of the 2010s saw the establishment and expansion of platforms like Facebook, WhatsApp, YouTube, and Instagram, it was not until 2018 that the FOSTA-SESTA Act was approved, making platforms liable for knowingly facilitating sex trafficking. Will history repeat itself with AI?

While AI went mainstream after the industry-shaking release of OpenAI’s ChatGPT in 2022, the tool’s home country has yet to pass federal legislation to regulate it. The U.S. has instead shifted its stance, from restrictive to deregulatory, as administrations have changed.

AI in the U.S.: A Bipartisan Issue

Former President Joe Biden’s 2023 executive order on Safe, Secure, and Trustworthy Artificial Intelligence required federal agencies to take steps toward AI safety, civil rights, equity, and transparency. “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the order read.

In January 2025, however, current President Donald Trump signed the executive order Removing Barriers to American Leadership in Artificial Intelligence, revoking existing AI policies and directives “that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”

The current administration, then, seeks to accelerate AI innovation in the U.S. via deregulation, counteracting potential risks through investment in research and development. The question remains, however, whether it is the government that must address these concerns.

Today, there are near-universal anxieties about AI risks, including ethics, disruption, and trust. A 2024 study on multistakeholder concerns arising from AI, in fact, found that the most pressing concerns include bias, misuse, unexpected machine action, inequality, social anxiety, and changes in supply chains, to name a few.

Most U.S. technology executives agree, but highlight the paradox. A September 2025 report from Solvd, an AI advisory and digital engineering firm, concluded that although 97% of responding Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) are somewhat concerned about the unethical use of AI, 87% believe that too much AI regulation could limit innovation and become a competitive disadvantage. 

These leaders’ top concerns echo those identified by the 2024 study: AI models becoming so powerful that they cannot be controlled, malicious actors taking advantage of AI vulnerabilities, AI models gaining too much access to company data, and the disinformation or bias the technology could produce.

Among the 500 American CIOs and CTOs at companies making over $500M in ARR that Solvd surveyed, 61% preferred public regulation, while 36% favored industry-led regulation. 

Industry-led regulation could take the form of a mixed model, a departure from the public-private dichotomy. The Biden administration, in fact, negotiated a deal with top tech executives in 2023 that detailed voluntary commitments regarding AI guardrails. Such pledges included ensuring that products were safe before their public launch, building systems that put security first, and earning the public’s trust through transparency and the disclosure of AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.

A Global Conversation

Despite executives’ split preference between public and industry-led regulatory frameworks, the fact remains that 82% of them are primarily responsible for leading internal AI governance, according to Solvd. Only 8% of respondents pointed to senior leadership for internal AI regulation, and 5% said their companies had a dedicated AI ethics board or compliance committee.

Enterprises, and their employees, are thus caught between waiting for a public policy that may or may not come, and acting mostly independently of senior leadership and teams trained in AI ethics.

Different contexts have found distinct solutions to this complication. The European Union (EU), for one, sharply contrasts with the Trump administration’s deregulatory approach via the EU AI Act, the world’s first comprehensive AI law. Passed in March 2024, the Act banned certain AI applications, established a risk-based evaluation framework, and mandated transparency and communication with users.

Others are following the EU’s example. Chile, South America’s AI leader, recently proposed a bill to regulate AI, inspired by the European AI Act. China, on the other hand, largely sidesteps these hurdles, as its political system favors state-sanctioned regulation, subsidies and partnerships with private AI companies. The country’s challenges stem less from a regulatory dichotomy, and more from the risks of censorship and state meddling in innovation.  

But the U.S., as the global AI leader, is bound to set the tone for the future of regulatory frameworks. 

Final Thoughts

AI implementation and innovation have become a competitive advantage in nearly every industry across the globe. But without regulatory clarity, and amid shifting politics, tech executives must set their own guardrails and be responsible for protecting consumers, earning their trust amidst widespread fear of AI, and innovating, all at once.

Despite cross-sector worries that strict regulations might hinder innovation, the opposite might just be the case. According to RegulatingAI, a U.S.-based non-profit dedicated to exploring the intersection of AI and regulation, uncertainty about AI policy holds back the technology’s adoption, which delays the realization of economic benefits, discourages investment, and hinders companies’ ability to scale.

“Clarifying how AI systems are defined within regulatory contexts is crucial, as ambiguity in definitions adds to compliance challenges,” the non-profit noted.  

Similarly, Solvd stressed the importance of companies seizing the present regulatory confusion. “Now is the perfect moment for companies to establish effective internal governance before external regulators step in and potentially impose less flexible solutions,” the company’s report concluded.  

In this context, AI innovators in the U.S. face unprecedented challenges: reputational risks for not adopting oversight policies, and the possibility of compliance gaps once regulation does come into force.

But opportunity shines through, too. With the hindsight of the Internet, smartphones, and social media, now is precisely the time to look ahead and build for an ecosystem that balances innovation with accountability, fosters trust, and prepares for the inevitability of regulation.

Salomé is a Medellín-born journalist and Senior Reporter at Espacio Media Incubator. With a background in History and Politics, Salomé’s work emphasizes the social relevance of emerging technologies. She has been featured in Al Jazeera, Latin America Reports, and The Sociable, among others.