
Anderson's Angle

Will AI Require the Same Kind of Socialized Insurance as Nuclear Energy?

AI-generated image: a robot on a floundering ship reaches for a life-preserver that is not there, under a placard reading 'INSURANCE', as the sea rises over the boat. GPT-image-1 and Firefly V3.

The US has often intervened in major new technology areas when insurers got spooked, and this seems likely to happen again with AI; but are the risks different this time?

 

Feature The current US administration has repeatedly shown its commitment to ensuring that the laissez-faire freedoms which China enjoys in developing AI systems are mirrored in the United States. Since America is currently taking a strong executive stance, and wielding its influence with a heavy hand, recent events suggest that its AI policies may be echoed in the future legislation of countries that depend on good relations with the States.

Therefore it will be interesting to see how the US responds to the much-reported request to Congress from major insurers to be allowed to offer policies that exclude coverage of liabilities relating to AI systems such as chatbots and agentic AI.

According to the FT report linked above, the insurance groups AIG, Great American and WR Berkley are among a number of insurers seeking to have such exclusions allowed.

The FT notes that WR Berkley has asked for an exclusion that would prohibit claims involving ‘any actual or alleged use’ of AI, or any service or product that ‘incorporates’ AI.

You Can’t Sue the Dog

This was a predictable development: at the same time as the US administration seeks to remove red tape from American AI development culture, so that it may compete on a level playing field with China, the fact that prominent AI systems are rarely trained on rights-cleared material is provoking a growing raft of lawsuits brought by influential players such as Disney and Universal.

The US’s 2025 AI action plan (linked above) says very little about copyright holders; and the country’s apparent inclination to bury the issue, China-style, seems reflected in its determination to force federal laissez-faire upon dissenting states.

However, the concerns outlined in the FT report may extend beyond copyright issues, in the case of AI systems that have agentic control of infrastructure, or other fundamental systems, such as stock market mechanisms.

The US judiciary has broadly determined that an AI system cannot itself be held accountable for its errors: rather, its owners are liable for its misadventures, much as a dog owner is accountable for any injuries inflicted by their dog. That’s a grim prospect for insurance companies, which – among other issues – are concerned over the capacity of generative AI to hallucinate in potentially damaging ways.

Assured Construction

However, this predictable groundswell of complaint from the insurance sector has considerable historical precedent in areas such as the nuclear industry, space and aviation, and vaccine development, among others – circumstances where the US determined that government assurances and insurance coverage were essential to important new technologies, in order not to cede progress to countries (such as the former Soviet Union or France) where state-backed insurance of infrastructure was far more common.

Nuclear

For instance, in 1957 Congress capped nuclear industry liability with the Price-Anderson Act, as it had become evident that without a government backstop, private insurers would never support atomic energy.

The law limited how much utilities and reactor makers could be sued for, and set up a payout mechanism to cover accidents. It has since been renewed repeatedly, most recently with an extension through 2065 in this year’s spending bill.

Aerospace

Additionally, the US government protects commercial space launch companies from catastrophic liability by covering damages exceeding what private insurers will underwrite. Under the Commercial Space Launch Act, launch providers are obliged to carry a fixed amount of insurance, with federal indemnification kicking in above that, currently capped at $2.7 billion.

This secondary safety net, never yet invoked, allows companies such as SpaceX and Blue Origin to develop space programs without being hobbled by the threat of uninsurable failure.
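
To make the layered structure concrete, the following minimal Python sketch allocates a single loss across the three tiers. The $500 million insurance requirement is a hypothetical placeholder (the actual figure is set per launch by the FAA), while the $2.7 billion federal cap is the figure cited above.

def allocate_launch_loss(loss: float,
                         required_insurance: float = 0.5e9,  # hypothetical per-launch requirement
                         federal_cap: float = 2.7e9) -> dict:
    """Split a third-party launch loss across CSLA-style liability tiers:
    mandated private insurance first, then federal indemnification up to
    the statutory cap, with any remainder reverting to the provider."""
    insurer = min(loss, required_insurance)
    federal = min(max(loss - required_insurance, 0.0), federal_cap)
    provider = max(loss - required_insurance - federal_cap, 0.0)
    return {'private_insurance': insurer, 'federal': federal, 'provider': provider}

# A hypothetical $4bn accident: private insurance pays $0.5bn, federal
# indemnification covers $2.7bn, and the final $0.8bn reverts to the provider.
print(allocate_launch_loss(4e9))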

Terrorism

Unsurprisingly, after the events of 9/11, the insurance industry, which had previously covered such risks under the terms of general policies, no longer wished to cover losses due to terrorism and war. As in the earlier cases, the US federal government responded by extending coverage as a federal obligation in the short to medium term.

The Terrorism Risk Insurance Act (TRIA) of 2002 created a federal insurance backstop for losses and claims due to terrorism, covering a large share of terrorism losses above stated deductibles – an act that has been renewed multiple times, including under the Trump administration.

Vaccine Development

Just as vaccine development and diffusion began to have a widespread effect on global health in the 1970s and 1980s, a plague of lawsuits notably increased liability costs for manufacturers.

To avoid a public health crisis, Congress passed the National Childhood Vaccine Injury Act, diverting injury claims to a dedicated ‘vaccine court’ and shielding manufacturers from the majority of liability so long as safety standards were met – allowing innovation to continue while compensating patients from a government pool.

The approach was later upheld by the Supreme Court, and was extended during the COVID-19 pandemic under the PREP Act, which shielded manufacturers from liability for approved countermeasures.

Is AI a Different Kind of Case?

Thus, Congress has repeatedly stepped in to break innovation bottlenecks when insurers balked at underwriting public-risk sectors.

However, though it is difficult to contend that AI’s risks exceed those of nuclear systems, the insurance groups argue that generative AI introduces systemic risks, where adverse consequences are potentially ‘native’ to the normal functioning of a system, rather than the result of breach, human error, attack, or other more familiar kinds of happenstance or misadventure.

AI pioneer and Turing Award winner Yoshua Bengio stated in early November that artificial intelligence companies should be legally compelled to have liability insurance to cover ‘existential risks’.

However, history suggests that forcing AI companies to insure themselves, without government aid, is not the likely path ahead. Though OpenAI CEO Sam Altman recently backtracked on a suggestion that AI should receive bank-style government bailouts as necessary, the record of the current US administration indicates that it is not going to leave AI’s fate to the open market alone.

Possible Measures

One possible way forward is a federal liability cap – a revisiting of the 1957 Price-Anderson Act, as well as of the vaccine act, in the form of an ‘AI indemnity act’ limiting the liability of companies for certain AI-related harms.

Together with a federal compensation fund for AI-related injuries, similar to the earlier vaccine injury fund, this approach could protect companies from ‘worst case’ lawsuit scenarios, much as the vaccine and nuclear industries were shielded in prior decades.

Alternatively, the TRIA model could be adapted for the purpose, in the form of a government AI insurance backstop. This would oblige insurers to offer AI liability coverage, while the federal government would agree to pay, for example, 80-90% of any losses above a certain threshold.
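
As a rough illustration of the arithmetic (the $1 billion threshold, the 85% federal share, and the loss figure below are all purely hypothetical values), such a backstop split might be sketched as follows:

def backstop_split(loss: float,
                   threshold: float = 1e9,      # hypothetical trigger point
                   federal_share: float = 0.85  # hypothetical government share
                   ) -> dict:
    """TRIA-style split: insurers carry losses up to the threshold;
    above it, the government pays a fixed share of the excess."""
    excess = max(loss - threshold, 0.0)
    federal = excess * federal_share
    return {'insurer': loss - federal, 'federal': federal}

# A hypothetical $5bn AI-related loss: insurers cover the first $1bn plus
# 15% of the $4bn excess ($1.6bn in total); the government pays $3.4bn.
print(backstop_split(5e9))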

Perhaps the least attractive option – partly because it might inspire criticism of ‘socialist’ policy in certain branches of government and among the electorate – would be direct federal insurance or indemnification, wherein the government is the direct insurer.

This level of state involvement is usually reserved for limited periods in the evolution of critical industries (such as the nuclear industry), or for wartime management scenarios.

Based on recent behavior, it seems likely in any case that the US administration will push for regulatory overrides at the state level, to prevent individual states from setting laws that could create unique per-state insurance scenarios, undermining a broader federal initiative.

Conclusion

Those who object to the possibility of AI obtaining the same ‘bailout’ status as banks are not likely to embrace heavily government-backed solutions to the insurance quandaries around AI.

However, it’s clear that the current US administration views AI as ‘essential infrastructure’, despite its ever-growing tendency to err, or otherwise fall short of expectations.

One could argue that extensive state involvement in insuring AI is tantamount to a ‘pre-bailout’ – a hard sell in a period where market excitement and investor frenzy are shadowed by the growing fear of a bubble-burst, and by a public that is simultaneously fearful of and enraptured by generative AI.

 

First published Monday, November 24, 2025

Writer on machine learning, domain specialist in human image synthesis. Former head of research content at Metaphysic.ai.
Personal site: martinanderson.ai
Contact: [email protected]
Twitter: @manders_ai