
Thought Leaders

The Internet Will Keep Breaking in 2026 and AI Is Part of the Reason


If 2025 felt like the year the internet kept breaking, 2026 is shaping up to be more of the same. Outages, incidents, and production failures are no longer rare events that surprise engineering teams. They are becoming a steady background condition of modern software development.

Data from outage trackers like IsDown.app shows incidents climbing year over year since 2022, with no meaningful reversal, and independent surveys back this up. A global poll of more than 1,000 CIOs, CISOs, and network engineers found that 84% of organizations reported rising outages, with more than half seeing increases of 10–24% over just two years.

ThousandEyes observed similar volatility, with sharp month-to-month swings that point to sustained upward pressure rather than isolated failures. The uncomfortable takeaway is that the systems we rely on every day are becoming more fragile, not more resilient, despite years of investment in cloud infrastructure, observability, and automation.

When major platforms go down, the blast radius is immediate. Payments fail, consumer apps freeze, internal tools grind to a halt, and entire supply chains feel the impact, with economic loss estimates routinely reaching into the billions. Amazon, for instance, has attributed a rise in incidents, including a nearly six-hour outage of its website and shopping app this month, to changes assisted by generative AI, and has scheduled engineering meetings to dig into the recent surge in outages.

After every large outage, the same conversations repeat themselves around redundancy, multi-cloud strategies, and vendor concentration risk. Those discussions matter, but they miss the bigger picture.

If infrastructure providers are not getting worse at what they do, and tooling continues to mature, why are incidents still increasing?

AI changed how software ships

One of the biggest shifts happening alongside this rise in outages is the spread of AI-assisted software development. AI coding tools are no longer experimental. They are embedded into daily workflows, whether in IDEs or the CLI, making it easier than ever to generate code with AI.

Across the industry, pull requests per developer have increased materially, with some analyses showing roughly a 20% year-over-year jump as AI accelerates output. At the same time, incidents per pull request have risen even faster, increasing by more than 23%.

That correlation is not proof of causation, but it is difficult to ignore. AI doesn't just make it faster to write code; it changes the shape of risk. By now, most teams have encountered a steady stream of bugs in AI-assisted code that experienced engineers are confident they would not have introduced on their own.

These aren’t dramatic syntax errors or obviously broken changes. They are subtle logic mistakes, misconfigurations, missing guardrails, and edge-case failures that look reasonable at a glance.

AI-generated code often compiles cleanly, passes basic tests, and reads plausibly correct. The problem is not that AI invents new kinds of bugs. It’s that it produces familiar bugs more frequently and at a scale that overwhelms existing review and QA processes.
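To make this concrete, here is a hypothetical sketch (not drawn from any specific incident) of the kind of change that compiles cleanly, passes basic tests, and reads plausibly correct, yet fails on an edge case a quick review can easily miss:

```python
def normalize_weights(weights):
    """Scale a list of weights so they sum to 1.0."""
    total = sum(weights)
    # Subtle flaw: an all-zero input raises ZeroDivisionError at runtime,
    # yet every "happy path" test with non-trivial weights passes.
    return [w / total for w in weights]


def normalize_weights_safe(weights):
    """Same logic with an explicit guardrail for the degenerate case."""
    total = sum(weights)
    if total == 0:
        # Handle the degenerate input explicitly instead of crashing.
        return [0.0] * len(weights)
    return [w / total for w in weights]
```

The first version looks reasonable at a glance; only the all-zero input, the case least likely to appear in a hurried review or a basic test suite, exposes the missing guardrail.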

What the data shows when AI writes more code

In our State of AI vs. Human Code Generation Report, we recently analyzed hundreds of open-source pull requests to put numbers behind this intuition. When changes co-authored by AI were compared to human-only pull requests and normalized for size, AI-assisted PRs contained roughly 1.7x more issues overall.

More concerning, they also showed 1.4–1.7x more critical and major issues. Logic and correctness problems, including flawed control flow, incorrect dependency usage, and configuration errors, were about 75% more common. Error-handling gaps such as missing null checks, incomplete exception paths, and absent guardrails appeared nearly twice as often.

Security issues were amplified as well, with some categories occurring at rates up to 2.7x higher, particularly around credential handling and insecure object references. Concurrency and dependency correctness issues also increased by roughly 2x.
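As an illustration (a hypothetical sketch, not an example from the analyzed pull requests), an insecure direct object reference often looks like perfectly reasonable code; the only missing piece is an ownership check:

```python
# Toy in-memory store standing in for a database.
ORDERS = {
    101: {"owner": "alice", "total": 42},
    102: {"owner": "bob", "total": 99},
}


def get_order_insecure(order_id, requesting_user):
    # Flaw: the handler trusts the client-supplied ID, so any
    # authenticated user can read any order by guessing IDs.
    return ORDERS.get(order_id)


def get_order_secure(order_id, requesting_user):
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != requesting_user:
        # Deny access rather than leak another user's data.
        return None
    return order
```

Both versions run, both return the right answer for the owner's own orders, and only an adversarial test distinguishes them, which is exactly why this class of defect slips through review.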

Humans make these same mistakes, but when AI is involved, these defects occur more frequently, across a larger codebase, and at a speed that outpaces traditional code review. These are the exact types of defects that are likely to slip past quick review and later manifest as security incidents or outages in production environments.

What decides whether 2026 looks different

From a security perspective, this trend is difficult to ignore. Logic flaws, unsafe defaults, and configuration errors expand the attack surface even when no single vulnerability looks catastrophic in isolation. Error-handling gaps and dependency mistakes increase the likelihood that failures cascade rather than degrade safely.

Strong isolation, least-privilege execution, short-lived credentials, and encryption can limit the blast radius if something goes wrong, but they cannot compensate for defects introduced earlier in the development lifecycle. Security and reliability are no longer just infrastructure concerns; they are direct consequences of how software is built, reviewed, and tested.

The internet will keep breaking in 2026 if this imbalance remains. That is not an argument against AI, as AI is already here and it is not going away. The teams that will fare best are not the ones that avoid AI, but the ones that adapt their guardrails to match it.

That means resourcing review and QA teams appropriately for higher output, shifting testing and validation earlier in the development loop, being explicit about which AI-generated issues deserve deeper scrutiny, and treating AI-assisted code as higher-variance input rather than trusted output by default.

The lesson is simple: you can't automate your way out of accountability. As AI writes more code, teams need the time, tools, and headcount to review more code, not less.

Review is now the bottleneck

AI dramatically increased code generation capacity. It did not automatically increase review capacity. That gap creates risk. The next phase of AI adoption will not be defined by how fast code gets generated. It will be defined by how confidently teams can ship it.

That means:

  • Resourcing review and QA for higher output, not lower.
  • Moving validation earlier in the development loop.
  • Increasing signal in pull requests so reviewers focus on what matters.
  • Treating AI-assisted code as deserving deeper scrutiny, not lighter oversight.
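One lightweight way to act on that last point, sketched here under the assumption that a team records AI assistance via commit trailers (the convention and marker names are hypothetical, not a standard), is to route AI-co-authored changes to deeper review automatically:

```python
def needs_deeper_review(commit_messages):
    """Return True if any commit in the change declares an AI co-author.

    Assumes the team tags AI assistance with a 'Co-authored-by:' trailer;
    the marker strings below are illustrative placeholders.
    """
    markers = ("copilot", "cursor", "claude")
    for msg in commit_messages:
        for line in msg.lower().splitlines():
            if line.startswith("co-authored-by:") and any(
                marker in line for marker in markers
            ):
                return True
    return False
```

A check like this could run in CI to apply a label or require an extra approver, so the heavier scrutiny happens by policy rather than by reviewer memory.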

The internet does not have to keep breaking. AI is not the root problem; unreviewed AI-generated code is. If AI is going to write a growing share of production software, something equally rigorous needs to review it before it ships.

That shift is exactly why AI code reviews are becoming foundational infrastructure, not optional tooling. Platforms such as CodeRabbit embed context-aware AI reviews directly into your Git workflow, helping teams catch logic errors, security gaps, and edge cases before they turn into incidents.

Because if code generation scales, review has to scale with it.

Otherwise, 2026 will look exactly like 2025 – just faster.

David Loker is the Vice President of AI at CodeRabbit, where he leads the development of agentic AI systems that transform code reviews and developer workflows. An entrepreneur and award-winning researcher, he has been building large-scale machine learning and AI systems since 2007, has published more than a dozen papers at leading conferences including NeurIPS, ICML, and AAAI, and was an early pioneer in generative AI.