

The API Explosion Is Real – And Vibe Coding Is Lighting the Fuse

The AI boom has brought us many things: productivity boosts, new creative workflows, and more recently, an avalanche of APIs. If it feels like the number of internal and external APIs at your company has doubled overnight, you’re not imagining it. We’re living through an API explosion, and generative AI is a primary accelerant.

Just a few years ago, spinning up a new API endpoint in a mature codebase was a high-friction endeavor. You needed to navigate ownership of multiple code domains, wrangle sign-offs from cranky architects, and conduct reviews that sometimes dragged on for weeks or months. The friction was painful, but it ensured that every new API carried with it a level of scrutiny and institutional memory.

Now? AI-powered development tools have torched that bottleneck.

GenAI agents can consume massive amounts of contextual data and generate code changes across hundreds of files in seconds. That has democratized the ability to create APIs – not just for engineers, but even for non-technical roles (shock horror) like product managers and support teams who may now feel empowered to ship experiments straight to production.

It’s a massive shift in who holds power in the software development process. And it’s not necessarily a bad thing, especially in a business environment that prioritizes speed and iteration. But the result is a wildfire of rapidly deployed APIs: many launched as “experimental” or hidden behind feature flags, but quickly becoming essential infrastructure as business needs evolve. What starts as a quick prototype becomes a key integration. And now it’s too late to unwind.

The Rise of “Vibe Coding”

This new breed of AI-generated APIs often arrives with little in the way of architecture, documentation, or testing. We call this phenomenon “vibe coding” – writing software based on rough intuition, loose prompting, and a general sense of what “should work,” rather than a deep understanding of systems or design patterns.

Unfortunately, APIs created this way tend to follow inconsistent conventions, lack robust validation, and often ignore established internal standards. Worse, they can introduce serious security or regulatory risks, especially when connected to sensitive data or external-facing endpoints. AI doesn’t know your company’s governance model – or your compliance requirements. Unless explicitly told, it won’t write with them in mind.

And the problems compound quickly. Developers are reluctant to write tests for code they didn’t author, let alone code generated by machines, so AI is increasingly used to generate the tests too. But when broken code is checked against AI-generated validations, the tests merely confirm the flawed behavior. The result? A recursive feedback loop of low-quality code tested and “validated” by equally shaky scaffolding.

Patchwork APIs and the Ownership Crisis

All of this leads to a sprawling, fragmented API layer within most organizations. APIs now span overlapping domains, perform similar functions in slightly different ways, and often lack clear ownership. Many were written without a deep understanding of underlying data models, service boundaries, or team charters. Unsurprisingly, maintenance becomes a nightmare. Who owns this endpoint? Who can modify it? Who even knows it exists?

AI tools prioritize utility and speed. Left unchecked, they’ll create the shortest path to delivery, whether or not it aligns with your architectural vision. Over time, the weight of this technical debt can grind progress to a halt.

Some Practical Steps to Take

1. Visibility

The answer isn’t to slow everything down or forbid AI. That’s not realistic, and it would leave enormous value on the table. Instead, we must evolve how we manage software in the age of generative development.

The foundational first step is visibility. You can’t govern what you can’t see. Organizations need continuous API discovery, not static documentation that’s outdated the minute it’s published.

Tools that monitor APIs—at runtime and in code—are becoming essential. Once you can map your real-world API landscape, you can assess risk, identify duplication, and begin building reliable governance on top.
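To make the code side of that concrete, here is a minimal sketch of static API discovery: a script that walks a repository and inventories endpoints declared with common Flask/FastAPI-style route decorators. The regex, file layout, and output format are assumptions for illustration, not any particular discovery product; real tooling would also watch runtime traffic.

```python
# Illustrative sketch: inventory HTTP endpoints declared with
# Flask/FastAPI-style route decorators across a repository.
import json
import os
import re
import sys

# Matches decorators like @app.get("/users") or @router.route("/orders")
ROUTE_RE = re.compile(
    r'@\w+\.(get|post|put|patch|delete|route)\(\s*["\']([^"\']+)["\']'
)

def discover(repo_root: str) -> list[dict]:
    """Walk the repo and record every route decorator we can see in code."""
    endpoints = []
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    for method, route in ROUTE_RE.findall(line):
                        endpoints.append({
                            "method": method.upper(),
                            "route": route,
                            "file": path,
                            "line": lineno,
                        })
    return endpoints

if __name__ == "__main__":
    inventory = discover(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(json.dumps(inventory, indent=2))
```

Even a crude inventory like this, regenerated on every merge, beats a hand-maintained API spreadsheet: it reflects what is actually in the code today, not what someone documented last quarter.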

Ironically, AI itself can help with this process. Using prompted AI models to analyze and audit API maps helps uncover anomalies, risky exposure, and consolidation opportunities. This is AI assisting not in building more, but in cleaning up what we already have.
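As a rough illustration of that idea, the inventory produced by the sketch above (saved, say, to a hypothetical api_inventory.json) could be handed to a model for a first-pass audit. This assumes the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name and prompt are placeholders, and any capable model would do.

```python
# Illustrative sketch: ask an LLM to audit a discovered API inventory.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("api_inventory.json", encoding="utf-8") as f:
    inventory = json.load(f)

prompt = (
    "You are auditing an internal API inventory. For the endpoints below, "
    "flag likely duplicates, endpoints that look externally exposed, and "
    "routes that appear to touch sensitive resources (users, payments, auth).\n\n"
    + json.dumps(inventory, indent=2)
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```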

2. Organization-Wide Standards for Prompt Engineering and Tooling

Controlling both the inputs to and the outputs of AI tools goes a long way toward keeping the generated code in check. Simple steps, such as agreeing on which AI-powered IDEs and models are approved for use inside the organization, reduce variation. This also makes rolling out new models easier and makes it more likely that prompts will be reproducible across engineers’ workstations.

More powerful still is aligning on the specific rules.md-style files you require engineers to provide as context to their agents. The more complex the codebase, the more it helps for everyone to work from the same set of rules, giving the AI agent the context it needs to generate code that fits the existing structures. A sketch of such a file follows.
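For illustration only, a shared rules file might look something like this; the filename, sections, and specific rules are hypothetical, not a prescribed format.

```markdown
# rules.md – shared context for AI coding agents (illustrative example)

## API conventions
- All new endpoints live under /api/v2 and are registered in the service catalog.
- Request and response bodies must be validated against a shared schema.

## Security and compliance
- Never log credentials, tokens, or personally identifiable information.
- Endpoints touching customer data must use the standard auth middleware.

## Testing
- Every new endpoint ships with at least one negative-path test written
  or reviewed by a human.
```

Checking a file like this into the repository, rather than leaving it on individual workstations, is what turns it from personal preference into an organizational standard.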

We’re not going to put the generative genie back in the bottle. But we can guide it, contain the blast radius, and use it to fuel responsible innovation. That work starts not with code, but with clarity.

Bio: Benji Kalman, VP of Engineering and co-founder of Root, has over a decade of experience researching and building in cybersecurity and DevTools. An alumnus of Unit 8200 who specialized in cyber operations, Benji was an early joiner of Snyk, where over five years he served as Director of Snyk’s Security R&D group, responsible for the curation and creation of the company’s security knowledge bases.