The API Explosion Is Real, and Vibe Coding Is Lighting the Fuse

Just a few years ago, spinning up a new API endpoint in a mature codebase was a high-friction endeavor. You needed to navigate ownership of multiple code domains, wrangle sign-offs from cranky architects, and conduct reviews that sometimes dragged on for weeks or months. The friction was painful, but it ensured that every new API carried with it a level of scrutiny and institutional memory.
Now? AI-powered development tools have torched that bottleneck.
GenAI agents can consume massive amounts of contextual data and generate code changes across hundreds of files in seconds. That has democratized the ability to create APIs, not just for engineers, but even for non-technical roles (shock horror) like product managers and support teams who may now feel empowered to ship experiments straight to production.
It's a massive shift in who holds power in the software development process. And it's not necessarily a bad thing, especially in a business environment that prioritizes speed and iteration. But the result is a wildfire of rapidly deployed APIs: many launched as "experimental" or hidden behind feature flags, but quickly becoming essential infrastructure as business needs evolve. What starts as a quick prototype becomes a key integration. And by then it's too late to unwind.
The Rise of "Vibe Coding"
This new breed of AI-generated APIs often arrives with little in the way of architecture, documentation, or testing. We call this phenomenon "vibe coding": writing software based on rough intuition, loose prompting, and a general sense of what "should work," rather than a deep understanding of systems or design patterns.
Unfortunately, APIs created this way tend to follow inconsistent conventions, lack robust validation, and often ignore established internal standards. Worse, they can introduce serious security or regulatory risks, especially when connected to sensitive data or external-facing endpoints. AI doesn't know your company's governance model, or your compliance requirements. Unless explicitly told, it won't write with them in mind.
And the problems compound quickly. Developers are reluctant to write tests for code they didn't author, let alone code generated by machines, so AI increasingly picks up that slack too. But when broken code is tested with AI-generated validations, the tests merely confirm the flawed behavior. The result? A recursive feedback loop of low-quality code tested and "validated" by equally shaky scaffolding.
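That feedback loop can be seen in a contrived sketch (the function and its bug are invented for illustration): an agent asked to "write tests" for existing code will often derive expected values from the code's current behavior, so the defect becomes the spec.

```python
def apply_discount(price: float, percent: float) -> float:
    # Intent: subtract `percent` percent from `price`.
    # Bug: the percentage is never divided by 100.
    return price - price * percent  # should be price * (1 - percent / 100)


# An AI-generated test derived from running the code, not from the intent.
# It asserts the buggy output, so the suite goes green and the bug is certified.
def test_apply_discount():
    assert apply_discount(100.0, 10) == -900.0  # 10% off 100 should be 90.0


test_apply_discount()  # passes; the flawed behavior is now "validated"
```

A human reviewer would catch the absurd expected value at a glance, which is exactly the scrutiny the loop removes.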
Patchwork APIs and the Ownership Crisis
All of this leads to a sprawling, fragmented API layer within most organizations. APIs now span overlapping domains, perform similar functions in slightly different ways, and often lack clear ownership. Many were written without a deep understanding of underlying data models, service boundaries, or team charters. Unsurprisingly, maintenance becomes a nightmare. Who owns this endpoint? Who can modify it? Who even knows it exists?
AI tools prioritize utility and speed. Left unchecked, they'll create the shortest path to delivery, whether or not it aligns with your architectural vision. Over time, the weight of this technical debt can grind progress to a halt.
Practical Steps to Take
1. Visibility
The answer isn't to slow everything down or forbid AI. That's not realistic, and it would leave enormous value on the table. Instead, we must evolve how we manage software in the age of generative development.
The foundational first step is visibility. You can't govern what you can't see. Organizations need continuous API discovery, not static documentation that's outdated the minute it's published.
Tools that monitor APIs, at runtime and in code, are becoming essential. Once you can map your real-world API landscape, you can assess risk, identify duplication, and begin building reliable governance on top.
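As a minimal sketch of the code-side half of that discovery, assuming Flask-style `@app.route` declarations (real tools support many frameworks and pair this with observing live traffic):

```python
import re
from pathlib import Path

# Walk a source tree and collect Flask-style route declarations
# into a simple inventory of path -> declaring file.
ROUTE_PATTERN = re.compile(r"""@app\.route\(\s*['"](?P<path>[^'"]+)['"]""")


def discover_routes(source_root: str) -> list[str]:
    routes = []
    for py_file in Path(source_root).rglob("*.py"):
        text = py_file.read_text(errors="ignore")
        for match in ROUTE_PATTERN.finditer(text):
            routes.append(f"{match.group('path')}  ({py_file})")
    return sorted(routes)
```

Even a crude inventory like this, diffed on every merge, surfaces endpoints nobody remembers approving.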
Ironically, AI itself can help with this process. Using prompted AI models to analyze and audit API maps helps uncover anomalies, risky exposure, and consolidation opportunities. This is AI assisting not in building more, but in cleaning up what we already have.
2. Organization-Wide Standardization of Prompt Engineering and Tooling
Controlling both the inputs to and outputs from AI tools goes a long way toward keeping the generated code consistent. Simple steps, like agreeing on which AI-powered IDEs and models are approved for use inside the organization, reduce variation. This also makes rolling out new models easier and makes prompts more likely to be reproducible across engineers' workstations.
More powerful still is aligning on the rules.md-style files you require AI-assisted developers to provide as context to their agents. The more complex the codebase, the more helpful it is for all engineers to work from the same set of rules, giving the agent context on how to generate code that fits best with the existing structures.
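As a hypothetical illustration (the file name and every rule below are invented, not a standard), such a shared rules file might look like:

```markdown
# rules.md — shared context for AI coding agents (example)

- Route all new HTTP endpoints through the existing API gateway; do not
  create standalone servers or new top-level route prefixes.
- Validate request bodies with the shared schema helpers before touching
  the database; never trust client input.
- Every new endpoint must name an owning team and be registered in the
  API catalog before merge.
- Follow the repository's existing error-response convention; do not
  invent new error shapes.
```

The point is less the specific rules than that every agent in the organization is steered by the same ones.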
We're not going to put the generative genie back in the bottle. But we can guide it, contain the blast radius, and use it to fuel responsible innovation. That work starts not with code, but with clarity.