Sedai Secures $20M to Scale the Self-Driving Cloud and Redefine DevOps with AI Agents

Sedai, the company behind the first self-driving cloud platform, has announced a $20 million Series B round to scale its vision of autonomous infrastructure management. The funding, led by AVP (Atlantic Vantage Point) with support from Norwest, Sierra Ventures, and Uncorrelated Ventures, will fuel Sedai’s expansion into new domains like LLM optimization, GPU resource management, and intelligent orchestration for platforms such as Databricks and Snowflake.
The platform marks a turning point in DevOps, replacing traditional alert-and-dashboard paradigms with AI agents that learn from production environments and act autonomously to optimize cost, performance, and availability.
“Just like Waymo proved that self-driving cars are possible, Sedai proves that self-driving infrastructure is not only possible, it’s necessary,” said Suresh Mathew, CEO and Founder of Sedai.
What Self-Driving Infrastructure Really Means
While most monitoring tools simply generate alerts, Sedai’s approach is far more proactive. The platform observes traffic, application behavior, and infrastructure configurations in real time, then makes autonomous decisions that improve performance and reduce costs—without requiring human intervention.
This shift from observability to autonomy is what makes Sedai a truly “self-driving” platform. Its system doesn’t just flag problems. It solves them.
Under the hood, Sedai uses a multi-agent AI architecture that continuously adapts to changing workloads and system states. At the heart of this system is deep reinforcement learning (DRL)—a powerful form of machine learning where agents learn by trial and error. In Sedai’s case, agents are trained to dynamically scale infrastructure resources such as CPU and memory based on actual performance outcomes. Over time, these agents learn which actions lead to the best results in live environments.
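The trial-and-error idea behind this kind of learning can be sketched in a few lines. The following is a toy illustration of reinforcement-style resource tuning, not Sedai's actual implementation: the state is a coarse CPU-utilization band, the actions are scale down, hold, or scale up, and the reward penalizes both SLO risk (under-provisioning) and wasted spend (over-provisioning). All names and reward values here are invented for illustration.

```python
import random

ACTIONS = ["scale_down", "hold", "scale_up"]
STATES = ["low_util", "ok_util", "high_util"]

def reward(state, action):
    """Toy reward model: scaling up under high load and scaling down
    under low load are good; everything else wastes money or risks SLOs."""
    if state == "high_util":
        return 1.0 if action == "scale_up" else -1.0
    if state == "low_util":
        return 1.0 if action == "scale_down" else -0.5
    return 1.0 if action == "hold" else -0.5

def train(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    """Epsilon-greedy value learning over (state, action) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                       # observed utilization band
        if rng.random() < epsilon:                   # explore a random action
            a = rng.choice(ACTIONS)
        else:                                        # exploit best known action
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # one-step update; this toy treats each decision as a contextual bandit
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

After training, the learned policy scales up under high utilization and scales down under low utilization, without those rules ever being written explicitly; a real system would learn from live latency and error signals rather than a hand-coded reward.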
This intelligence is further enhanced by techniques like anomaly detection and causal inference, allowing Sedai to predict failures and pinpoint root causes before customer experience is affected. And with seasonality modeling, the system automatically adjusts to recurring patterns like daily traffic spikes or end-of-month processing loads, optimizing infrastructure before demand surges even occur.
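Seasonality-aware pre-scaling can be illustrated with a deliberately simple per-hour baseline; a production system would use far richer time-series models. The function names, headroom factor, and capacity figure below are assumptions for the sketch.

```python
import math
from collections import defaultdict

def hourly_baseline(history):
    """history: list of (hour_of_day, requests_per_sec) samples.
    Returns the mean load observed for each hour of the day."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, load in history:
        sums[hour] += load
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def replicas_needed(predicted_load, capacity_per_replica=100, headroom=1.2):
    """Provision for the predicted load plus headroom, before the surge hits."""
    return max(1, math.ceil(predicted_load * headroom / capacity_per_replica))

# Simulated history: quiet overnight hours, a recurring 9am spike.
history = [(3, 40), (3, 60), (9, 900), (9, 1100)]
baseline = hourly_baseline(history)
print(replicas_needed(baseline[9]))   # pre-scale ahead of the spike hour
print(replicas_needed(baseline[3]))   # scale to a floor in the quiet hour
```

The point of the sketch is the ordering: capacity is adjusted ahead of the predicted surge rather than after a latency alert fires.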
A New Era of DevOps Efficiency
Sedai was founded by Suresh Mathew and Benji Thomas after experiencing firsthand the scaling challenges of microservices at PayPal. While DevOps accelerated deployments, it also created new burdens—endless toil, alert fatigue, and brittle systems held together by manual workarounds.
Sedai changes that dynamic by taking action. Instead of relying on engineers to interpret metrics and respond manually, the platform handles tasks like:
- Detecting and resolving infrastructure degradation in real time
- Scaling workloads vertically and horizontally based on actual traffic
- Updating configurations to optimize for cost, latency, and availability
- Restarting or healing broken services before users notice
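The detect-and-remediate pattern behind tasks like these can be sketched as a simple control loop. The `check_health` and `restart` callables below are hypothetical stand-ins, not a real Sedai API; a production agent would validate each action before applying it.

```python
def remediation_loop(services, check_health, restart, max_attempts=3):
    """Probe each service; restart unhealthy ones, bounded by max_attempts,
    and report how many remediation actions each service required."""
    actions = []
    for svc in services:
        attempts = 0
        while not check_health(svc) and attempts < max_attempts:
            restart(svc)
            attempts += 1
        actions.append((svc, attempts))
    return actions

# Toy environment: "payments" recovers after one restart, "search" is healthy.
state = {"payments": False, "search": True}
def healthy(svc):
    return state[svc]
def restart(svc):
    state[svc] = True   # in reality this would recycle a pod or redeploy

print(remediation_loop(["payments", "search"], healthy, restart))
# [('payments', 1), ('search', 0)]
```

The attempt bound matters: an autonomous remediator needs a stopping condition so a persistently broken service escalates to humans instead of restart-looping forever.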
Already, the platform has executed over 25 million autonomous actions in production, managing $3 billion in cloud spend. This has saved customers over $5 million annually, while giving engineering teams back more than 22,000 hours of productive time.
Trusted by Enterprise Leaders Across Critical Industries
Sedai is used in production by Fortune 500 leaders in cybersecurity, financial services, pharmaceuticals, education, and AI. Customers include household names like Palo Alto Networks, Experian, and McGraw Hill—companies that depend on stable, performant, and cost-efficient infrastructure at scale.
At KnowBe4, Sedai cut production costs by 50% and development costs by as much as 87%. Engineering VP Matthew Duren credited the platform not only with budget efficiency, but with transforming his own role—freeing up his team to focus on strategic initiatives instead of low-value tasks.
These results are not projections; they come from AI agents operating safely in live production environments, including high-complexity machine learning workloads.
Going Beyond Automation: Why AI Agents Are the Next Leap
It’s important to distinguish automation from autonomy. Automation executes predefined tasks based on static thresholds or scripts. Sedai’s AI agents, by contrast, observe and learn from your systems, discovering the best actions dynamically—even when conditions change.
This distinction matters. In a world of ever-evolving traffic patterns, service dependencies, and deployment architectures, static rules quickly become outdated. Sedai’s AI-first approach ensures continuous optimization, even under complexity.
For example, its platform learns how specific services behave under different loads and fine-tunes resource allocation accordingly. If latency increases due to a specific memory bottleneck, Sedai can act immediately—without waiting for a human to interpret the alert.
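The difference between a static rule and a learned one can be made concrete. Below is an illustrative contrast, not Sedai's method: a fixed threshold set once versus a baseline derived from the service's own recent behavior. The numbers and names are invented for the example.

```python
import statistics

def static_rule(latency_ms, threshold=500):
    """Classic automation: a fixed threshold, set once and rarely revisited."""
    return latency_ms > threshold

def adaptive_rule(latency_ms, recent, k=3.0):
    """Learned baseline: flag values more than k standard deviations above
    the service's recent behavior, so the rule tracks changing conditions."""
    mean = statistics.fmean(recent)
    sd = statistics.pstdev(recent) or 1.0
    return latency_ms > mean + k * sd

recent = [40, 42, 38, 41, 39, 43, 40, 41]   # this service normally runs ~40 ms
print(static_rule(120))            # False: a 3x latency regression goes unnoticed
print(adaptive_rule(120, recent))  # True: far outside the learned baseline
```

A 120 ms response is a serious regression for a 40 ms service, yet it sits comfortably under a generic 500 ms alert threshold; a per-service baseline catches it immediately.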
A Platform for the Entire Engineering Organization
Sedai delivers value across every role in the engineering stack:
- SREs and DevOps engineers reduce toil and meet reliability goals without burnout.
- Developers focus on shipping code, while Sedai ensures optimal configurations in production.
- Engineering leaders gain operational efficiency and massive cloud savings.
- Architects and CTOs turn infrastructure into a strategic differentiator, not a liability.
With just 15 minutes of setup, teams can connect Sedai to their cloud and APM tools. From there, the platform begins learning, validating safe optimizations, and ultimately taking action in live production—with a full audit trail for compliance.
What’s Next: Optimizing the AI Infrastructure Stack
With its Series B funding, Sedai will expand its capabilities into some of the most pressing challenges in modern AI infrastructure, including:
- Self-tuning for LLM-based applications, ensuring optimal configuration during inference
- Autonomous GPU orchestration, managing expensive compute resources in real time
- AI-driven optimization of data platforms like Databricks and Snowflake
These efforts align with a future where the workloads themselves—AI models, inference pipelines, real-time analytics—demand equally intelligent infrastructure layers to support them.
“As cloud adoption increases, companies are struggling to improve performance while reducing cost. AI agents are uniquely positioned to solve this at scale,” said Manish Agarwal, General Partner at AVP.
The Future of Cloud Infrastructure Is Autonomous
The rise of autonomous cloud platforms signals a broader industry shift—from human-in-the-loop systems toward intelligent agents that operate independently in real time. As enterprises scale their cloud footprints and embrace increasingly complex, distributed architectures, manual infrastructure management is reaching its limits.
DevOps, once seen as the ultimate solution for faster deployment and operational agility, is now under pressure from mounting complexity, alert fatigue, and cost inefficiencies. Traditional observability and automation tools offer visibility and scripting—but they still rely on human engineers to analyze, interpret, and act. This reactive approach struggles to keep pace with modern service demands.
Autonomous platforms represent the next evolutionary step. By integrating deep reinforcement learning, causal inference, and adaptive scaling into core infrastructure workflows, they offer the ability to self-optimize and self-heal in production—continuously and without intervention. The result isn’t just operational efficiency, but a structural transformation: fewer outages, faster releases, better cost control, and improved developer experience.
As the ecosystem matures, this shift will impact everything from how teams are staffed and structured, to how applications are architected, tested, and deployed. Early adopters are already proving that autonomous operations can yield tangible gains in productivity, performance, and financial ROI.
While Sedai is among the leaders bringing this vision to life, the larger takeaway is clear: cloud infrastructure is no longer something engineers must constantly manage—it’s becoming something that manages itself.