Anton Onufriienko, Managing Director at Devart – Interview Series

Anton Onufriienko, Managing Director at Devart, is a technology executive and operator with deep experience scaling software businesses, driving revenue growth, and leading large cross-functional teams across SaaS, enterprise software, and financial services. Over the course of his career, he has progressed from building sales organizations and launching startups to overseeing full P&L operations for major business units, including Devart’s largest division with more than 130 employees. Prior to becoming Managing Director, he served as Devart’s Chief Revenue Officer and Head of Sales, where he led go-to-market strategy, pricing transformation, and international growth initiatives. He is also CEO of TMetric, a time tracking and profitability platform focused on helping service-driven businesses gain operational clarity.
Devart is a software company specializing in database development, data connectivity, integration, and productivity tools for developers, DBAs, analysts, and enterprise teams. Founded in 1997, the company is best known for its dbForge suite of database management tools, which support major database systems including SQL Server, MySQL, Oracle, and PostgreSQL. Devart also develops data connectivity solutions such as ODBC, ADO.NET, Python, and Delphi connectors, alongside Skyvia, its cloud-based no-code data integration platform for ETL, automation, backup, and workflow orchestration. The company serves more than 500,000 users globally, including a large share of Fortune 100 organizations, and has increasingly focused on integrating AI-powered capabilities into its products through tools like dbForge AI Assistant, which helps developers generate, optimize, troubleshoot, and explain SQL queries using natural language.
You’ve progressed from building and leading sales teams to running full P&L operations and now managing Devart’s largest business unit. How has that journey shaped your approach to integrating AI into product strategy and decision-making at scale?
Sales taught me to measure ROI on everything. Moving into a CRO role, I scaled that discipline across functions. Running the BU forced me to apply it to AI itself.
I take a practical view of AI. It’s not that I’m doubtful: three out of our four product bets for 2026 are AI-native. But I believe hype gets in the way of real, lasting results.
There’s a meme going around that sums up where the industry often goes wrong. Companies swap $400 SaaS subscriptions for homegrown tools that cost $1,000 a month in API fees and need constant fixes. That’s not real change; it’s a costly show.
The lesson I picked up in sales is simple: every initiative pays its way, or it dies. I run our AI rollout the same way I once ran a sales territory. Explicit ROI hypothesis per workflow, a three-wave rollout, and documented impact before scaling.
Our North Star metric is Revenue per Employee, and our target is to more than double it by the end of 2028. You don’t close that gap by hiring. You close it by changing what work looks like, and AI is the only realistic mechanism at that magnitude.
My filter on every AI initiative, internal or product, is the same: what’s the measured value, who pays for it, and how do we know it worked? Anything that fails those three questions doesn’t belong in production. The cost of getting this wrong compounds fast, and most companies will find that out the expensive way.
Devart has built a strong reputation around database tools and developer productivity. How are you embedding AI into these products in a way that delivers real value rather than surface-level automation?
Our users are hardcore technical specialists: DBAs, senior engineers, data architects. They detect surface automation in seconds and resent being sold marketing toys dressed as innovation. Two years ago, when AI hype peaked and competitors raced to bolt chat panels onto every UI element, the temptation to follow was real. I’d seen that pattern before, in mobile, in cloud, in low-code, and I refused to repeat it.
The discipline was straightforward: customer value first. Building AI features nobody asked for, that don’t deliver real value, is the worst possible use of finite engineering resources. That’s especially true when your audience can spot the difference immediately.
What changed in 2026 is that AI moved from hype into a real technical revolution. The gap between what these systems could do in 2023 and what they can do today is not incremental. It’s a completely different category of capability. We can now solve problems that were genuinely unsolvable before: secure enterprise data access for AI agents, contextual database intelligence inside the developer’s IDE, and autonomous business analytics that don’t require a dedicated analyst.
These are new product lines that exist because AI made the underlying problem solvable. That’s the bar we hold ourselves to: a real AI product is one where removing the AI layer breaks the product. The industry has spent two years calling chat panels “AI products.” Those are features, not products.
We took longer because we wanted to get it right. The next twelve months will show whether that discipline paid off.
AI is increasingly writing, optimizing, and debugging code. How do you see this changing the role of developers working with databases over the next few years?
The value of knowing SQL syntax is depreciating fast. If AI can generate a complex multi-table JOIN in seconds and identify missing indexes from logs in minutes, an engineer’s value no longer comes from typing SQL. That part of the job is becoming a commodity.
But here’s the critical nuance that evangelists of total automation always skip. An AI mistake on the frontend is a misaligned button you refresh. An AI mistake on the database is a wiped production environment, a PII leak, or a transactional shutdown of the entire business.
Databases hold state. They don’t forgive hallucinations.
That asymmetry redefines the role completely. Over the next two to three years, database developers and DBAs will evolve from coders into architects and auditors. Their primary work shifts to three things:
- Designing reliable architectures that AI cannot reason about on its own, because it lacks business context.
- Setting hard guardrails and security policies for AI agents that touch production systems.
- Reviewing and auditing the code machines generate before it reaches the database.
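The guardrail idea in the second point can be sketched as a pre-execution policy check that an agent framework runs before any statement reaches the database. This is an illustrative sketch, not Devart's implementation; the statement categories and the approval flag are assumptions.

```python
# Statement types an AI agent may run unattended (assumed policy).
SAFE_STATEMENTS = {"SELECT", "EXPLAIN"}
# Statement types that always require explicit human sign-off.
DESTRUCTIVE_STATEMENTS = {"DELETE", "UPDATE", "DROP", "TRUNCATE", "ALTER"}

def check_agent_sql(sql: str, human_approved: bool = False) -> bool:
    """Return True if an AI agent may execute this statement."""
    keyword = sql.lstrip().split(None, 1)[0].upper().rstrip(";")
    if keyword in SAFE_STATEMENTS:
        return True
    if keyword in DESTRUCTIVE_STATEMENTS:
        # Destructive work passes only with a human in the loop.
        return human_approved
    # Anything unrecognized is denied by default.
    return False

print(check_agent_sql("SELECT id FROM users"))            # read-only: allowed
print(check_agent_sql("DELETE FROM users WHERE id = 1"))  # destructive: blocked
```

The deny-by-default branch is the important design choice: an agent that encounters a statement type the policy never anticipated should stop, not improvise.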
The mental model I keep returning to: engineers will manage armies of AI assistants. Tools like dbForge will have to evolve from traditional IDEs into command and audit centers. The job becomes less about writing SQL manually and more about reviewing what AI generates, validating it, and enforcing the boundaries AI cannot cross safely.
The professional opportunity here is significant. Developers who level up to architecture and oversight will multiply their market value. They become the indispensable layer between AI productivity and production safety. The premium on database expertise doesn’t disappear; it shifts upward toward design, governance, and judgment, which is exactly where AI cannot operate alone.
What are the biggest limitations of current AI tools in database management today, and where do you see the most meaningful breakthroughs coming from?
Current AI is still stuck in surface-level automation. Generating a basic SELECT query or boilerplate code is no longer impressive. The bigger issue is that most AI systems still behave like blind typists rather than system architects. They can generate syntax, but they don’t truly understand the environment they’re operating in. The real breakthrough happens when AI starts reasoning about context, dependencies, state, and business logic together.
Right now, I see three major limitations holding AI back in database environments.
Firstly, there’s the context problem. Large language models can see schemas, DDL, and column names, but they don’t really understand execution plans, index fragmentation, data distribution patterns, or the actual business logic behind the data. Without that deeper understanding, a lot of optimization advice becomes statistical guessing dressed up as expertise.
Secondly, there’s the hallucination problem, and enterprises have almost zero tolerance for it at the database layer. A hallucinated JOIN can slow down production systems. A wrong UPDATE can wipe critical records. At that level, even small accuracy failures become extremely expensive very quickly.
The third issue is security and governance. No serious enterprise is going to paste production schemas or PII into a public AI tool without strong guarantees around data isolation and control. Until vendors solve that properly, AI adoption in regulated industries will stay limited.
The meaningful breakthroughs will come when AI moves beyond syntax generation and starts functioning more like a background architect or analyst.
One part of that is the semantic layer: moving from raw table names to actual business meaning. Not just “table_users,” but understanding concepts like customer cohorts, churn risk, or Q3 LTV trends.
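A semantic layer of this kind can be sketched as a mapping from business concepts to one vetted SQL definition each, so every AI interaction reuses the canonical query instead of guessing. The concepts, table names, and SQL below are invented for illustration.

```python
# Minimal semantic-layer sketch: each business concept is defined once.
# All table/column names here are hypothetical.
SEMANTIC_LAYER = {
    "active_customers": {
        "description": "Customers with at least one order in the last 90 days",
        "sql": (
            "SELECT COUNT(DISTINCT customer_id) FROM orders "
            "WHERE order_date >= DATE('now', '-90 days')"
        ),
    },
    "q3_ltv": {
        "description": "Average lifetime value of customers acquired in Q3",
        "sql": (
            "SELECT AVG(lifetime_value) FROM customers "
            "WHERE acquired_at BETWEEN '2026-07-01' AND '2026-09-30'"
        ),
    },
}

def resolve(concept: str) -> str:
    """Return the canonical SQL for a business concept, or fail loudly."""
    if concept not in SEMANTIC_LAYER:
        raise KeyError(f"No canonical definition for {concept!r}")
    return SEMANTIC_LAYER[concept]["sql"]

print(resolve("active_customers"))
```

The point of failing loudly on unknown concepts is that an AI agent asked about an undefined term should surface the gap rather than invent a definition on the fly.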
Another shift is AI acting more like a senior DBA in the background. Continuously analyzing workloads, identifying bottlenecks, suggesting indexes, spotting risky queries, and catching problems before systems fail.
Then you have machine-to-machine operations, where autonomous agents monitor database load, test optimization strategies in isolated environments, and deploy improvements under human supervision.
Those are the developments that will shape the next five years of database tooling.
From your experience leading revenue and go-to-market strategy, how is AI reshaping pricing models, product packaging, and customer acquisition in software companies?
The traditional go-to-market playbook is broken. We see it in our own numbers and across the entire dev tools category.
The death of classic acquisition. Despite meaningful improvements in search rankings across our products in 2026, we’re hitting the zero-click reality. AI search delivers answers directly on the results page and starves websites of traffic. Strong rankings no longer translate to leads the way they did even two years ago.
Five years ago, a strong content strategy was enough to drive growth. Today it’s table stakes. LLMs weigh brand strength, positive mentions, and community density when forming answers. If your brand is not visible and trusted, AI systems stop surfacing you consistently. You don’t just lose traffic. You disappear from the buying journey entirely. Making things worse, the entire market has panicked into paid ads, driving CPCs to absurd levels and quietly destroying the unit economics of most SaaS companies.
This shift is hitting traditional dev tools companies particularly hard. SEO-driven acquisition channels that funded a generation of B2B SaaS are losing efficiency rapidly. Anyone still relying on them as a primary growth lever needs to be actively building alternatives right now: ecosystem distribution, community, and partnerships.
Pricing evolution: from seats to PLG 3.0. We’re entering the next phase of PLG. Per-seat pricing starts breaking down when one AI agent can do the work of multiple employees. In that environment, charging by headcount stops making sense. Companies that don’t repackage products around value rather than headcount will hemorrhage MRR over the next 24 months.
The next step is PLG 3.0: the moment when an autonomous AI agent, not a human, evaluates, tests, and purchases enterprise software. Mass adoption of that pattern is still a few years out, but architecting products and pricing for the machine buyer is a 2026 task, not a 2028 task.
Many organizations struggle to move from AI experimentation to real production impact. What are the key factors that determine whether AI initiatives actually succeed?
Most AI features fail before they’re built. They fail in the room where someone says “we need AI in this product,” not because users asked, but because the board wants an AI story or marketing thinks it’ll attract a new audience. That’s the original sin of most AI initiatives, and it shapes everything that follows.
I keep seeing the same mistakes repeated in companies that struggle to move AI from experimentation into real production impact.
The first mistake is building AI features nobody actually asked for. Once an AI feature is mandated without a genuine user need, the team works backwards from the technology to invent a use case. The result is predictable: a chat panel bolted onto an existing UI, an autocomplete that gets in the way, a “summarize” button that produces worse output than the user could write themselves. These features ship, get a press release, and quietly underperform every adoption forecast. The deeper damage is that they consume engineering capacity that should have gone to features users actually requested.
The second issue is that teams massively underestimate the difference between clean demo data and real production data. AI demos run on clean, curated examples. Production runs on the actual mess of customer data: duplicates, missing fields, ten different ways to spell the same product name, fifteen years of legacy edge cases. A model that achieves impressive accuracy in evaluation can degrade severely on live data, and most teams don’t discover this until users complain. The cost of that discovery in production trust is rarely recoverable.
Another common failure point is user research. Standard product interviews don’t work for AI features. Users can’t articulate what they want from AI because they don’t know what’s possible. Asking “would you use AI to do X?” gets polite yes answers that have no predictive value for adoption. Effective AI product research requires showing prototypes, observing real usage, and measuring whether users return after the novelty fades. Few product teams have rebuilt their research practice for this. They’re still running 2019 playbooks on 2026 problems.
And finally, many companies measure AI activity instead of business impact. “Two hundred people used the AI feature this week” is an adoption metric, not an impact metric. Real impact is cycle time reduced, quality improved, revenue generated, or cost removed. If you can’t draw a straight line from the AI feature to a number on the P&L, you don’t have a production impact. You have an expensive activity.
There’s a fifth factor that’s becoming increasingly critical and that most product teams overlook entirely.
Compliance and the AI-free build path. A meaningful share of enterprise users in finance, healthcare, government, defense, and legal operate under policies that prohibit or restrict AI features in vendor software. If your product hard-couples AI into the core experience without a way to disable or bypass it, you don’t expand your audience by adding AI. You lose a segment of your existing one.
This is exactly the problem we’re solving with AI Connectivity. Compliance teams in regulated industries don’t object to AI itself. They object to data leaving their perimeter. The solution isn’t to strip AI out; it’s to give those organizations an AI architecture that fits their constraints. That’s why AI Connectivity ships as on-premise: the AI capability stays, the data never leaves the customer’s infrastructure, and procurement passes review on the first round instead of the third.
The teams that get this right architect for compliance from day one. The teams that get it wrong discover the problem during procurement review, when the deal is already lost.
Devart operates across multiple database ecosystems. How can AI help simplify the growing complexity of managing data across different platforms?
The pain is real. A typical Fortune 500 runs eight to twelve different database engines simultaneously: legacy Oracle for finance, PostgreSQL for new services, SQL Server for ops, Snowflake or BigQuery for analytics, and increasingly a vector store for embeddings. Each has its own dialect, its own tooling, its own governance regime. A developer joining that environment can spend three months just learning where data lives and who’s allowed to touch it.
AI doesn’t fix that complexity on its own. It amplifies whatever context it’s given. Eight disconnected databases with no unified metadata produce eight disconnected sets of shallow suggestions. That’s exactly the failure mode we see in most enterprise AI rollouts on stacks.
The opportunity is a context layer that sits between AI agents and the underlying databases. One that speaks to all of them, normalizes metadata, enforces unified governance policies, and exposes a clean MCP interface so any AI agent, whether Claude, GPT, or an internal model, works across the entire estate with consistent rules.
That’s the architecture we’re building toward with AI Connectivity: an on-premise MCP server with multi-database support, a semantic layer that captures business definitions once instead of forcing every AI agent to relearn them, role-based access control at the SQL operation level, and full audit logs.
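Role-based access control at the SQL operation level, with an audit trail, might look something like the sketch below. The role names and the policy table are assumptions for illustration, not a description of how AI Connectivity is actually implemented.

```python
from datetime import datetime, timezone

# Assumed policy: which SQL operations each agent role may perform.
ROLE_POLICY = {
    "analyst_agent": {"SELECT"},
    "etl_agent": {"SELECT", "INSERT", "UPDATE"},
    "dba": {"SELECT", "INSERT", "UPDATE", "DELETE", "ALTER"},
}

AUDIT_LOG: list[dict] = []

def authorize(role: str, sql: str) -> bool:
    """Check one statement against the role policy and record the decision."""
    operation = sql.lstrip().split(None, 1)[0].upper()
    allowed = operation in ROLE_POLICY.get(role, set())
    # Every decision is logged, allowed or not, for later audit.
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "operation": operation,
        "allowed": allowed,
    })
    return allowed

print(authorize("analyst_agent", "SELECT * FROM sales"))  # allowed
print(authorize("analyst_agent", "DELETE FROM sales"))    # denied, and logged
```

Logging denials as well as approvals matters: the audit trail is how a compliance team reconstructs what an agent attempted, not just what it achieved.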
Simplification isn’t free. Someone still has to model the semantic layer and set policy. But that work happens once, not repeatedly for every AI agent you add.
You’ve led large cross-functional teams. How is AI changing internal collaboration and decision-making between product, engineering, marketing, and sales?
Most cross-functional friction was really just people waiting on information from other teams. AI collapses that friction faster than any management framework ever could.
The shifts are practical and immediate.
In product and engineering: a product manager asks a database question in plain business terms ("what's the LTV variance across our top three pricing tiers?") and gets an actionable answer on the spot, instead of filing a Jira ticket with analytics and waiting three days.
In marketing and data: cohort analysis happens inline, not through a request queue. The marketing manager asks, gets numbers, and builds the campaign, all in the same morning.
In sales and engineering: technical answers for prospects no longer require scheduling a call with a senior engineer. The sales rep gets a credible technical response in real time, and the deal cycle compresses.
Decisions move into the conversation rather than into the follow-up. The “let me get back to you with that number” pattern is dying. Meetings shrink because AI handles pre-reads and summaries that used to consume the first half of every session.
This collapse of friction forces a deeper management shift, and it’s the one most leadership teams underestimate.
Every company claims to be results-oriented. Look under the hood and most still run on proxy metrics: story points, lines of code, tickets closed, hours logged. We used activity as a proxy for value because actual value was hard to measure. AI breaks that proxy permanently. When an agent can write 10,000 lines of code or close 500 support tickets in a minute, measuring activity becomes dangerously misleading.
We’re moving explicitly to True Result-Oriented Management, where performance is measured strictly by outcome and judgment. Brutal in practice, because most performance systems aren’t built for it. People who used to hide behind high activity become visible immediately, and leadership has to be willing to act on that visibility.
The structural consequence is flatter org charts. Coordination and information-routing layers compress. Organizations that adapt fastest will operate with structurally fewer people at higher leverage.
With the rise of AI-assisted development and no-code tools, are we moving toward a future where database management becomes accessible to non-technical users?
There’s a dangerous confusion in the industry right now. People treat a side-project database and an enterprise legacy database as if they’re the same thing. They aren’t.
For small greenfield projects, democratization is already here. I’ve personally built small applications from scratch without deep database management skills. If your entire schema fits inside an LLM’s context window, AI works like magic. Citizen developers building internal tools at a small scale will be a real and growing category.
Enterprise reality is completely different. Massive legacy databases face the same problem as massive monolithic codebases: the context wall. You cannot fit fifteen years of undocumented schema evolution, cross-database dependencies, and custom trigger logic into a prompt. When AI loses context on a large database, hallucinations don’t degrade gracefully. They multiply exponentially.
The risk that gets underdiscussed is false confidence at scale. Natural language interfaces are uniquely good at producing plausible-looking but subtly wrong answers. If a SQL query has a syntax error, you get an error message. If a natural language interface misinterprets “active customers” because your data has six different definitions of activity, you get a number. The number looks fine. It might be off by 30%. The user has no way to know.
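The "active customers" ambiguity is easy to demonstrate. In the toy example below (invented data), two perfectly reasonable definitions both return a clean number, neither raises an error, and the numbers disagree.

```python
import sqlite3

# Invented data: four customers with login and order activity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, last_login TEXT, last_order TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [
        (1, "2026-01-10", "2025-06-01"),  # logs in, never buys
        (2, "2025-03-01", "2026-01-05"),  # buys, rarely logs in
        (3, "2026-01-12", "2026-01-11"),  # active by both definitions
        (4, "2026-01-08", "2024-12-01"),  # logs in, long-lapsed buyer
    ],
)

# Definition A: "active" means logged in this year.
by_login = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE last_login >= '2026-01-01'"
).fetchone()[0]
# Definition B: "active" means ordered this year.
by_order = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE last_order >= '2026-01-01'"
).fetchone()[0]

print(by_login, by_order)  # two different answers to "how many active customers?"
```

Both queries are syntactically valid and both succeed; a natural language interface that silently picks one definition hands the user a confident number with no hint that the other definition exists.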
So no, enterprise database management is not becoming a playground for non-technical users.
The Citizen DBA is a myth at scale.
The future belongs to expert data architects who use professional tools to bridge the context gap and build infrastructure that lets AI operate safely on top.
The structural fix is the semantic layer: a controlled vocabulary where business definitions are fixed once and reused across every AI interaction. That’s the core architecture we’re building into Insightis. Without it, accessibility becomes a liability.
Looking ahead, what does an “AI-native” developer toolkit look like, and how should teams start preparing for that shift today?
An AI-native toolkit is not a chatbot bolted onto an IDE. Most of what’s marketed as “AI-native” today is a chat interface plus an autocomplete model. That’s table stakes, not the destination.
To me, a genuinely AI-native toolkit needs three things.
Firstly, AI needs deep context. It has to understand your codebase, your infrastructure, your historical decisions, and your data environment continuously, not just through prompts pasted into a chat window. Most current tools fail this test. Their context resets with every session, and the user pays the cost of rebuilding it constantly.
Secondly, the tools themselves need to communicate with each other properly. Your IDE must talk to your database, the database to your observability stack, your CI/CD to your AI reviewer, and so on. The Model Context Protocol is becoming the standard layer here, with 97 million SDK downloads per month in Q1 2026, up from 100,000 in late 2024. That's a 970x increase in fifteen months and the steepest adoption curve I've seen in developer infrastructure.
Thirdly, production-grade AI requires serious safety guardrails. Blast radius preview before destructive operations. Dependency analysis. Automated rollback plans. Audit trails by default. AI without these is fine for prototypes and dangerous in production.
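The blast radius preview idea can be sketched by rewriting a destructive statement into a count before anything executes. This toy version handles only trivial single-table statements via a regex; a real implementation would use a proper SQL parser, and the schema below is invented.

```python
import re
import sqlite3

def preview_blast_radius(conn: sqlite3.Connection, sql: str) -> int:
    """Estimate how many rows a DELETE or UPDATE would touch, without running it."""
    match = re.match(
        r"\s*(?:DELETE\s+FROM|UPDATE)\s+(\w+)(?:\s+SET\s+.*?)?(\s+WHERE\s+.*)?$",
        sql, re.IGNORECASE | re.DOTALL,
    )
    if not match:
        raise ValueError("Only simple single-table DELETE/UPDATE supported in this sketch")
    table, where = match.group(1), match.group(2) or ""
    # Rewrite the destructive statement as a read-only count of matching rows.
    return conn.execute(f"SELECT COUNT(*) FROM {table}{where}").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 1), (2, 0), (3, 0)])

affected = preview_blast_radius(conn, "DELETE FROM users WHERE active = 0")
print(f"{affected} rows would be deleted")
```

A tool built on this pattern can refuse to forward the statement, or demand human sign-off, whenever the previewed count exceeds a configured threshold.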
How to prepare, concretely.
Audit your stack against those three components. Does each tool expose APIs and MCP? Does it talk to others, or sit in a silo? Does it have safety controls? Tools failing two of three are short-term assets.
Build context infrastructure now. Document schema, business definitions, and architectural decisions in machine-readable formats. Rich context isn’t built in a quarter. The teams whose AI has it in 2027 are the ones documenting today.
Run AI in production before you think you’re ready. Teams waiting for a formal “AI strategy” before shipping will be eighteen months behind teams already learning from real production failures. Pick a low-risk use case. Ship it. Build the muscle.
The teams making these decisions today will define the next decade of how software is built. The window is narrow, and it’s open right now.
Thank you for the great interview. Readers who wish to learn more should visit Devart.