
Rob Feldman, Chief Legal Officer at EnterpriseDB – Interview Series


Rob Feldman, Chief Legal Officer, is responsible for the worldwide legal and compliance functions at EnterpriseDB. An experienced executive and lawyer, he builds high-performing legal teams to support growing technology companies in dynamic business and regulatory environments. Most recently, he led a 45-person legal team at Citrix Systems, Inc. as its General Counsel, including through its $16+ billion take-private transaction in 2022. Prior to Citrix, he spent more than a decade in private practice as a technology company litigator, focused on securities fraud defense, intellectual property disputes, and government and internal investigations. Rob also serves on the UN Global Compact Legal Council, providing strategic guidance on global regulatory environments to help businesses drive transformative, long-term impact.

EnterpriseDB is a software company that provides enterprise-grade database solutions built on open-source PostgreSQL, helping organizations run mission-critical workloads with greater performance, security, and reliability. Founded in 2004, EnterpriseDB offers cloud and on-premises platforms, global support, and Oracle-compatibility tools, while increasingly focusing on AI-ready and hybrid data platforms through its Postgres AI offerings.

Given your long experience in corporate legal leadership and EnterpriseDB’s focus on enterprise-grade Postgres and sovereign AI and data platforms, how do you see liability evolving for companies that operationalize agentic AI inside critical data infrastructure?

The world of AI and data still depends on the same core principles that should have governed enterprises long before agentic systems arrived: accountability, restraint, and clarity of responsibility.

In the past, those principles were applied to people and to largely inert systems: dashboards, reports, and automated tools that didn’t initiate action on their own. Agentic AI introduces systems that behave more like participants than instruments. They can act independently, adapt over time, and increasingly interact with both humans and other agents.

If an organization lacks strong governance and control disciplines, it will struggle in this environment. Agentic AI doesn’t create new responsibility problems so much as expose existing ones. For enterprises with sound foundations, this shift actually reinforces practices they already follow, an approach we describe as “digital leashing.” For others, it’s a clear signal that practical guardrails need to be established before operationalizing agentic AI at scale.

Only about 13% of enterprises have reached this point of agentic scale successfully. They deploy 2x as much agentic AI as their peers and get 5x the ROI. But the more autonomy an AI system has, the sooner organizations must confront accountability. When an AI agent routes a claim, moves money, or mishandles sensitive data, responsibility follows the enterprise that defined the environment, set the permissions, and decided how much freedom that system had.

This is why companies need to bring clear oversight to their agentic AI use cases, and why organizations are incentivized to focus on their guardrails and governance programs. The analogy of dog ownership and digital leashing is useful. Dogs have a certain level of agency and act independently, albeit sometimes unpredictably, yet they are not legal persons. That combination, agency without personhood, is similar to where today’s agentic AI systems sit, and owners must understand that absent oversight and governance, they will bear responsibility for bad outcomes.

How should enterprises distinguish between assistive AI and agentic AI from a legal and operational perspective before deployment?

At a simple level, the distinction comes down to authority. Assistive AI supports human decision-making, while agentic AI initiates actions and executes decisions. Both can influence workflows and shape behavior (for example, in customer service or operational prioritization), but only agentic systems act on that influence independently.

If a system can trigger workflows, approve outcomes, modify system states, or take action without real-time human approval, it should be treated as agentic. That determination needs to happen before deployment, because once authority is granted to an agent, legal and operational responsibility shifts with it. Organizations must be mindful of this distinction so they do not discover too late that they’ve unintentionally delegated decision-making power, and with it, accountability.
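To make that test concrete, here is a minimal sketch of how a pre-deployment capability check might look in code. Everything here is hypothetical: the capability names and the `is_agentic` helper are invented for illustration, not drawn from any EnterpriseDB product.

```python
# Hypothetical pre-deployment check: treat a system as agentic if any of its
# capabilities let it act without real-time human approval.
AGENTIC_CAPABILITIES = {
    "trigger_workflow",
    "approve_outcome",
    "modify_system_state",
    "act_without_realtime_approval",
}

def is_agentic(capabilities: set[str]) -> bool:
    """Return True if the system should be governed as agentic, not assistive."""
    return bool(capabilities & AGENTIC_CAPABILITIES)

# A drafting assistant stays assistive; one that can also close tickets on its
# own crosses into agentic territory and should be governed accordingly.
assert not is_agentic({"draft_reply", "summarize"})
assert is_agentic({"draft_reply", "modify_system_state"})
```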

Can established legal doctrines such as negligent delegation and respondeat superior realistically be applied to autonomous AI systems, and where do those frameworks start to break down?

They apply more directly than many assume. These doctrines exist to address situations where authority is delegated and harm occurs, which is precisely one of the potential challenges agentic AI introduces.

The issue is not with the legal doctrine, but whether organizations understand the responsibility they assume when deploying autonomous AI, and the need to govern those systems accordingly.

When organizations fail to define scope, permissions, and supervision, they create legal liability. The issue is rarely that the law can’t handle agentic AI, but rather that enterprises have not clearly defined what their systems were authorized to do or how they are to be governed.

What practical steps should CIOs and legal teams take today to define and mitigate liability when AI workflows continue to learn and adapt in production environments?

The first step is treating sovereign control over AI and data as mission-critical. Organizations can’t meaningfully govern liability if their AI systems and data are fragmented across environments they can’t fully observe or manage. The 13% of enterprises succeeding with agentic AI at scale start with this foundation.

In practice, that means constraining data access, clearly defining which actions agents can perform autonomously, and placing human oversight around high-impact decisions. It also requires logging and traceability, so behavior can be reviewed when and if needed. Organizations that adopt these measures early will reduce both legal exposure and operational friction down the line.
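As an illustration only, a minimal Python sketch of what those guardrails could look like: a policy table of permitted actions, a human-approval gate on high-impact decisions, and an audit record for every decision. The names (`AGENT_POLICY`, `execute_action`) and the claims-processing actions are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Hypothetical policy: which actions an agent may take on its own, and which
# high-impact actions require human sign-off before execution.
AGENT_POLICY = {
    "summarize_claim": {"autonomous": True},
    "route_claim": {"autonomous": True},
    "approve_payout": {"autonomous": False},  # high-impact: human approval required
}

def execute_action(agent_id: str, action: str, human_approved: bool = False) -> str:
    """Gate an agent action behind policy and log every decision for audit."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "human_approved": human_approved,
    }
    policy = AGENT_POLICY.get(action)
    if policy is None:
        record["outcome"] = "denied: action not authorized"
        audit_log.info(json.dumps(record))
        raise PermissionError(f"'{action}' is not an authorized agent action")
    if not policy["autonomous"] and not human_approved:
        record["outcome"] = "held: awaiting human approval"
        audit_log.info(json.dumps(record))
        return "pending_approval"
    record["outcome"] = "executed"
    audit_log.info(json.dumps(record))
    return "executed"  # a real system would perform the side effect here

# Autonomous routing proceeds; moving money waits for a human.
execute_action("claims-agent-1", "route_claim")      # -> "executed"
execute_action("claims-agent-1", "approve_payout")   # -> "pending_approval"
```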

How do you recommend enterprises leash or govern agentic AI through policy, technical controls, or contractual safeguards to reduce the risk of unintended harm?

The starting point is sovereignty. Enterprises need environments where their AI systems, data, and execution context are observable and enforceable at scale. Governance can’t rely on policy alone. Policy sets expectations, but technical controls determine what systems can actually do, whether data is at rest or in motion, and how models are allowed to operate.

Some agents belong in fenced environments with no production access. Others may operate with limited permissions and approval thresholds. Fully autonomous agents should be rare and carefully supervised. Contracts can help clarify responsibility, but they do not replace the need for internal control and accountability.
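One way to make those tiers explicit is a small declarative policy, sketched below with hypothetical tier names, actions, and dollar thresholds. It shows the structure of the idea, not a real product configuration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTier:
    """Hypothetical autonomy tier for an agent deployment."""
    name: str
    production_access: bool                    # may the agent touch production?
    allowed_actions: list[str] = field(default_factory=list)
    approval_threshold_usd: float = 0.0        # actions above this need human sign-off

# Fenced: sandboxed experimentation, no production access at all.
FENCED = AgentTier("fenced", production_access=False)

# Limited: narrow permissions with a low human-approval threshold.
LIMITED = AgentTier(
    "limited",
    production_access=True,
    allowed_actions=["route_claim", "draft_response"],
    approval_threshold_usd=1_000.0,
)

# Fully autonomous tiers should be rare and carefully supervised.
AUTONOMOUS = AgentTier(
    "autonomous",
    production_access=True,
    allowed_actions=["route_claim", "draft_response", "approve_payout"],
    approval_threshold_usd=10_000.0,
)
```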

Does the shift toward enterprise-controlled or sovereign AI environments change who ultimately bears risk when an AI agent causes financial or operational damage?

It doesn’t change who bears the risk. It makes accountability clearer, and in many ways reduces risk. When enterprises control the data, infrastructure, and execution context, they remove the variables introduced when data and tooling are in the hands of third parties.

Control over data and AI tooling is a strength. Sovereignty gives organizations the visibility and authority required to manage risk responsibly. Without that control, enterprises expand their risk profile.

From your perspective, what role do transparency and auditability play in reducing legal exposure when running autonomous AI applications?

They’re foundational. Auditability turns autonomous systems into defensible systems.

When incidents occur, regulators and courts ask practical questions: what did the system know, what was it authorized to do, and why did it act? The enterprises that can demonstrate oversight and auditability are in a far stronger position than counterparts who come up empty-handed.

As federal AI guidance continues to evolve, how should companies prepare for differing state-level legal obligations related to AI liability?

Organizations cannot wait for regulators to hand down a body of detailed rules specific to AI. Existing state and federal law gives us 95% of the clarity we need to use AI responsibly and avoid significant liability events.

That clarity includes designing systems to meet the most demanding product liability standards, which will necessarily include things like responsible development of AI capabilities, pre-release testing, transparency and risk disclosure, post-release auditing, human oversight, and training for users of AI capabilities. These basic and familiar steps matter more than trying to predict specific regulatory outcomes.

What are the most important questions technology buyers should ask vendors about autonomy, oversight, and liability before adopting agentic AI systems?

With agentic AI, accountability ultimately rests with the party that authorizes autonomy. So, the four main questions you should be asking vendors are:

  1. Who controls the system in production?
  2. How are permissions tested and enforced?
  3. How is learning constrained?
  4. What audit evidence is available if something goes wrong?

If a vendor cannot provide clear answers, enterprises should proceed with caution. Going back to the dog analogy: breeders matter, but if something goes wrong, responsibility may rest with the owner.

Thank you for the great interview. Readers who wish to learn more should visit EnterpriseDB.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.