Gerald Kierce, CEO and Co-Founder of Trustible – Interview Series

Gerald Kierce, CEO and Co-Founder of Trustible, is a technology and policy leader focused on operationalizing responsible AI. He leads Trustible’s mission to help organizations build trust, manage risk, and comply with emerging AI regulations. Previously, he served as Vice President & General Manager of AI Solutions at FiscalNote, where he oversaw enterprise AI products, and held senior roles across corporate development, product, customer success, and executive operations. His career has consistently sat at the intersection of technology, regulation, and scalable enterprise execution.
Trustible provides an AI governance platform that helps organizations inventory AI systems, assess and mitigate risk, and operationalize compliance through structured workflows and documentation. Designed for legal, compliance, and AI teams, the platform centralizes governance activities, aligns AI use cases with regulatory frameworks, and enables faster, more transparent deployment of responsible AI across the enterprise.
You moved from product marketing and Chief of Staff work into leading AI solutions at FiscalNote before founding Trustible. What did you see in those roles that convinced you AI governance needed a dedicated platform, and what problem were you determined to solve first when you launched Trustible?
I was fortunate to hold many roles during my 8+ years at FiscalNote, where I joined as an early Seed/Series A employee and left as a senior executive after the IPO.
Across product marketing, Chief of Staff work, and eventually leading AI solutions at FiscalNote, I kept seeing the same issue emerge from different angles. AI governance is fundamentally a sociotechnical problem, but most organizations were approaching it in fragmented ways. Teams treated AI performance, security, privacy, ethics, and legal reviews as separate tracks, often owned by different functions with little shared operational spine tying them together. Those five dimensions absolutely matter, and they need to be addressed collaboratively. But where organizations were struggling was translating that sociotechnical intent into something durable once AI moved into real decision making.
At the same time, the regulatory environment around AI was clearly changing. The EU AI Act and related standards signaled a shift toward governing AI as regulated infrastructure rather than experimental technology. What became apparent was that many companies were trying to map policy and regulatory expectations onto AI systems after deployment, instead of designing governance that could continuously operationalize regulatory intent across those sociotechnical dimensions.
My experience at FiscalNote was important because we were applying AI to the policy, legal, and regulatory landscape itself. We were helping organizations understand how laws evolve, how requirements are interpreted, and how regulatory expectations translate into operational obligations over time. That experience made it clear that effective AI governance requires the same discipline in reverse: applying policy and regulatory thinking directly to how AI systems are built, deployed, monitored, and adapted as conditions change.
Customers consistently described the same pain points. They could not confidently answer what AI systems were in production, which ones were high risk under emerging regulations, who was accountable when systems crossed functional boundaries, or how to demonstrate ongoing compliance as models, data, vendors, and regulations evolved simultaneously.
When we launched Trustible, the first problem we set out to solve was turning sociotechnical governance from theory into operational reality. We focused on creating a system that connects technical behavior, use case risk context, ownership, and regulatory expectations in one place. Trustible was built to give organizations a living system of record for AI, with continuous visibility and accountability, so governance could keep pace with both technological change and regulatory evolution rather than lag behind it.
From the front lines, what have you learned over the past year about why governance programs stall once AI moves into real decisions, workflows, and customer-facing experiences?
Once AI moves out of experimentation and into real workflows, governance tends to stall for very practical reasons rather than philosophical ones. Most organizations simply do not know how to evaluate AI risk in a way that maps to how the systems are actually being used. They can assess models in the abstract, but they struggle to evaluate risk at the use case level, where context, impact, and downstream decisions matter far more than technical metrics alone.
This problem becomes even more pronounced with generative AI. A single foundation model might be used for customer support, internal research, decision support, or content generation, each with very different risk profiles. Without a structured way to assess and compare those uses, teams either over-rotate toward caution or move forward without real confidence.
Third-party AI further complicates things. Organizations rely heavily on vendors and embedded AI capabilities, yet lack consistent methods to evaluate those systems, understand upstream controls, or determine how vendor risk translates into their own regulatory and operational exposure. As a result, reviews become subjective and slow.
These challenges are amplified by gaps in expertise and ownership. Governance responsibilities are often spread across legal, compliance, security, data, and product teams without a shared framework or a clearly accountable owner once systems reach production. When this is combined with ill-suited tooling like spreadsheets, document repositories, or legacy GRC platforms, governance teams lose visibility into what is changing and why it matters.
Finally, ownership is often unresolved. In many organizations, there is no clearly accountable owner for an AI system once it crosses from experimentation into production. Without a named business owner who is responsible for outcomes, governance becomes advisory and progress slows.

At its core, governance stalls because organizations are applying old playbooks to fundamentally new technology. Those playbooks were built for static systems and periodic reviews. AI requires continuous risk evaluation, clear ownership tied to outcomes, and tooling that reflects how systems actually behave in production rather than how they were approved on paper.
How do you define Year Two governance, and what changes when an organization shifts from initial adoption to ongoing monitoring, drift management, and continuous compliance?
Year Two AI governance is the moment when AI stops being treated as a series of projects and starts being treated as underlying infrastructure for decision-making. In the first year, AI governance is largely about enablement: teams focus on approving use cases, documenting models, and putting review processes in place so AI can move forward responsibly.
As AI systems scale and become embedded in core business processes, the focus shifts. The question is no longer whether something should be deployed, but whether it can be operated safely and reliably over time as data, users, vendors, and regulations change. AI governance becomes continuous rather than episodic, triggered by real changes in behavior or context rather than calendar-based reviews.
Risk also becomes dynamic. Instead of assigning a static risk rating at launch, organizations need to understand how risk evolves as models drift, scopes expand, or new stakeholders interact with the system. Compliance follows the same shift. Regulatory requirements move from being mapped to policies into being enforced through live controls, monitoring signals, and continuously captured evidence.
Another key aspect of Year Two AI Governance is the introduction of real AI incident management. Organizations need to know what systems are being monitored, prioritize them based on inherent risk, integrate the right data to surface meaningful signals, and define clear alerting and escalation criteria. This allows teams to intervene early, before issues turn into incidents.
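To make that concrete, here is a minimal sketch of what risk-tiered alerting criteria could look like in code. The thresholds, tier names, and drift score below are hypothetical illustrations, not Trustible's actual implementation; the point is simply that higher inherent risk should mean tighter alert and escalation criteria.

```python
from dataclasses import dataclass

# Hypothetical drift thresholds per inherent-risk tier: higher-risk
# systems trip alerts and escalations at lower drift levels.
THRESHOLDS = {
    "high":   {"alert": 0.10, "escalate": 0.20},
    "medium": {"alert": 0.20, "escalate": 0.35},
    "low":    {"alert": 0.35, "escalate": 0.50},
}

@dataclass
class MonitoredSystem:
    name: str
    risk_tier: str  # "high", "medium", or "low"
    owner: str      # named business owner accountable for outcomes

def evaluate_signal(system: MonitoredSystem, drift_score: float) -> str:
    """Map a monitoring signal (here, a 0-1 drift score) to an action."""
    limits = THRESHOLDS[system.risk_tier]
    if drift_score >= limits["escalate"]:
        return f"ESCALATE to {system.owner}: drift {drift_score:.2f}"
    if drift_score >= limits["alert"]:
        return f"ALERT {system.owner}: drift {drift_score:.2f}"
    return "OK"

# A high-risk system alerts at a drift level a low-risk one would tolerate.
support_bot = MonitoredSystem("support-triage", "high", "cx-lead@example.com")
print(evaluate_signal(support_bot, 0.12))  # -> ALERT cx-lead@example.com: ...
```

In practice the signal could be any monitored quantity, such as input drift, error rates, or human-override frequency; the structure of tiered criteria stays the same.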
With fragmented systems and limited resources, what are the first governance capabilities you think companies should standardize across the organization?
When resources are limited, organizations need to be deliberate about where they start, because early choices set the trajectory for everything that follows. The first priority is gaining reliable visibility into where AI actually exists in the business. Many teams believe they have only a handful of AI systems, only to discover shadow AI, embedded vendor capabilities, and quietly scaled use cases that were never formally reviewed. Without a living view of what is in production, governance discussions remain theoretical and disconnected from reality.
Once visibility exists through your AI Inventory, it’s about driving accountability into AI use cases. Governance breaks down quickly when responsibility is spread across committees or functions. Organizations need to clearly assign who is accountable for outcomes when an AI system makes or influences decisions, not just who built it or reviewed it initially. This clarity becomes especially important when incidents occur or when models evolve beyond their original scope.
From there, teams need a practical way to reason about risk. This means establishing a shared approach to risk classification that works across internally built systems, generative AI use cases, and third-party vendors. Without a common risk lens, organizations either over-scrutinize low-impact systems or under-monitor the ones that matter most.
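As a rough illustration, a shared risk lens can be as simple as a common record shape applied to every inventory entry, whether the system is built in-house, a generative AI use case, or a vendor product. The fields and scoring below are assumptions for the sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One inventory entry: risk is assessed per use case, not per model."""
    system: str    # e.g., a shared foundation model or vendor tool
    use_case: str  # the specific application of that system
    origin: str    # "internal", "generative", or "third_party"
    owner: str     # accountable business owner
    impact: int    # 1 (low) to 5 (high): effect of a bad decision
    autonomy: int  # 1 (human-reviewed) to 5 (fully automated)
    tags: list[str] = field(default_factory=list)

    def risk_tier(self) -> str:
        # Hypothetical scoring: the same lens regardless of origin.
        score = self.impact * self.autonomy
        if score >= 15:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

# The same foundation model lands in different tiers depending on use case.
inventory = [
    AIUseCase("gpt-style-llm", "internal research summaries", "generative",
              "research-lead", impact=2, autonomy=2),
    AIUseCase("gpt-style-llm", "customer-facing support replies", "generative",
              "cx-lead", impact=4, autonomy=4),
]
for entry in inventory:
    print(entry.use_case, "->", entry.risk_tier())
```

The key property is that one model can yield several inventory entries with different tiers, which is exactly the use-case-level view of risk described above.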
Finally, governance has to generate evidence as a byproduct of normal operations. We often like to talk about “Say It, Do It, Prove It” as a way of demonstrating trustworthiness in your AI governance. Capturing approvals, changes, and monitoring signals as systems run allows organizations to respond to audits, incidents, customer requests, and regulatory questions with confidence rather than reconstruction. These foundations do not need to be perfect at the start, but they do need to be coherent and repeatable if governance is going to scale.
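A minimal sketch of that "evidence as a byproduct" idea, assuming a simple append-only log: governance actions write timestamped records as they happen, so audit responses become queries rather than reconstruction. The field names and JSONL storage are illustrative choices, not a required design.

```python
import json
import time

EVIDENCE_LOG = "evidence.jsonl"  # append-only; illustrative storage choice

def record_event(system: str, event: str, actor: str, detail: str) -> None:
    """Capture approvals, changes, and monitoring signals as they occur."""
    entry = {
        "ts": time.time(),
        "system": system,
        "event": event,   # e.g., "approval", "scope_change", "drift_alert"
        "actor": actor,
        "detail": detail,
    }
    with open(EVIDENCE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Evidence accumulates as part of normal operations, not after the fact.
record_event("support-triage", "approval", "risk-committee",
             "approved for EU rollout")
record_event("support-triage", "drift_alert", "monitor",
             "input drift 0.12 over 7 days")
```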
Why do you believe AI governance needs to be treated with the same seriousness as cybersecurity or GRC, and where do leaders most underestimate the operational workload?
AI governance carries systemic risk that is comparable to cybersecurity and GRC, but with added complexity. Like cybersecurity failures, AI failures can propagate quickly and invisibly across an organization. Like GRC, AI intersects with legal, ethical, and operational obligations. Unlike either, AI systems can change behavior over time without explicit human action.
Where leaders tend to underestimate the workload is in the ongoing operational demands. Monitoring is continuous rather than periodic. Coordination spans product, data, IT, legal, compliance, and procurement teams. Change management is constant because models, vendors, use cases, and regulations evolve simultaneously.
Organizations that treat AI governance as a one-time compliance exercise inevitably struggle. Those that approach it as operational infrastructure, much like security or reliability engineering, are far better positioned to scale AI safely and sustainably.
As U.S. states push forward on AI rules while federal policy remains contested, how should enterprises design governance that stays resilient through regulatory uncertainty?
The regulatory environment for AI is uncertain and evolving. The most resilient governance programs are built around requirements rather than individual regulations. Instead of reacting to each new law with bespoke processes, organizations should focus on the common expectations that appear across jurisdictions, such as inventory, transparency, accountability, risk assessment, human oversight, and documentation.
When governance systems are modular, new regulatory requirements can be mapped onto existing controls rather than forcing teams to reinvent their approach each time the landscape shifts. This reduces friction and helps governance keep pace with policy change.
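One hedged sketch of what that modularity can look like: controls are defined once against the common requirements, and each new regulation is expressed as a mapping onto those controls, so a new law becomes a gap analysis rather than a new program. All control and regulation entries below are illustrative, not a legal mapping.

```python
# Controls the organization already operates, keyed by common requirement.
CONTROLS = {
    "inventory":       "AI system register kept current",
    "transparency":    "user-facing AI disclosures",
    "accountability":  "named owner per production system",
    "risk_assessment": "use-case risk classification",
    "human_oversight": "human review for high-risk decisions",
    "documentation":   "living system-of-record docs",
}

# Each regulation maps onto the shared requirements (illustrative mappings).
REGULATIONS = {
    "EU AI Act (high-risk)": ["inventory", "risk_assessment",
                              "human_oversight", "documentation",
                              "transparency"],
    "Hypothetical state law": ["inventory", "transparency",
                               "accountability", "incident_reporting"],
}

def gap_analysis(regulation: str) -> list[str]:
    """Return the requirements a regulation needs that have no control yet."""
    return [req for req in REGULATIONS[regulation] if req not in CONTROLS]

for reg in REGULATIONS:
    gaps = gap_analysis(reg)
    print(reg, "-> covered" if not gaps else f"-> gaps: {gaps}")
```

Under this framing, a new law mostly reuses existing controls, and the output of the gap analysis is the actual work to be done.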
The goal is not to optimize for compliance with today's rules, but to build governance that can adapt as expectations evolve.
Looking toward 2026, which AI governance capabilities do you expect to become non-negotiable as organizations scale AI across more business units?
As AI moves from isolated pilots to systems that shape real-world decisions, governance expectations are changing just as quickly. By 2026, organizations will no longer be able to rely on the playbooks that worked in 2024 and 2025, when AI oversight was often manual, episodic, and centered on individual reviews. Continuous monitoring will become table stakes, because static documentation and point-in-time assessments will not satisfy regulators, boards, employees, or customers in a dynamic AI environment.
As AI becomes embedded across more teams and workflows, organizations will also need consistent governance across increasingly complex AI supply chains. Internal models, third-party vendors, embedded AI features, and autonomous components will all need to be governed through the same lens, rather than treating vendor AI as a blind spot or assuming responsibility ends at procurement.
Audit-ready evidence will need to be available on demand as regulatory enforcement tightens and public expectations for transparency rise. This means capturing governance activity as AI systems are designed, deployed, and monitored, rather than reconstructing decisions after an incident or audit request.
Finally, governance will need to be embedded across the full AI lifecycle. Oversight will not be a legal review at deployment, but an operational capability integrated into SDLC, MLOps, and procurement workflows for third parties. Organizations that build these capabilities will be better positioned to adapt to regulatory uncertainty, respond to incidents, and scale AI faster and more safely as expectations continue to evolve.
If you were advising a company that already has AI in production but no formal governance program, what would a realistic first 90 days look like?
The first 30 days should focus on gaining basic visibility. That means identifying what AI systems are in production, understanding where they influence real decisions, and assigning clear ownership.
The next phase is about establishing baseline controls. Organizations should define how they classify risk, introduce approval checkpoints for higher risk systems, and begin monitoring the areas that matter most.
In the final stretch, governance needs to move from setup to operation. Monitoring should be integrated into existing workflows, escalation paths should be clearly defined, and evidence should begin accumulating naturally as systems run.
The goal over the first 90 days is not perfection. It is momentum. A governance program that functions imperfectly in practice is far more valuable than one that exists only on paper.
Thank you for the great interview. Readers who wish to learn more should visit Trustible.