Wendy Gilbert, Senior Vice President of Product, Mark43 – Interview Series

Wendy Gilbert, Senior Vice President of Product at Mark43, brings more than two decades of experience in public safety technology, with a career spanning product management leadership roles across organizations like CentralSquare Technologies, TriTech Software Systems, and VisionAIR. Having steadily advanced through roles focused on records management systems (RMS) and public safety platforms, she now leads product strategy at Mark43, where she is responsible for shaping the evolution of cloud-native solutions used by law enforcement agencies. Her tenure reflects a deep specialization in modernizing mission-critical software for first responders, with a focus on usability, compliance, and operational efficiency in high-stakes environments.
Mark43 is a leading provider of cloud-native public safety software, offering an integrated platform that includes records management (RMS), computer-aided dispatch (CAD), analytics, and mobile tools designed for law enforcement and emergency services. Built on a SaaS architecture, the platform enables agencies to streamline report writing, manage cases, and access real-time data from anywhere, significantly reducing administrative workload and improving response times. Its systems are designed to replace legacy infrastructure with scalable, secure, and mobile-first solutions, helping hundreds of agencies enhance situational awareness, collaboration, and overall public safety outcomes.
You’ve spent over two decades building products across companies like VisionAIR, TriTech, CentralSquare, and now Mark43. How has your perspective on public safety technology evolved, and what feels fundamentally different about this current wave of AI adoption?
Technology has changed a lot over the past two decades. For most of that time, technology was designed to support the mission after the fact. Systems captured what happened, but they did not actively help shape what should happen next. That is what is fundamentally different now.
For the first time, we are not just adding technology to public safety workflows. We are reimagining those systems as AI-native from the ground up, which entirely changes the role technology plays. We are moving from systems of record to systems of action. Instead of just documenting incidents, we are helping agencies interpret, prioritize, and act in real time. This is the first time in my career that I have seen technology truly operate at the speed public safety deserves, which is to deliver products that can transform how agencies function day-to-day and ultimately deliver better public safety outcomes.
Mark43’s research shows strong support for AI among law enforcement, but also a clear expectation for human oversight. What does meaningful human involvement actually look like in day-to-day operations?
In every product decision we make, the principle is clear. AI supports. Humans decide.
AI can structure a report, surface links across records, or highlight key information in a call. But an officer, a dispatcher, or an investigator is always the one applying judgment and making the call. In practice, that means AI accelerates the path to insight while keeping accountability with the human. It drafts, suggests, and connects, but it does not decide.
That is especially important in CAD and RMS workflows, where decisions carry real consequences. The role of AI is to reduce friction and cognitive load so public safety professionals can focus on the work that requires their expertise.
A recurring concern is whether AI-assisted reports and decisions will hold up in court. How do you design systems that are not only operationally useful but also legally defensible?
Coming from a CAD and RMS background, defensibility is not a "nice-to-have." If records from a system cannot stand up in court, the tools are not ready for public safety use.
This starts with transparency. Every AI-generated output must be directly linked to its source data. Users need to be able to trace exactly where information came from, how it was generated, and how it was used. If that chain is not clear, the system will not hold up under scrutiny. The system also needs to know its limits. When data is incomplete, we design AI to surface that gap, not attempt to resolve it on its own. It does not guess.
Another critical piece is alignment with how agencies actually operate. AI cannot dictate policy or workflow. It must conform to the legal requirements, reporting standards, and operational realities of each agency.
We build configurability directly into the system. Compliance validations and AI permissions can be set at the report and offense-code level, so agencies can choose where AI assistance is appropriate and where it is not. For example, an agency may enable AI support for lower-level incidents while restricting it in cases that carry a higher evidentiary standard.
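The per-offense-code permission model described above can be sketched as a simple policy lookup. This is an illustrative example only; the offense codes, policy table, and function names here are hypothetical and do not reflect Mark43's actual product or API.

```python
# Hypothetical policy table: offense code -> whether AI report drafting
# is enabled. An agency might allow AI for lower-level incidents while
# restricting it where the evidentiary standard is higher.
AI_POLICY = {
    "THEFT-MISD": True,   # lower-level incident: AI drafting allowed
    "HOMICIDE": False,    # higher evidentiary standard: AI restricted
}

def ai_assist_allowed(offense_code: str, default: bool = False) -> bool:
    """Return whether AI report assistance is permitted for this offense code.

    Unknown codes fall back to a conservative default (AI disabled).
    """
    return AI_POLICY.get(offense_code, default)
```

Defaulting unknown codes to "disabled" mirrors the conservative posture described in the interview: AI assistance is opted into per report type, not assumed.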
We also maintain full transparency into how AI is used. Every instance of AI involvement is logged, and full draft history is preserved within the report. That creates a clear, reviewable record that supports both internal accountability and external legal scrutiny.
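The logging approach described above, where every AI action is recorded with its source data and full draft history is preserved, can be sketched as an append-only structure. All class and field names here are hypothetical, not drawn from Mark43's implementation.

```python
import datetime

class ReportAuditLog:
    """Append-only record of AI involvement and draft history for one report.

    Entries are never overwritten, so reviewers can trace what was
    generated, which records informed it, and how the draft evolved.
    """

    def __init__(self, report_id: str):
        self.report_id = report_id
        self.entries = []  # every AI action, linked to its source records
        self.drafts = []   # full draft history, preserved for review

    def log_ai_action(self, action: str, source_record_ids: list):
        """Record one AI action together with the records that informed it."""
        self.entries.append({
            "action": action,
            "sources": list(source_record_ids),
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def save_draft(self, text: str):
        """Append a new draft version; earlier versions are kept intact."""
        self.drafts.append(text)
```

Keeping both the action log and every draft version is what makes the record reviewable by supervisors, attorneys, and auditors after the fact.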
These systems must stand up to scrutiny from supervisors, attorneys, and the public to be operationally useful and effective.
From your experience, where is AI already delivering measurable value in public safety today, and where is it still falling short of expectations?
The most immediate value is in the core workflows I have worked on my entire career.
Report writing is essential, but it’s also a persistent drain on time. AI is changing that by turning documentation into something that happens alongside the work, not after it. That is a meaningful shift for officers and agencies managing staffing constraints.
But the bigger shift is happening in how agencies move from information to action. AI can surface what matters in real time, not just after the fact. Where I see the gap is not in the technology. It is in how organizations operationalize it. AI requires clear policies, governance, and training to be effective. Agencies that treat it as a workflow transformation are seeing results. If agencies see it only as a feature, it won’t unlock the same opportunity.
Bias remains one of the most critical challenges in AI, especially in law enforcement where historical data can reflect systemic issues. How does Mark43 approach mitigating bias while maintaining accuracy and trust in its systems?
Bias is a real concern, especially in systems that rely on historical data.
From a product standpoint, the answer is not to obscure how the system works. It is to make it more transparent and more accountable. We focus on improving data quality at the point of entry, making outputs consistent, and ensuring there is always human review. In CAD and RMS systems, better data leads to better decisions. AI should reinforce that, not undermine it.
The goal is to ground decisions in more complete and consistent information, while ensuring every output can be examined, audited, and improved over time.
Transparency and auditability are often discussed as requirements for responsible AI. What does full auditability look like in practice for agencies using AI-driven tools?
In public safety systems, auditability has always been essential. AI raises the bar.
For agencies using AI, there should be complete visibility into what was generated, what data informed it, and how it was used. Nothing should exist without a clear connection back to the underlying record. Every output should be explainable. Every step should be reviewable. That is how you build systems that can be trusted, not just operationally, but publicly.
Many public safety agencies still rely on legacy systems and fragmented data. How important is infrastructure modernization before AI can truly deliver meaningful results?
AI is only as powerful as the foundation it sits on. I have worked with legacy CAD and RMS environments for much of my career, and I have seen firsthand how much fragmented systems limit what agencies can do. Investigators using these systems often have to query multiple tools just to understand a single case. That is not sustainable.
AI depends on connected, accessible data. If systems are siloed or require manual workarounds, AI cannot deliver meaningful outcomes. It becomes another layer of complexity instead of a force multiplier. Modern, cloud-native infrastructure unlocks the possibility for AI across an entire connected platform.
There is growing interest in AI systems that can take action rather than just provide insights. Do you see a future where AI actively participates in workflows like case management or dispatch, and where should boundaries be set?
AI will continue to move deeper into workflows, especially in areas like call handling, triage, and case management. It will increasingly coordinate workflows, prioritize actions, and surface the next best steps in real time. However, in public safety, there will need to be clear boundaries. AI can recommend, prioritize, and elevate key information. It can reduce the time it takes to move from information to action. But decisions that affect people’s lives must remain with trained professionals. Our goal is clarity and confidence at the right moment.
Public trust is essential in policing, particularly when new technologies are introduced. What safeguards are most effective in ensuring AI strengthens trust rather than undermines it?
We recognize that trust is something public safety agencies work to earn every day. Technology has to meet that same standard. For AI, that starts with visibility and transparency. Users need to understand what the system is doing and why. It also requires consistency, governance, and the ability to review outcomes over time. We build with a feedback loop directly from the field, because trust is established through real-world use and continuous validation.
Looking ahead over the next few years, how do you expect the role of AI in public safety to evolve from administrative support toward real-time decision-making?
AI will become embedded across the full lifecycle of public safety, from the moment a call comes in to how a case is resolved. The biggest shift will be how quickly agencies can move from information to action. That has always been the challenge in CAD and RMS systems. We are moving toward a model where the system understands how your agency operates, anticipates the first steps in a workflow, and helps execute them, all while remaining human-centered.
The role of AI is to reduce administrative burden, unify fragmented systems, surface the right information at the right time, and support better decisions under pressure. AI cannot replace experience. It helps ensure the people on the front lines are better informed, prepared, and supported in every decision they make.
Thank you for the great interview. Readers who wish to learn more should visit Mark43.