Pablo Ormachea, VP of Data at Motus – Interview Series

Pablo Ormachea, VP of Data at Motus, builds enterprise AI and analytics systems designed to move quickly while standing up to regulatory and financial scrutiny. He leads fully remote, cross-functional teams and focuses on evidence-driven decision systems that improve retention, expand margins, and deliver measurable ROI. At Motus, he re-engineered analytics for more than 350,000 drivers, achieving 60× faster reporting with zero timeouts, and shipped AI/ML systems including anomaly detection and churn forecasting that have saved clients millions. He also co-authored Motus’s AI governance framework, enabling safe LLM experimentation with clear defaults, strong auditability, and consistent business logic across the data stack.
Motus is a workforce management and mobility software company that helps organizations manage vehicle reimbursement, mileage tracking, and mobile workforce operations. Its cloud platform automates tax-advantaged reimbursement programs, delivers real-time reporting and insights, and helps enterprises reduce costs, improve productivity, and manage compliance for employees who drive as part of their jobs.
You’ve built a unique career at the intersection of AI engineering, data strategy, and regulation — from Harvard Law to leading data and AI at Motus. What key experiences shaped your approach to building AI systems that are both technically advanced and compliant with strict regulatory frameworks?
I learned early to treat compliance like an engineering constraint, not a legal afterthought. If you build the highway, you can drive at highway speeds. If you pretend it’s a dirt road and floor it anyway, you do not move faster. You just crash sooner.
Harvard Law helped in a surprising way because the common law system is basically residual-driven learning. A rule meets reality. Edge cases expose where it fails. Doctrine refines.
That’s the same mental model I use for AI in production. Every residual is a gift. It tells you where your assumptions diverge from the real world, and it gives you a concrete path to tighten the system.
So, I optimize for two things at once: shipping velocity and burden-of-proof. The goal is not “innovation versus compliance.” The goal is building systems that can move quickly and still answer, clearly and repeatably, “How do you know?”
You co-authored Motus’s AI governance policy that streamlined approvals while maintaining strong controls. What principles guided you when designing that policy, and how do you balance innovation speed with audit readiness?
We did not set out to write rules. We drew a map. When AI adoption starts, interest comes from every direction, and velocity can turn into noise, or worse, liability. So the first job is clarity: where LLMs can run and where they cannot, what data stays strictly inside, and what kinds of experiments are allowed in a safe lane.
The balance comes from making the safe path the easy path. Governance fails when it’s a committee. It works when it becomes defaults: approved tools, clear data boundaries, standard logging, and a fast approval lane for edge cases. The goal is that builders do not need to renegotiate safety every time they ship.
Then audit readiness becomes a byproduct. You are not scrambling to assemble evidence after the fact because the system generates the evidence as it runs.
You’ve said AI practices should meet “even IRS-level scrutiny.” Can you share an example where regulatory considerations directly influenced a technical AI or ML decision at Motus?
In regulated workflows, the question is not just “is the model accurate?” It’s “can you show your work later?” That reality shapes what “good” looks like at Motus.
It changes design choices. For certain use cases, we bias toward approaches that are explainable, replayable, and easy to audit. Sometimes that means simpler model families. Often it means deterministic guardrails, versioned features, and logging inputs and outputs in a way that supports true replay.
A concrete example: when we updated parts of our reimbursement logic and reporting, we pushed hard on traceability at key decision points. We wanted the system to answer, on demand, what rule fired, what data it used, what version was running, and what would change the outcome. It made the AI components more usable, and it made the whole workflow easier to defend.
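As a rough, hypothetical sketch of the traceability he describes, the snippet below records, for each reimbursement decision, which rule fired, what data it used, what version was running, and what the outcome was, and can replay the decision from that record. The rule, rate, and field names are illustrative assumptions, not Motus's actual logic.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """Everything needed to explain and replay one reimbursement decision."""
    rule_id: str          # which rule fired
    rule_version: str     # what version was running
    inputs: dict          # the data the rule used
    outcome: str          # what the system decided
    timestamp: str


def apply_mileage_rule(trip: dict, rate_per_mile: float = 0.67) -> DecisionRecord:
    # Hypothetical rule: reimburse logged business miles at a fixed rate,
    # and flag trips missing odometer evidence for manual review.
    if trip.get("odometer_start") is None or trip.get("odometer_end") is None:
        outcome = "flagged: missing odometer readings"
    else:
        miles = trip["odometer_end"] - trip["odometer_start"]
        outcome = f"approved: ${miles * rate_per_mile:.2f}"
    return DecisionRecord(
        rule_id="mileage_reimbursement",
        rule_version="2024.06",
        inputs=trip,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


def replay(record: DecisionRecord) -> bool:
    """Re-run the same versioned rule on the stored inputs and confirm the outcome matches."""
    return apply_mileage_rule(record.inputs).outcome == record.outcome


if __name__ == "__main__":
    trip = {"driver_id": "D-1042", "odometer_start": 18_200, "odometer_end": 18_260}
    record = apply_mileage_rule(trip)
    print(json.dumps(asdict(record), indent=2))
    print("replay consistent:", replay(record))
```

Persisting records like this is what makes "what would change the outcome" answerable on demand: the stored inputs can be edited and re-run against the same rule version.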
The payoff compounds. When you can replay behavior and slice errors, residuals stop being mysterious. They become a prioritized backlog: what failed, where, why, and what change closes the gap.
Motus operates solutions for vehicle reimbursement and risk mitigation that must satisfy IRS and other regulatory requirements. How does AI improve compliance and accuracy in these enterprise use cases?
AI helps in two ways: it reduces manual friction, and it strengthens defensibility.
On reimbursement, the value is not just automation, it’s consistency. AI can help classify trips, detect anomalies, and surface missing information earlier, which reduces downstream reconciliation. Nobody wants reimbursement to become a monthly archaeology project. The compliance benefit comes from better measurement and better documentation. You support outcomes with a clear record rather than relying on after-the-fact reconstruction.
On risk, AI is useful because point-in-time checks are not enough. Enterprises want continuous awareness of what changed, what looks off, and what needs attention. The best AI systems here are not dramatic. They’re quiet, consistent, and measurable.
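As an illustrative sketch of the kind of quiet, measurable anomaly check described here, the snippet below flags mileage that falls far outside a driver's own historical pattern using a robust z-score. The data, column names, and threshold are hypothetical, not a description of Motus's models.

```python
import pandas as pd

# Hypothetical trip log: one row per driver per day of reported business miles.
trips = pd.DataFrame({
    "driver_id": ["D-1", "D-1", "D-1", "D-1", "D-2", "D-2", "D-2", "D-2"],
    "miles":     [42.0,  38.0,  45.0, 180.0,  12.0,  15.0,  11.0,  14.0],
})

# Flag days that sit far outside each driver's own pattern
# (a robust z-score against the driver's median, so one outlier does not hide itself).
def flag_unusual_mileage(df: pd.DataFrame, threshold: float = 3.5) -> pd.DataFrame:
    def score(group: pd.DataFrame) -> pd.DataFrame:
        median = group["miles"].median()
        mad = (group["miles"] - median).abs().median() or 1.0  # avoid division by zero
        group = group.copy()
        group["robust_z"] = 0.6745 * (group["miles"] - median) / mad
        group["flagged"] = group["robust_z"].abs() > threshold
        return group
    return df.groupby("driver_id", group_keys=False).apply(score)

flagged = flag_unusual_mileage(trips)
print(flagged[flagged["flagged"]])  # surfaces D-1's 180-mile day for review, evidence attached
```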
Leading remote, cross-functional teams that collaborate with Legal, Security, Finance, and Product is no small feat. What are the biggest challenges you’ve faced aligning these groups around data and AI initiatives?
The hardest part is that each group is rational, and they optimize for different risks.
Security worries about exposure. Legal worries about defensibility. Finance worries about cost and predictability. Product worries about speed and customer value. Data and engineering worry about feasibility and reliability. If you treat those as competing agendas, you stall.
The fix is shared language and clear lanes. We align on the decision at stake, define the boundaries, and agree on what evidence “good” requires. Then we build defaults so most work can move without ceremony.
I’ve found that clarity beats persuasion. When people can see the map, alignment becomes much easier.
You’ve driven major performance improvements — like 60× faster reporting for 350,000+ drivers and millions in client savings. How do you decide which AI/ML projects to prioritize for both tactical impact and strategic value?
I prioritize projects that pass three tests.
First, they must change a real decision or workflow, not just produce a clever score. If the output doesn’t reliably change behavior, it’s a demo, not a product.
Second, they must be measurable. My grandparents used to say “well measured is half done.” In regulated settings, it’s more than half. If we can’t define success, error modes, and monitoring up front, it means we don’t understand the work yet.
Third, they must be defensible under scrutiny. That includes data provenance, access boundaries, and the ability to explain and replay outcomes.
When a project passes those tests, it tends to create both tactical wins and strategic compounding. At Motus, that’s how we’ve delivered step-change improvements, including materially faster reporting at scale, fewer exceptions, and automation that translates into real client time savings.
Trust and explainability are critical for enterprise AI adoption. How does your team ensure models are interpretable and trustworthy for stakeholders across business units?
Trust comes from clarity, consistency, and a system that can explain itself under pressure.
We design systems with a replay button. Same inputs, same version, same output, plus an evidence trail of what changed over time. We also make residuals visible. Every miss is information. If you instrument errors properly, you can explain behavior in plain language and improve it in a disciplined way.
When a decision has audit exposure, we bias toward simpler models plus strong measurement over opaque complexity. Practically, that means clear data definitions, evaluation that slices performance by meaningful segments, monitoring for drift, and a documented change process. Stakeholders don’t need every technical detail. They need confidence that the system is measured, bounded, and improving.
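As a hedged illustration of what "evaluation that slices performance by meaningful segments" and a basic drift check could look like, the sketch below reports accuracy per business segment and computes a population stability index against a training-time baseline. The segment names, scores, and the 0.2 threshold are hypothetical assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical churn-forecast evaluation set: actuals, predictions, and a segment per account.
eval_df = pd.DataFrame({
    "segment":   ["enterprise", "enterprise", "mid_market", "mid_market", "smb", "smb"],
    "actual":    [1, 0, 1, 0, 0, 1],
    "predicted": [1, 0, 0, 0, 0, 1],
})

# Slice accuracy by segment so a model that only works for one customer type
# cannot hide behind the overall average.
by_segment = (
    eval_df.assign(correct=lambda d: d["actual"] == d["predicted"])
           .groupby("segment")["correct"].mean()
)
print(by_segment)

# Simple drift check: compare this month's score distribution to the training baseline.
baseline_scores = np.random.default_rng(0).beta(2, 5, size=1_000)  # stand-in for training-time scores
current_scores = np.random.default_rng(1).beta(2, 4, size=1_000)   # stand-in for this month's scores

def population_stability_index(expected, observed, bins=10):
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    o = np.histogram(observed, bins=edges)[0] / len(observed) + 1e-6
    return float(np.sum((o - e) * np.log(o / e)))

psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.3f}  ({'investigate drift' if psi > 0.2 else 'stable'})")
```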
In enterprise settings, explainability is not a philosophical preference. It’s a requirement for adoption, and it matters when clients need to withstand future audits.
From HIPAA-grade data pipelines to IRS-compliant reporting, Motus emphasizes safe, scalable AI. What best practices would you recommend to other AI leaders working in regulated industries?
A few principles that travel well:
- Treat compliance as the highway. Build paved roads so teams can move fast safely.
- Define boundaries early. Be explicit about what data cannot leave, what tools are approved, and where models can run.
- Automate evidence. Make logging, lineage, and versioning defaults, not a scramble during an audit.
- Measure before you scale. Well measured is half done. You can’t improve what you can’t see.
- Operationalize residuals. Turn misses into an error taxonomy and a prioritized improvement backlog (see the sketch after this list).
- Design for adoption. Great models are part statistics, part partnership, and largely change management.
If your governance lives in a PDF, it won’t scale. If it lives in the system, it will.
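As a minimal sketch of operationalizing residuals, the snippet below groups logged misses into an error taxonomy and ranks the buckets by frequency and estimated cost, turning them into a prioritized backlog. The categories and costs are illustrative, not Motus's taxonomy.

```python
import pandas as pd

# Hypothetical log of model misses, each tagged with a cause and an estimated cost of the error.
misses = pd.DataFrame({
    "error_type": ["missing_odometer", "stale_address", "missing_odometer",
                   "duplicate_trip", "stale_address", "missing_odometer"],
    "cost_usd":   [120.0, 45.0, 90.0, 30.0, 60.0, 150.0],
})

# Turn raw misses into a prioritized backlog: which failure mode is most frequent and most expensive?
backlog = (
    misses.groupby("error_type")["cost_usd"]
          .agg(count="count", total_cost="sum")
          .sort_values("total_cost", ascending=False)
)
print(backlog)
# The top rows are the fixes to schedule first; the taxonomy grows as new failure modes appear.
```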
With Motus at the forefront of vehicle reimbursement and risk solutions, how do you see AI evolving in this space over the next 3–5 years?
I expect two big shifts, and they reinforce each other.
First, risk will move from periodic checks to continuous, decision-grade signals. Today, most organizations still learn about driver risk too late, either after an incident or after a point-in-time review. The next wave is systems that surface risk earlier and more precisely, using patterns already present in operations: changes in eligibility, coverage gaps, unusual mileage patterns, and inconsistencies between expected and observed behavior. The goal is not to replace judgment. It’s to give safety, HR, finance, and ops a clearer early-warning panel, with fewer false alarms and better documentation for why something was flagged.
Second, reimbursement will move from paperwork to workflow. Enterprises still lose a surprising amount of time to submissions, corrections, approvals, and post-hoc cleanup. Over the next few years, I expect more automation across the reimbursement lifecycle: pre-filling what can be pre-filled, catching missing or inconsistent inputs early, routing exceptions to the right approver with context, and reducing manual back-and-forth. Done well, this makes reimbursement faster and more defensible because the evidence trail is generated as part of the process instead of reconstructed later.
What makes this exciting is how they converge when the foundation is right. When boundaries are clear and residuals are visible, you get a compounding loop: fewer exceptions, cleaner submissions, faster approvals, better risk signals, and a clearer record of how decisions were made.
The future is not “AI everywhere.” It’s AI embedded at the right moments, with strong measurement and feedback loops that keep improving.
Based on your journey through law, neuroscience, statistics, and applied AI, what guidance would you give to young professionals aspiring to lead data and AI in complex business environments?
Learn to build systems, not just models. Or put differently, build the highway, instrument the misses, and keep the map updated.
Get close to the people who live the outcome. Frontline operators often see signals before your data does. Their feedback is not “anecdotal.” It’s often the missing feature set.
Develop comfort with measurement, and humility about error. Residuals are gifts if you’re willing to listen. In regulated environments, add the discipline of burden-of-proof: be able to explain what you built, why it behaved the way it did, and what you will do when it changes.
Finally, remember that adoption is part of the work. Change management is not a soft add-on. It’s a core requirement if you want your AI to be used. That means it’s not enough to be strong on data, models, and algorithms. You have to work well across business units, earn trust, and navigate the human path that turns a good model into a real capability. If you can do that, you won’t just build models, you’ll build trust.
Thank you for the great interview. Readers who wish to learn more should visit Motus.












