Aron England, Chief Product & Technology Officer at Accruent – Interview Series

Aron England, Chief Product and Technology Officer at Accruent, is a seasoned technology and product leader known for building and scaling global teams that deliver SaaS and agentic solutions from early research through high-growth, customer-facing products. He blends deep expertise across consumer marketplaces, B2B SaaS, ecommerce, and commercial technology with strong people leadership, pairing innovation with a sharp understanding of customer problems to drive durable product-market fit and measurable business outcomes, including growth through acquisitions and IP-driven strategy.

Accruent provides software that helps organizations run the physical side of their business more efficiently, bringing together tools for facilities, assets, space, and workplace operations in one connected system. Its platform is designed to reduce fragmentation, improve visibility and decision-making, and help teams plan, maintain, and optimize buildings and equipment across a wide range of industries.

You’ve built and led high-performing global teams for more than 25 years. Looking back across startups, large enterprises, and now Accruent, what pivotal experience most shaped how you think about building trustworthy technology at scale?

From spending time at Fortune 50 companies and working in technology leadership at early-stage startups, mid-sized companies, and larger public and private companies, I have gained a wide array of experience in promoting digital transformation adoption across different industries. Most notably, I was employee number nine at DocuSign, where we were targeting a market that needed a true sea change. Pushing the analog contracting industry through a total digital transformation required not only building market trust but also legislation to make the shift safe. Many lessons from my time there can be applied to the current market for LLMs and AI tools.

At a high level, the pattern across my experience has remained consistent: trustworthy systems don’t emerge by accident. They come from intentional architecture, data consistency, transparency, and a deep understanding of how real people are using technology.

You’ve warned that by 2026 technicians will no longer accept AI systems that simply say, “trust me.” From your vantage point at Accruent, what is driving this shift in expectations among frontline and field-service professionals?

In environments where facility managers and technicians are leveraging AI to diagnose equipment failures and guide complex repairs, a misstep from a false or inaccurate recommendation can cause major business and safety risks.

Oftentimes, LLMs create blended answers from multiple pages, without citing back to the underlying evidence. As a result, if a technician follows an AI-generated step that never directly existed in the OEM manual, an organization could face major compliance backlash, as they won’t have a defensible chain of evidence for audits or safety reviews. As AI becomes table stakes and more “invisible” in software, the importance of traceability will grow.

AI hallucinations can be more than an inconvenience in regulated industries — they can create real safety, compliance, and operational risks. What kinds of hallucination scenarios concern you most when it comes to maintenance, facilities management, or asset operations?

In manufacturing, if an AI-generated suggestion tells a factory worker to take the wrong action on a critical piece of equipment, it could result in unplanned downtime, wasted material, defective end products, or damaged machinery. These can be million-dollar mistakes as manufacturing lines stand still, or even cause reputational damage if the error later leads to recalls.

These hallucinations from AI tools are also especially detrimental to industries like healthcare, as liabilities and patients’ livelihoods are at risk when there is a machine failure that was not properly maintained or fixed in time. When you deal with industries that interact with the real world, fixing mistakes isn’t as simple as hitting delete and starting over.

You’ve emphasized that every AI output must point back to original sources — manuals, data tables, diagrams, historical logs. How is Accruent designing systems that ensure traceability and eliminate “black box” answers?

We ensure that AI recommendations can be traced back to specific points in the source material, such as the exact manual page, diagram, data table, or log entry that informed the suggestion. For example, if an AI recommendation tells a facility manager in healthcare how to service a compressor, they should be able to track back, in one click, to the exact paragraph that supports that step and confirm its accuracy. To close the growing trust gap in today’s enterprise AI, it’s also important that these systems reveal which documents or pages were actually evaluated, so users know whether the AI reviewed all relevant material or only a subset.
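The traceability idea described above can be sketched as a simple data model in which every generated step carries a citation back to the passage that supports it, and any uncited step is flagged for review. This is a minimal illustration with hypothetical names (`Citation`, `ProcedureStep`, the manual filename), not Accruent's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    # Pointer back to the exact source passage that supports a step
    document: str   # e.g. a specific OEM manual file
    page: int
    snippet: str    # verbatim text the step is grounded in

@dataclass
class ProcedureStep:
    instruction: str
    citations: list = field(default_factory=list)  # empty = unverifiable

def flag_ungrounded(steps):
    """Return the instructions that cannot be traced to source material."""
    return [s.instruction for s in steps if not s.citations]

steps = [
    ProcedureStep("Isolate power and lock out the compressor.",
                  [Citation("compressor_oem_manual.pdf", 12,
                            "Lock out/tag out before servicing.")]),
    ProcedureStep("Tap the housing to free the valve."),  # no citation: suspect
]
print(flag_ungrounded(steps))  # → ['Tap the housing to free the valve.']
```

A real system would resolve each citation to a clickable link into the source document, which is the "one click" verification described above.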

Many enterprise AI tools prioritize speed, but regulated environments require audit trails, documentation accuracy, and verifiable reasoning. How do you balance innovation with the need for transparency and compliance?

Embedding AI into existing workflows is the key. This simplifies the process of layering in approvals, documentation, maintenance routines, and compliance checks to augment known practices, versus implementing a new isolated tool. This means avoiding a full overhaul of operations and allowing employees to continue working the way they have, but with manual, time-consuming processes becoming automated.

Technicians in the field rely on precise instructions. How is Accruent approaching the challenge of grounding AI outputs in authoritative source material to reduce risk and improve technician confidence?

Our approach starts with capturing and organizing manuals, diagrams, drawings, leases, and historical work orders to ensure AI is providing answers from a company’s specific content, not generic training data. When generating procedures, recommendations, or checklists, our systems are designed so that each step is traceable back to the original documentation.

Without this feature, technicians who are already squeezed for resources would have to spend even more time digging through documents manually to verify accuracy, further delaying processes and work orders.

Delivering transparent, audit-ready AI requires large volumes of structured data. What data challenges — from unstructured legacy documents to inconsistent asset histories — need to be solved to make this vision real?

Delivering audit-ready AI starts with reliable and well-organized data. However, most of the built environment still lives in analog processes, with manual data entries, scanned PDFs, and siloed spreadsheets. When there are gaps in data and asset histories are incomplete or inconsistent, AI hallucination risks increase. To make AI outputs trustworthy in regulated environments, companies must first solve legacy-data roadblocks, from unstructured formats and inconsistent histories to a lack of governance, by migrating into structured, version-controlled, centralized document and asset-data systems.

Our EDMS (Engineering Document Management System) can do that for multiple industries, including mining, utilities, manufacturing, and more. These industries often rely on physical engineering drawings and documentation, which can create version control nightmares. Using our EDMS solution to digitize these documents is the first step. From there, the software helps manage version control, workflow governance, and audit trails to ensure inconsistencies are eliminated.

As AI becomes embedded in maintenance, facilities, and asset lifecycle management, where do you see the biggest opportunities to improve productivity without compromising safety or regulatory requirements?

One of the biggest opportunities is automating mundane, non-value-add tasks for employees, such as manual data entry and scheduling work orders for technicians. From the outside, these seem like relatively easy yet time-consuming tasks. However, AI can approach them more strategically.

First, if the equipment in question is monitored with sensors, a work order may be triggered based on anomaly detection, before any true breakdown occurs. Second, AI can help automatically prioritize work orders based on urgency and schedule repairs at times that cause the least amount of disruption for a business – it can also weigh multiple simultaneous issues, costs, safety, and revenue at once for the best possible path forward.

AI has the potential to not simply “assist” maintenance and facilities teams – it will increasingly act as a digital operator.
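The prioritization logic described above, weighing urgency, safety, cost, and revenue impact at once, can be sketched as a simple weighted score over work orders. The weights, field names, and order data here are hypothetical; a production scheduler would tune them per site and add constraints such as technician availability:

```python
# Hypothetical weights: safety dominates, cost counts against priority.
WEIGHTS = {"safety": 5.0, "urgency": 3.0, "revenue_impact": 2.0, "cost": -1.0}

def priority(order: dict) -> float:
    """Score a work order on its 0-1 risk/impact factors; higher = sooner."""
    return sum(WEIGHTS[k] * order.get(k, 0.0) for k in WEIGHTS)

orders = [
    {"id": "WO-101", "safety": 0.2, "urgency": 0.9,
     "revenue_impact": 0.7, "cost": 0.3},
    {"id": "WO-102", "safety": 0.9, "urgency": 0.4,  # safety-critical issue
     "revenue_impact": 0.2, "cost": 0.6},
]

# Schedule the highest-priority orders first.
for o in sorted(orders, key=priority, reverse=True):
    print(o["id"], round(priority(o), 2))
```

Here the safety-critical WO-102 outranks WO-101 despite lower urgency and higher cost, reflecting the kind of multi-factor trade-off described above.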

Trust is becoming the new table stakes for enterprise AI. What do you believe vendors will need to do differently over the next two years to earn — and keep — that trust?

Vendors must stop assuming customers will simply “trust the model” when it comes to enterprise AI. Recommendations from AI need to show proof of how they were generated. One way to address this is with citations and clear descriptions of which documents the AI did and did not look at. For example, if an employee asks AI to analyze 1,000 leases, they should know explicitly whether it evaluated all 1,000 or only 700, and why or why not.
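The lease example above amounts to a coverage report: for every requested document, state whether it was evaluated and, if not, why. A minimal sketch, with hypothetical file names and skip reasons:

```python
def coverage_report(requested, evaluated, skip_reasons):
    """Report exactly which documents the AI did and did not review."""
    skipped = [d for d in requested if d not in evaluated]
    return {
        "requested": len(requested),
        "evaluated": len(evaluated),
        # Pair each skipped document with an explicit reason.
        "skipped": [(d, skip_reasons.get(d, "unknown")) for d in skipped],
    }

# Hypothetical scenario: 1,000 leases requested, 700 readable, 300 skipped.
requested = [f"lease_{i:04d}.pdf" for i in range(1000)]
evaluated = set(requested[:700])
reasons = {d: "unreadable scan" for d in requested[700:]}

report = coverage_report(requested, evaluated, reasons)
print(report["requested"], report["evaluated"], len(report["skipped"]))
# → 1000 700 300
```

Surfacing a report like this alongside the answer lets the user see at a glance that only 700 of 1,000 leases were actually evaluated, and why the rest were not.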

As part of this, the top factor vendors should prioritize is transparency in data usage. That includes clarity on who sees the data, how it’s being used (including any training implications), and how it is segregated or isolated from other customers’ environments.

In the next two years, earning trust will be paramount, and vendors can gain the upper hand by being explicit about AI tool limitations, keeping humans in the loop for high-risk decisions, and starting with narrow, well-bounded use cases that deliver tangible value without putting customers in a “black box” situation.

Looking ahead, how do you see AI evolving within mission-critical operations, and what role do you expect Accruent to play in setting industry standards for trustworthy, transparent AI?

AI in mission-critical operations is rapidly evolving from isolated, single-task automations into intelligent, multi-agent systems that can coordinate and optimize entire workflows. Instead of simply assisting users, AI will provide autonomous decision support, continuously monitor operational conditions, predict risks, and recommend actions with full transparency and traceability. As AI learns to combine unstructured documents, structured operational data, and real-time signals, it will become embedded directly into daily processes, driving faster, safer, and more reliable outcomes.

Over time, this will enable a shift toward autonomous operations, where systems can self-optimize and self-correct, while humans focus on oversight and strategic decision-making. As a market leader, Accruent will help set industry standards for trustworthy and transparent AI by embedding auditability, explainability, and strong governance into its platform and by collaborating with customers, partners, and regulatory bodies to define best practices for safe deployment in mission-critical environments.

Thank you for the great interview. Readers who wish to learn more should visit Accruent.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.