Efrain Ruh, CTO of EMEA at Digitate – Interview Series

Efrain Ruh is the Chief Technology Officer for EMEA at Digitate, where he leads regional technology strategy with a strong focus on enterprise AI, automation, and large-scale operational transformation. He works closely with customers across Europe, the Middle East, and Africa to help organizations deploy AI responsibly, emphasizing practical execution, governance, and measurable outcomes. Ruh is a frequent contributor to industry discussions on agentic AI, MLOps maturity, and the realities of moving from AI pilots to production systems.

Digitate is an enterprise software company and a subsidiary of Tata Consultancy Services that specializes in AI-driven automation for IT and business operations. Its core platform, ignio, enables autonomous operations by combining machine learning, knowledge graphs, and intelligent automation to detect issues, predict outcomes, and self-heal systems across complex enterprise environments. Digitate serves large global organizations seeking to reduce operational complexity while improving resilience and efficiency through applied AI.

Given your extensive technical leadership roles at Digitate and your earlier hands-on experience across IT operations, architecture, and enterprise systems, how has your view on the importance of explainable AI evolved compared to the black-box automation models many organizations still rely on today? 

Earlier generations of automation were largely reactive and rule-based, which made black-box behavior more tolerable because outcomes were highly predictable. As AI systems now move toward proactive agents that can reason and generate their own responses, the tolerance for opacity disappears. IT leaders are accountable for keeping mission-critical systems available around the clock, and that responsibility leaves very little room for experimentation with systems whose reasoning cannot be validated. Explainability has therefore shifted from a technical feature to an operational requirement that enables trust at scale. In order to trust these new AI agents, it is important that they share their line of thought (basically how they arrived at the solution) and their reasoning results.

Why are trust and explainability emerging as the biggest barriers to wider adoption of advanced AI-driven automation in enterprise IT? 

Enterprise IT operates under constant accountability and risk. When AI systems make decisions autonomously, especially in preventive or self-healing operating models, supervising teams must be able to understand why a certain action was taken. Without visibility into evidence, context, and decision logic, AI introduces uncertainty rather than reducing it. That lack of transparency, and the uncertainty it creates, erodes trust, slowing adoption among IT teams more than model accuracy or performance ever could.

What does true explainability actually look like in AI-driven IT operations, and how can it help teams validate decisions before systems act autonomously? 

True explainability is practical and operator-focused. It requires technology to clearly show the data used for reasoning, validate that the system understands the correct operational context, and explain the recommended course of action in human-readable terms. It also includes historical validation, such as whether similar decisions have been made before and what the outcomes were. This allows teams to validate actions quickly and confidently expand autonomous execution where risk is low. If people can’t quickly digest and act upon its readout, an explainability tool fails to serve its purpose.

How does a lack of explainability translate into real operational, financial, or business risk for large organizations? 

If you ran a factory and a vital piece of equipment lacked alerts to flag a maintenance issue, it would eventually start to malfunction without alerting the team, leading to poor-quality products or unplanned downtime. Similarly, AI without explainability can turn small data issues into major business incidents. An AI-based capacity forecasting system operating on incomplete data may reduce infrastructure capacity to save costs, only to cause severe performance degradation during peak processing periods. In enterprise AI systems, you often see the impact of poor explainability through missed SLAs, financial penalties, and customer impact. Likewise, aggressive alert suppression can hide critical failures, allowing outages to go undetected until they become a true emergency.

From your perspective, what role should AI platforms play in making their reasoning auditable and understandable for IT teams operating mission-critical systems? 

A well-designed AI agent or platform has explainability baked in, plain and simple. Autonomous systems should document the data used, the logic applied, the actions recommended or taken, and the outcomes that followed. This information needs to be presented in the operational language IT teams use, such as dependencies, historical incidents, and business impact, rather than abstract AI scores. Auditability is essential for accountability, learning, and long-term trust.

What architectural or design approaches are helping enterprises move away from opaque automation toward more transparent, glass-box decisioning? 

Enterprises are adopting architectures that ground AI decisions in high-quality operational data and clearly separate data validation from decision logic. Transparent systems ensure AI is working with the right inputs and assumptions before acting. Many organizations also use staged autonomy, starting with recommendations and progressing to constrained autonomous execution as confidence grows. Open data access and clear policy layers are critical to avoiding black-box behavior as autonomy increases. There’s still plenty of room for the industry to grow explainability, though, and I think we’re going to see a boom in explainability features this year.

How do organizations strike the right balance between autonomous AI action and human oversight without slowing down operations? 

The balance is achieved through risk-based controls rather than blanket oversight. Low-risk, high-frequency tasks can be automated end-to-end with guardrails, while higher-impact decisions remain human-validated until systems demonstrate reliability. As explainability improves and AI consistently proves sound judgment, autonomy naturally expands without introducing operational friction.

What advice would you give to CIOs and CTOs who want to scale AI across IT operations but face internal resistance due to visibility and accountability concerns? 

Start with transparency, not autonomy. Involve your IT teams early and prioritize visibility in both design and rollout to build trust. Resistance typically stems from a lack of insight into AI decisions, not from opposition to innovation itself. Then, focus on the use cases that reduce noise, eliminate alert fatigue, and clearly explain how decisions are made before allowing systems to act independently.

As AI systems take on more decision-making responsibility, how should leadership teams rethink governance, validation, and trust models? 

Governance must shift from static approval processes to continuous validation. Leadership teams need to define where autonomy is allowed, what evidence is required, and when human intervention is necessary. Trust should be earned through measurable outcomes such as accuracy, reduced incidents, and faster resolution times, rather than assumptions about model sophistication.

Looking ahead, how do you see explainable and transparent AI reshaping the future of autonomous IT operations over the next few years? 

Explainable AI will be a fundamental piece of the puzzle in building proactive and autonomous IT operations that scale safely, without losing the vital trust of your IT team. Generative AI is already improving transparency by enabling systems to surface evidence, validate facts, and explain decisions in human-readable terms. Over the next few years, this level of explainability will become standard, allowing organizations to move beyond experimentation and embed AI deeply into operations, governance, and workflows as a trusted partner rather than an opaque decision-maker. AI agents will also be in place to manage different activities across an organization and will become part of our everyday lives, but for that we need to be able to trust them, and accuracy and transparency are key to achieving that goal.

Thank you for the great interview; readers who wish to learn more should visit Digitate.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.