
Siddharth Rajagopal & Sujay Dutta, Authors of Data as the Fourth Pillar – Interview Series

Co-Authors: Siddharth Rajagopal & Sujay Dutta

Sujay Dutta is a seasoned technology and business leader with 25+ years of global experience. He believes the future is being shaped at the intersection of AI, Business outcomes, Culture, and Data (“A.B.C.D.”). He presently works as a Global Account Lead at Databricks.

Siddharth (Sidd) Rajagopal is a Chief Architect in the Field CTO Organization at Informatica. In his role, he engages with senior executives at enterprises, providing thought leadership on data and data management by sharing his insights and learnings.

Data as the Fourth Pillar presents the case for treating data as a fundamental element of enterprise success, on par with people, processes, and technology. Aimed at boards, CEOs, and senior executives, the book outlines a structured approach to embedding data strategy at the core of business decision-making. It introduces a maturity framework and practical metrics such as Total Addressable Value (TAV) and Expected Addressable Value (EAV) to help organizations quantify the impact of data initiatives. The authors also explore the interplay between data and artificial intelligence, highlighting how each strengthens the other. Supported by a case study from AUDI AG’s Rüdiger Eck, the book blends theory with real-world application, making it a practical guide for leaders in both SMBs and large enterprises navigating today’s competitive, data-driven landscape.

Your book title refers to data as the fourth pillar. Can you summarize what the first three pillars are, and why data should be considered the fourth pillar?

The first three traditional pillars are People, Process, and Technology, each added as enterprises have matured over the years. Historically, data was just an operational byproduct of these pillars, managed by IT. In the present AI-first age, data is no longer a byproduct. It is the primary driver of value, but it can also threaten the very existence of an enterprise, which is why we refer to Data as Fire. To succeed, data must be elevated to a co-equal Fourth Pillar. Elevated this way, data creates a flywheel effect with the other pillars, each enabling and benefiting from the others. Treating data as the fourth pillar ensures it receives the same C-suite and board-level attention as people, process, and technology, transforming it from a cost center into a measurable enterprise asset that drives business growth.

The position of Chief Data Officer (CDO) is described as a core role, recommended to liaise with the CEO, CTO, and other senior executives. Could you give us a high-level overview of what this position entails and its key responsibilities?

As the leader of the data pillar, the CDO:

  1. Acts as a value driver, accelerating business outcomes.
  2. Develops an understanding of the data intensity (QCS: Quality, Compliance, and Speed) required by business use cases.
  3. Continuously balances and grows data demand and supply through the Data Operating Model (DOM).
  4. Brings execution excellence, in terms of people, process, and technology, to the data pillar.
  5. Serves as a change agent, planning and executing structural change across the enterprise with the sponsorship of the Board and the CEO, and the involvement of the leaders of the other pillars.

Why is collecting and executing on data so critical to leveraging AI at scale?

Again, Data is like Fire: it fuels AI. An AI model must learn patterns, relationships, and behaviors directly from the data it is fed in order to deliver business impact. For AI, unstructured data (such as PDFs, images, and videos) also becomes critical, and most enterprises are not yet mature in processing it. Moreover, AI models have become a commodity; the differentiation comes from the data an enterprise puts to work with them.

The book goes into detail regarding data intensity. Could you briefly explain what this means and why it is so important?

Data intensity is a measure of how “fit for purpose” your data is for accelerating business value, especially for scaling AI. Each business use case requires data at a different intensity. Our book introduces the QCS Framework to measure data intensity across three critical dimensions:

  1. Quality: Is the data accurate, complete, consistent, and reliable? This is the “garbage in, garbage out” principle. Low-quality data leads to flawed analytics and untrustworthy AI.
  2. Compliance: Does the data adhere to all legal and ethical standards, such as privacy regulations (like GDPR) and industry-specific rules? Non-compliant data creates massive risk.
  3. Speed: Is the data available quickly enough to be useful? This refers to the velocity at which data is collected, processed, and made available for decision-making (e.g., real-time vs. batch processing).

Traditionally, enterprises have matured to execute on one or two dimensions: a bank would be able to deliver on the Q and C dimensions, while a start-up would focus on Q and S. The challenge for enterprises in the AI-first era is to execute at a high level on all three dimensions (Q, C, and S) simultaneously and consistently.
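To make the QCS idea concrete, here is a minimal, hypothetical sketch in Python (our illustration, not a framework from the book): it scores a dataset on the three dimensions using assumed 0.0-1.0 values and checks it against the intensity a given use case requires.

```python
from dataclasses import dataclass

@dataclass
class QCSProfile:
    """Data-intensity scores on a hypothetical 0.0-1.0 scale."""
    quality: float     # accuracy, completeness, consistency, reliability
    compliance: float  # adherence to GDPR, industry rules, etc.
    speed: float       # how quickly data is available for decisions

def meets_intensity(actual: QCSProfile, required: QCSProfile) -> dict[str, bool]:
    """Compare the data you have against the intensity a use case needs,
    returning a pass/fail verdict per QCS dimension."""
    return {
        "quality": actual.quality >= required.quality,
        "compliance": actual.compliance >= required.compliance,
        "speed": actual.speed >= required.speed,
    }

# Example from the text: a bank strong on Q and C but batch-oriented (low S),
# evaluated against a real-time use case that also demands speed.
bank_data = QCSProfile(quality=0.9, compliance=0.95, speed=0.4)
use_case = QCSProfile(quality=0.8, compliance=0.9, speed=0.8)

print(meets_intensity(bank_data, use_case))
# {'quality': True, 'compliance': True, 'speed': False}
```

The thresholds and scores here are assumptions for illustration; the point is that a use case only delivers its expected value when all three dimensions clear the bar simultaneously.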

Why is defining a data strategy so important, and why is this often overlooked?

Defining a data strategy is critical because it serves as the blueprint that directly connects all data activities to the enterprise’s business strategy. It outlines the roadmap for developing and leveraging data capabilities to accelerate business outcomes, such as increasing revenue, improving efficiency, and building a competitive advantage.

Despite this, a data strategy is often overlooked for several key reasons.
Historically, business leaders have viewed data as a byproduct of business operations and a technical IT problem, rather than a C-suite strategic function. Without a clear owner, such as a Chief Data Officer, this essential work often falls into a leadership vacuum. This leads companies to jump straight to exciting AI projects without a robust data foundation, which is a primary reason why so many of them fail.

Could you elaborate on what a data governance framework is, how it differs from a data strategy, and why it is needed to mitigate risks associated with data usage?

A data strategy defines the goals a business wants to achieve with its data. In contrast, a data governance framework enables business use cases to consume data at their required data intensity (Q, C, and S) so they can deliver the expected value.

The data governance framework is crucial for mitigating risk. Without governance, data becomes a liability. It ensures compliance with regulations like GDPR, preventing massive fines and legal issues. It establishes the security and privacy standards that protect against data breaches and the resulting reputational damage. Enforcing data quality prevents costly business decisions based on flawed information. And AI agents are useful only when they receive the data at the required speed.

Think of it this way: your strategy is the destination on a map; your governance framework is the traffic rules you follow to get there without crashing.

You also discuss the concept of the DOM (Data Operating Model). Could you explain what this is, and how it helps organizations operationalize their data strategy?

A Data Operating Model (DOM) is the engine that fulfills the supply of data to meet the data demand. The DOM operationalizes strategy by translating high-level goals into concrete, reusable actions. It’s a practical framework that industrializes the delivery of data at the required data intensity, comprising people, processes, and technology.

While having the right data strategy and governance reflects good intentions, success often depends on data adoption and data engineering management. Could you briefly discuss these two elements and why executives should pay close attention to them?

Success with data hinges on Data Adoption and Data Engineering Management.

Data Adoption is the cultural side: your teams actually using data to make daily decisions. Without adoption, the entire investment in the data pillar goes to waste.

Data Engineering Management is the technical backbone: building and maintaining the reliable “data factory” that collects and processes data to meet the data intensity (QCS) requirements.

Executives must champion both. Poor adoption means the investment is wasted. Poor engineering means the business operates on unfit data (i.e., data that does not meet the required data intensity), leading to costly mistakes, eroding trust, creating compliance issues, and rendering any AI initiative impossible.

The book is written with larger enterprises in mind, where roles such as CDO, data risk management, data access management, and data quality and observability teams are well-defined. Why should smaller companies also consider this book, and how can they compensate for not having these roles in place?

For a smaller company, data is in most cases its biggest differentiator. It’s far easier to build the ‘Data as the Fourth Pillar’ DNA correctly from the start than to fix a large, traditional organization later. Getting the data foundation right early provides a massive competitive advantage for growth and future AI adoption. As the CEO of one SMB told us: for me, Data is the first pillar, and I am the CDO as well.

If there is one key takeaway from your book, what would you like it to be?

The definitive takeaway is that enterprises must immediately implement the structural change to establish Data as the Fourth Pillar of the operating model, co-equal with People, Processes, and Technology. This is an existential decision that Boards and CxOs must champion, because data is the definitive differentiator and the indispensable foundation required to successfully scale AI and secure a competitive edge in the future. Enterprises that fail to embed data as a core pillar risk irrelevance and will struggle to compete in the AI-first age. The time to act is NOW!

Thank you for the great interview. Readers who wish to learn more should read Data as the Fourth Pillar.

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect those of their current or past employers.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.