Holly Grant, SVP, Strategy & Innovation, DXC Technology – Interview Series

Holly Grant, SVP, Strategy & Innovation at DXC Technology, is a technology and operations executive with deep experience spanning enterprise AI strategy, fintech, startup leadership, and operational transformation. At DXC, she helps shape the company’s AI-first innovation initiatives, including enterprise-scale AI orchestration, advisory services, and product incubation efforts designed to help organizations move from experimental AI pilots to operational deployment. Prior to DXC, she held multiple leadership roles at the Long-Term Stock Exchange (LTSE), ultimately serving as Chief Operating Officer, where she focused on operational scaling and strategic growth within the fintech sector.
DXC Technology is a global IT services and consulting company focused on helping enterprises modernize mission-critical systems across cloud computing, cybersecurity, artificial intelligence, data infrastructure, and enterprise operations. Formed through the merger of Computer Sciences Corporation and Hewlett Packard Enterprise’s Enterprise Services division, the company works with organizations across industries including healthcare, banking, manufacturing, insurance, and government. In recent years, DXC has increasingly positioned itself around AI-native enterprise transformation, offering services that integrate generative AI, intelligent automation, observability, digital twins, and large-scale IT modernization into complex corporate environments. The company also emphasizes “AI-first” operational models designed to help enterprises deploy AI securely within existing infrastructure rather than replacing legacy systems entirely.
You’ve built a career at the intersection of strategy, operations, and innovation—from scaling organizations earlier in your career to now leading Strategy & Innovation at DXC. How did those experiences shape your approach to launching LabX and designing an AI incubation environment focused on real-world business impact?
My career has taken me across family offices, startups, venture capital, and now a Fortune 500 company in the middle of a turnaround. What I’ve seen across all of those environments is that ideas don’t land on their own. The ones that actually create value tend to share three things: a real customer pulling for them, the right moment in the market, and a scope that’s clear and appropriately narrow. Miss any one of those and even a brilliant idea stalls.
That pattern shaped how I thought about LabX. You need a theory of winning—a real strategy—but you also need the operational muscle to bring it to life, and the discipline to adapt as you learn and conditions change. Strategy without execution is a deck. Execution without strategy is motion without progress. LabX is designed to hold both at once.
Under our CEO Raul Fernandez’s leadership, DXC has put AI fluency and innovation at the center of our turnaround strategy. LabX is how we translate that conviction into products, capabilities, and customer outcomes—fast enough to matter.
Many enterprises are experimenting with AI but struggle to move from pilots to production. From what you’re seeing at DXC, what are the biggest barriers preventing organizations from scaling AI beyond proof-of-concept projects?
Two barriers come up again and again, and neither of them is really about the technology.
The first is change management. AI changes how people work, what they’re accountable for, and how decisions get made. If you don’t bring your workforce along, the most elegant model in the world will sit unused. The second is that companies begin to scale AI without changing the underlying operating model. They bolt intelligence onto a specific system or application so one user can use it, but the rest of the team can’t. AI is a horizontal intelligence—it creates the most value when it can move across functions, data, and workflows. When the operating model doesn’t change, that value stays trapped locally instead of compounding across the enterprise.
So the pilot works, everyone celebrates, and nothing actually scales. That’s the pattern we’re trying to break at LabX by designing for enterprise-wide unlocks from day one.
LabX operates on a rapid concept-to-MVP cycle of roughly 90 days or less. What changes in mindset, governance, or development processes are required for large enterprises to move at that kind of speed?
The biggest mindset shift is being willing to decide earlier with less perfect information—and the discipline to cut what isn’t working. Large enterprises get comfortable with long planning cycles because they feel safe. They’re not. In a market moving this fast, a slow “yes” and a slow “no” are both expensive.
Inside LabX we assign a small triad—design, product, and engineering—to run a sprint against a real customer problem. They build a minimum viable product, test it for value and scale, and we graduate ideas that show commercial promise within 90 days. What makes that speed possible isn’t the absence of governance, it’s the presence of the right governance. Security, privacy, compliance, and responsible AI sign-off are built into the process on day one, not bolted on at the end. Every product goes through a formal governance review before it scales.
For most enterprises, getting to this kind of cadence requires protecting a space where it’s legitimate to move this way—without forcing every experiment through the same cycle time as a multi-year platform build. That’s what LabX is for us.
DXC describes LabX as a way to validate high-potential AI concepts with customers before scaling them. How does this “Customer Zero” approach help ensure AI solutions are grounded in real operational needs rather than theoretical use cases?
Customer Zero is, honestly, our edge. Before a LabX product ever goes to market, it has to survive inside DXC first. We manage 115,000 employees across 70 countries, regulated industries, complex customer contracts, legacy systems, and real operational stakes. That’s not a sanitized demo environment—that’s enterprise reality.
A traditional startup can move fast, but it can't easily replicate the lived experience of operating inside that kind of complexity. When we test a product on ourselves first, we find the places where it breaks on real data, real workflows, and real regulatory constraints—things that would otherwise have surfaced in a customer environment six months later. By the time we bring an offering to a customer, we're not pitching a theory. We can say: "Here's what it did inside our own operations, here's what we changed, here's what we measured."
It also keeps us honest. If a product can't prove itself internally, it doesn't graduate. That's a much higher bar than saying "it worked in a demo."
Enterprise environments are often filled with legacy systems, fragmented data, and regulatory constraints. How do you design AI workflows that can operate effectively within that real-world complexity?
We start from the assumption that the environment is complex—that’s the baseline, not the exception.
Architecturally, we work with a decomposable approach to our platforms. The leading AI tools are changing monthly, not yearly. If you hard-wire yourself to a single model, vendor, or framework, you’re betting that today’s leader will still be the leader in 18 months. That’s a bad bet. A decomposable architecture lets us swap components as the frontier moves, stay fluent with what’s actually best-in-class, and stress-test tools against real customer challenges rather than vendor marketing.
On the regulatory and data side, compliance is designed in from day one. Every product goes through a governance review, and responsible AI sign-off is part of the process, not an afterthought. Operating in highly regulated industries across 70 countries forces that discipline on us—which turns out to be a feature, not a bug, when we bring products to clients with the same constraints.
Traditional IT consulting relied on long planning cycles and rigid implementation frameworks. As AI evolves faster than those cycles can accommodate, how do consulting models need to change?
The honest answer is that the whole model has to shift, but if I had to pick the linchpin, it’s the value proposition. The industry has spent decades selling deliverables—decks, roadmaps, implementation plans—and getting paid for effort. In an AI-native world, clients don’t want a deliverable. They want an outcome. They want the workflow to actually run, the cost to actually come down, the revenue to actually show up.
Once you commit to selling outcomes, everything else has to change to support it. Team composition gets more technical. Engagements move from advise-and-leave to build-and-operate. Pricing shifts away from hours. The people doing the work need to be as comfortable shipping code as running a steering committee.
That’s a big cultural change for our industry, and not everyone is going to make it. The firms that do will look very different in five years than they do today.
LabX also functions as an experimentation environment for employees and technology partners. How important is internal experimentation when trying to build organization-wide AI fluency?
It’s the whole game. You don’t build AI fluency by reading about AI—you build it by trying things, watching them break, and trying again. That’s as true for a 30-year IT professional as it is for someone two years out of school.
We recently ran an AI challenge inside one of our business units and got over 1,300 unique ideas in two weeks. That’s not a statistic about a tool—that’s a statistic about what happens when you give people permission to think outside the box. The creativity already exists within the organization. Our job is to create the space for it to grow.
LabX also runs a rotation program: technical experts from across DXC spend six to twelve weeks embedded with us, building real products with the latest AI tools. When they go back to their home teams, they bring a new skill set and, more importantly, a different way of thinking. They start asking different questions of their colleagues and their customers. They become champions for what’s possible. That compounding effect across the workforce is worth more than any single product we ship.
DXC frames its approach as Human+, emphasizing that AI should expand human capabilities rather than replace them. In practical terms, how does that philosophy influence how AI solutions are designed and deployed within enterprises?
I’ll be direct: there’s a view taking hold in the industry that the most valuable thing enterprise AI can do for a company is reduce headcount. I think that’s a failure of imagination.
Cost discipline matters, but the real opportunity is growth: new revenue streams, new products, new service offerings that simply weren’t feasible before. AI’s highest-value use case is enabling people to do work that creates new business value, not just optimizing what already exists. The companies that get this right will outperform those that treat AI as a pure cost exercise.
In practice, Human+ means we design AI to handle high-volume, routine processes so our people can focus on higher-value work: strategic thinking, creative problem-solving, client relationships, and complex judgment calls. We keep human expertise and oversight at the center of every deployment, particularly where decisions carry real consequences. That’s how you build trust with clients, and it’s how you unlock durable competitive advantage.
When organizations attempt to integrate AI into existing workflows, what common mistakes do you see them making that slow down adoption or limit real business value?
Two mistakes I see constantly. The first is starting with the technology instead of the problem. Someone falls in love with a model or a vendor demo, and the initiative becomes about deploying that thing rather than solving something that actually matters to the business. The second is treating AI as an IT project instead of a business transformation. If you hand AI entirely to the CIO and ask the rest of the business to keep running unchanged, you’ll get a tool nobody uses and a budget nobody wants to defend next year.
The antidote to both is simple to say and hard to do: start with the business problem, put the right cross-functional team on it—people, process, technology—and build backwards from the outcome you’re trying to create. That’s the posture we take at LabX, and it’s how we work with customers like Ferrovial, where we’ve helped deploy AI Workbench—a generative AI offering combining consulting, engineering, and secure enterprise services, now leveraged by more than 24,000 employees with over 30 AI agents making real-time decisions. That kind of scale doesn’t happen if you treat it as an IT project.
Looking ahead, how do you expect AI incubation environments like LabX to shape the way enterprises develop, test, and deploy new technologies over the next several years?
Here’s what I think will be obvious in hindsight: the winners in this era won’t be the companies with the flashiest point solutions. They’ll be the integrators—the ones who can stitch AI across operating models, across functions, and across workflows so that intelligence isn’t trapped in a single tool or a single user’s screen.
That’s a harder problem than deploying a model. It requires deep enterprise context, the ability to work across legacy and modern systems, and the discipline to change how work actually gets done. It’s also the opportunity I’m most excited about.
Incubation environments like LabX are how we get the reps. They’re where you learn what breaks at scale, what governance actually looks like in practice, and what customers will and won’t adopt. The enterprises that invest in that kind of space now—internally or through partners—will have a very different capability curve three years from now than the ones still deciding whether it’s worth the effort. And those of us building in this space will keep finding new problems worth solving, because the technology isn’t slowing down and neither is the opportunity.
Thank you for the great interview. Readers who wish to learn more should visit DXC Technology.