
Thought Leaders

How Trusted Data Foundations Enable Orgs to Modernize, Govern and Adopt AI With Confidence


What data does your business have? Where did it come from? And what systems does this data flow through?

In 2026, if you can’t answer these questions, you don’t have the trusted data foundations to modernize, govern and adopt AI with confidence.

The AI conversation right now is happening at the wrong level of abstraction. Everyone is discussing the latest models, the Copilot integrations and so on. But the real question is whether you know your own data well enough to trust any AI system!

Here be dragons

Medieval cartographers drew monsters on the parts of the map they hadn’t explored. The phrase “Here be dragons” appears on the Hunt-Lenox Globe. It means: we don’t know what’s here, so assume the worst!

Most organizations’ data estates have areas like this. There are the well-mapped modern territories (the production databases, the core transactional systems), and then there’s everything else: the shadow databases, the test database under someone’s desk, the staging environment set up for an integration test that still has production data in it.

You can’t navigate territory you haven’t mapped, and you certainly shouldn’t build AI systems on unmapped foundations.

What do we know about the landscape?

This isn’t just a metaphor. Redgate’s 2026 State of the Database Landscape report, which surveyed over 2,000 IT professionals across the globe, gives a glimpse of what these unmapped territories look like in practice.

  • 74% of organizations now run two or more database platforms, with 25% running more than four. Data doesn’t just live in one place; it’s distributed across platforms, cloud environments and legacy systems. Each platform has its own access controls, its own query patterns, its own quirks. When data is this fragmented, the question isn’t whether you have blind spots; it’s how many you’ve got!
  • 39% still rely on manual testing and deployment. Each manual deployment carries risk: checklists that might not be followed, unclear data provenance and data whose lifetime nobody tracks.
  • 47% of multi-platform organizations have experienced security or privacy issues. Here be dragons indeed!

Despite these glaring issues, 58% of organizations are willing to accept higher risk for AI efficiency. However, it doesn’t have to be this way if you have the right foundations.

Modernize

Most database modernization projects don’t fail because the technology doesn’t work. They fail because nobody fully understands the old system: the stored procedures that encode business rules nobody documented, and the implicit data contracts between systems that exist only in the heads of people who’ve since left.

This is Chesterton’s fence applied to the data estate: Before you remove something, you need to understand why it was built that way!

In practice, that means treating your database changes with the same rigor as your application code. Version control, automated deployments, repeatable processes: the practices that application teams adopted years ago are still surprisingly rare on the database side. When database changes are manual and untracked, every step in the modernization process carries hidden risk. You can’t confidently migrate what you can’t reliably deploy.
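To make the versioned-deployment idea concrete, here is a minimal sketch in Python, using the standard library’s sqlite3 purely for illustration. The migration names and the schema_history table are hypothetical; in practice teams use a dedicated migration tool rather than rolling their own, but the core loop is the same: apply each change exactly once, in order, and record what was applied.

```python
import sqlite3

# Hypothetical versioned migrations. In a real setup these would live as
# individual SQL files in version control, alongside the application code.
MIGRATIONS = {
    "V001__create_customers": "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    "V002__add_email": "ALTER TABLE customers ADD COLUMN email TEXT",
}

def migrate(conn: sqlite3.Connection) -> list:
    """Apply any migrations not yet recorded, in version order, exactly once."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    newly_applied = []
    for version in sorted(MIGRATIONS):      # deterministic order
        if version in applied:
            continue                        # idempotent: skip what's already done
        conn.execute(MIGRATIONS[version])
        conn.execute("INSERT INTO schema_history VALUES (?)", (version,))
        newly_applied.append(version)
    conn.commit()
    return newly_applied

conn = sqlite3.connect(":memory:")
first_run = migrate(conn)    # applies both migrations
second_run = migrate(conn)   # nothing left to apply, returns an empty list
```

Because every environment replays the same ordered history, “what version is this database at?” becomes a query rather than a guess.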

Test data is the other blind spot. Organizations seeking to modernize their data estate need to validate that everything works on the other side. However, testing against production data copies creates its own problems: Sensitive data can end up in environments with weaker access controls, nobody tracks how long it persists, and compliance obligations follow the data whether you meant to copy it or not. Reliable, representative test data that doesn’t carry these risks is a prerequisite for modernizing your database safely.
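As a sketch of what “representative but safe” test data can mean, the snippet below deterministically pseudonymizes sensitive columns: the same real value always maps to the same token, so rows stay joinable across tables, but no real names or emails reach the test environment. The column names, rows and salt are illustrative assumptions, not a prescription.

```python
import hashlib

# Illustrative "production" rows; the column names are assumptions.
PROD_ROWS = [
    {"id": 1, "name": "Alice Jones", "email": "alice@example.com", "plan": "pro"},
    {"id": 2, "name": "Bob Smith",   "email": "bob@example.com",   "plan": "free"},
]

SENSITIVE = {"name", "email"}  # columns that must never reach test as-is

def pseudonymize(value: str, salt: str = "test-env-salt") -> str:
    """Deterministic pseudonym: same input -> same token, so joins still work."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:10]
    return f"user_{digest}"

def make_test_rows(rows):
    """Copy rows for a test environment, masking only the sensitive columns."""
    return [
        {k: (pseudonymize(v) if k in SENSITIVE else v) for k, v in row.items()}
        for row in rows
    ]

test_rows = make_test_rows(PROD_ROWS)
# Non-sensitive columns survive unchanged; sensitive values are replaced.
```

Determinism is the design choice that matters here: random fake data breaks referential integrity between tables, while hashed pseudonyms preserve it without carrying the original values.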

Organizations that modernize successfully treat database DevOps and test data management as first-class concerns, not afterthoughts you bolt on once the migration is underway.

Govern

There’s a temptation to treat AI governance as a simple policy exercise: 1) Write a document, 2) publish a framework and 3) tick the compliance box. But governance that exists only in documents is theatre. Real governance means building systems that make best practices the default option, not something people have to remember to do.

True governance also means consistent visibility into your database deployment pipeline, the queries running in production and where sensitive data flows. It means knowing (operationally, not theoretically) what data an AI system has access to, where it came from and who approved its use.

This isn’t an abstract aspiration. Regulation is heading squarely in this direction. The EU AI Act classifies AI systems by risk level and imposes specific obligations around data governance, traceability and human oversight for high-risk applications.

ISO 42001, the international standard for AI management systems, goes further still; it requires organizations to demonstrate how they manage data quality, provenance and lifecycle across AI systems with auditable evidence.

The common thread is that regulators aren’t going to ask whether you wrote a governance policy. They’re going to ask whether you can show them how it works:

Can you trace the data that informed a specific decision?

Can you demonstrate that sensitive information was handled in accordance with your own rules?

Can you prove that the controls you described on paper are running in production?

Adopt AI with confidence

Once you can answer these questions, you’ve built a solid foundation and are in a strong position to adopt AI. You now have confidence in your inputs, rather than a garbage-in, garbage-out problem.

The organizations that are getting real value from AI aren’t necessarily the ones with the most advanced models. They are the ones who did the “boring” groundwork: cataloguing data, establishing lineage, automating deployments, securing access controls and testing data quality.

When organizations report concerns about security, accuracy and compliance, they’re really saying that they don’t trust their own foundations enough to trust what gets built on top of them.

Don’t fall into the same trap. Modernize, govern and only then can you adopt AI with confidence.

Are you ready for AI?

Organizations wanting to adopt AI should be able to answer these three questions with confidence:

  1. Can you produce a complete inventory of where sensitive data lives across your estate?
  2. Can you trace the lineage of data from source to a point where an AI model consumes it?
  3. If a regulator asked tomorrow where your PII is, could you verify that it’s not in any of your test environments?
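A first pass at the third question can be as simple as scanning test environments for values that look like PII. The detector patterns below are illustrative assumptions (real scanners use far richer rule sets and sampling strategies), and the staging rows are invented for the example.

```python
import re

# Illustrative PII detectors; real tooling would use a much broader set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{10}\b"),
}

def scan_rows(env_name, rows):
    """Return (environment, column, pii_type) hits for string values."""
    hits = []
    for row in rows:
        for column, value in row.items():
            if not isinstance(value, str):
                continue  # only scan text values in this simple sketch
            for pii_type, pattern in PII_PATTERNS.items():
                if pattern.search(value):
                    hits.append((env_name, column, pii_type))
    return hits

# Invented staging data with a leaked email address.
staging = [{"id": 7, "contact": "carol@example.com", "notes": "renewal due"}]
findings = scan_rows("staging", staging)
# → [("staging", "contact", "email")]
```

A scan like this won’t prove absence of PII on its own, but run regularly across every non-production environment, it turns “we think test is clean” into evidence you can show.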

If you can’t, start there! Build your data landscape map and explore thoroughly. No more dragons!

Jeff Foster is the Director of Technology and Innovation at Redgate, leading technical architecture and driving scalable, sustainable innovation, with a strong focus on harnessing AI to evolve products, enhance customer value, and increase business efficiency.