Artificial Intelligence
EnterpriseDB Introduces “Intelligence per Watt” to Reduce AI Energy Consumption

EnterpriseDB has announced a new set of performance benchmarks and architectural improvements within its EDB Postgres AI platform, introducing what it calls an “intelligence per watt” standard for enterprise AI. The concept is designed to address a growing challenge: how to scale AI systems without a proportional increase in energy consumption and infrastructure costs.
The company’s latest results suggest that meaningful efficiency gains can be achieved not at the model or GPU level, but within the data layer that underpins every AI interaction. By optimizing how data is retrieved, indexed, and processed, EnterpriseDB claims it can reduce token usage, shrink infrastructure requirements, and significantly lower emissions tied to AI workloads.
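The kind of data-layer optimization described here can be illustrated with a minimal sketch. This is not EnterpriseDB's implementation; the chunk texts, the deduplication rule, and the word-count token heuristic are all illustrative assumptions, shown only to make concrete how cleaning up retrieval shrinks the tokens a model must process.

```python
# Illustrative sketch (not EnterpriseDB's code): dropping duplicate
# retrieved chunks before prompt assembly reduces the tokens sent to a model.

def dedupe_chunks(chunks):
    """Drop exact-duplicate retrieved chunks, preserving order."""
    seen, kept = set(), []
    for chunk in chunks:
        key = chunk.strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(chunk)
    return kept

def rough_token_count(text):
    # Crude heuristic: roughly one token per whitespace-delimited word.
    return len(text.split())

# Hypothetical retrieval result, with a duplicate from a second index.
retrieved = [
    "Q3 revenue grew 12% year over year.",
    "Q3 revenue grew 12% year over year.",
    "Operating costs were flat in Q3.",
]
before = sum(rough_token_count(c) for c in retrieved)
after = sum(rough_token_count(c) for c in dedupe_chunks(retrieved))
print(before, after)  # → 20 13
```

Every token removed before inference is compute the model never spends, which is the mechanism behind the claimed reduction in token usage.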
Performance Gains Focused on the Data Layer
The announcement is backed by a series of benchmarks that highlight improvements in both speed and efficiency. EnterpriseDB reports that its platform can accelerate vector indexing while using far less memory than traditional approaches, and reduce token consumption without materially degrading output quality.
In practical terms, this means AI systems can complete the same tasks with fewer computational steps. Since token generation and data retrieval are directly tied to compute usage, these reductions translate into lower energy consumption per interaction.
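The tokens-to-energy relationship is simple to state as arithmetic. The figures below are assumed for illustration (the per-token energy cost and the token counts are not vendor numbers); the point is only that energy per interaction scales with the tokens processed per request.

```python
# Back-of-the-envelope sketch with assumed figures (not vendor numbers):
# energy per interaction scales with the tokens processed per request.

JOULES_PER_TOKEN = 0.3  # assumed energy cost per token, for illustration only

def energy_per_interaction(tokens):
    """Energy in joules for one request, under the linear assumption above."""
    return tokens * JOULES_PER_TOKEN

baseline = energy_per_interaction(2000)   # hypothetical unoptimized request
optimized = energy_per_interaction(1200)  # same task with 40% fewer tokens
savings_pct = 100 * (baseline - optimized) / baseline
print(savings_pct)  # → 40.0
```

Under this linear model, a 40% cut in tokens per interaction yields a 40% cut in energy per interaction, which is why token reduction at the data layer shows up directly in the energy figures.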
The company also points to broader improvements across analytical workloads, where operations on live data can be completed dramatically faster. These gains are not limited to isolated use cases but apply across enterprise environments where AI, analytics, and transactional systems operate simultaneously.
Infrastructure Reduction and Emissions Impact
Beyond workload-level improvements, EnterpriseDB is emphasizing reductions at the infrastructure level. In a set of enterprise deployments, the company reports that its platform enabled a significant decrease in compute cores required to run applications, which in turn reduced energy usage and associated emissions.
In one example involving large-scale financial services environments, the reduction in infrastructure translated into a substantial drop in carbon output. The scale of these savings highlights how efficiency improvements at the database layer can have system-wide effects, especially in organizations operating multiple data centers.
This dual focus on infrastructure and workload optimization forms the foundation of the “intelligence per watt” framework. The idea is not just to make AI faster, but to make it fundamentally more efficient as it scales.
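One way to operationalize a metric like this is useful work completed divided by the energy consumed to produce it. The formulation and numbers below are illustrative assumptions, not EnterpriseDB's published definition.

```python
# An illustrative "intelligence per watt-hour" metric: useful task
# completions divided by the energy consumed to produce them.
# Formulation and numbers are assumptions, not EnterpriseDB's definition.

def intelligence_per_watt_hour(tasks_completed, watt_hours):
    """Tasks completed per watt-hour of energy consumed."""
    if watt_hours <= 0:
        raise ValueError("energy must be positive")
    return tasks_completed / watt_hours

# Same workload before and after a hypothetical efficiency improvement.
baseline_ipw = intelligence_per_watt_hour(10_000, 500)   # 20 tasks per Wh
optimized_ipw = intelligence_per_watt_hour(10_000, 320)  # less energy, same work
print(baseline_ipw, optimized_ipw)  # → 20.0 31.25
```

Framed this way, the metric rewards either doing more work at fixed energy or the same work at lower energy, which captures the dual focus on workload and infrastructure efficiency.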
The Growing Energy Challenge of AI and Data Centers
The importance of these improvements becomes clearer when viewed against the broader trajectory of data center growth. AI is rapidly increasing the demand for compute resources, and with it, electricity consumption.
The International Energy Agency has projected that global data center electricity demand could reach around 945 terawatt-hours by 2030, more than doubling from current levels. AI workloads are expected to be the primary driver of this increase.
This surge in demand has direct environmental implications. Data centers already account for a meaningful share of global electricity usage, and their expansion places additional pressure on energy infrastructure and emissions targets. Without improvements in efficiency, the cost of scaling AI could extend far beyond financial considerations.
What EnterpriseDB Is and Why This Matters
EnterpriseDB has long been associated with enterprise-grade PostgreSQL solutions, but its evolution into a data and AI platform provider reflects broader changes in the market. As organizations integrate AI into core operations, the boundary between databases and AI systems is disappearing.
EDB Postgres AI is designed to operate at that intersection, combining transactional processing, analytics, and AI workloads within a unified system. This approach reduces the need for multiple specialized platforms, which often require data to be duplicated and moved between environments.
By consolidating these functions, EnterpriseDB is positioning itself as a foundational layer for AI infrastructure. Its focus on efficiency aligns with a growing recognition that scaling AI is not just about increasing capability, but about doing so sustainably.
How This Compares to Other Industry Efforts
Across the industry, most efforts to improve AI efficiency have focused on hardware and model optimization. Chipmakers continue to develop more efficient processors, while AI companies are working to reduce the size and computational requirements of models.
Cloud providers are also investing heavily in data center efficiency, including cooling innovations and renewable energy integration. At the same time, data platforms are evolving to support AI workloads more directly, often by integrating vector search and machine learning capabilities into their systems.
What distinguishes EnterpriseDB’s approach is its emphasis on the data layer as the primary lever for efficiency. Rather than competing with GPUs or model architectures, it targets the operations that occur before and around inference, where significant inefficiencies can accumulate at scale.
This perspective does not replace hardware or model improvements but complements them. As AI systems become more complex and more autonomous, the efficiency of the underlying data infrastructure may play an increasingly important role in determining overall performance and cost.
A Shift Toward Measuring AI Efficiency
The introduction of “intelligence per watt” reflects a broader shift in how enterprises may evaluate AI systems. Performance alone is no longer sufficient. Organizations are beginning to consider how much energy is required to generate that performance and whether it can be reduced without sacrificing quality.
EnterpriseDB’s announcement suggests that the next phase of AI adoption will be shaped not just by what systems can do, but by how efficiently they can do it. As AI agents scale into the billions and operate continuously, even small improvements in efficiency can have a large cumulative impact.
In that context, optimizing the data layer is no longer a secondary concern. It is becoming a central part of the conversation about the future of enterprise AI.
