How Much Will the EU AI Act Actually Impact Your Business?

As new provisions take effect, here’s what companies really need to know about compliance
February 2, 2025 marked the first major milestone in the rollout of the European Union’s AI Act, with provisions banning prohibited AI practices and requiring organisations to ensure their staff have sufficient knowledge, skills and understanding of how AI works, its risks and its benefits (AI literacy). August 2, 2025 brought another critical juncture, as obligations for General-Purpose AI models kicked in.
The AI Act applies to anyone who sells, imports or makes available AI systems or general-purpose AI models in the EU, whether or not they are based in the EU. It also applies to companies based in the EU that use AI systems or models.
While companies are genuinely concerned about AI compliance obligations, the reality for most businesses will be less dramatic than the provisions suggest at first glance.
As someone who runs a global company that uses AI extensively in our document management platform, I’ve had to navigate this regulation firsthand. The truth is, for the vast majority of businesses, the AI Act is much more manageable than it initially appears—similar to how GDPR seemed overwhelming from an American perspective but proved workable once you understood the principles.
But unlike GDPR’s single implementation date, the AI Act rolls out in phases. With fines reaching up to €35 million or 7% of global turnover, and one critical enforcement wave just behind us and another major deadline ahead, getting your compliance strategy right is a must.
Where We Are On The Timeline
As of August 2025, obligations for General-Purpose AI (GPAI) models are now in effect—and this affects far more companies than most realize. If you’re using foundation models like GPT-5, Claude, or Llama in your products, you may inherit compliance duties even if you consider yourself just a “user” of the model.
The obligations include demonstrating compliance with copyright law in training data, conducting adversarial testing for security vulnerabilities, implementing robust security measures, and providing detailed technical documentation about model capabilities and limitations.
Many SaaS companies assume they’re exempt because they’re not developing models from scratch. But if you fine-tune or otherwise modify a model, you may find yourself subject to GPAI obligations. The line between “using” and “providing” AI systems is deliberately broad in the regulation.
August 2, 2026 is the big milestone to watch for. By this date, AI systems classified as “high-risk” must meet comprehensive compliance requirements. The scope is broader than many businesses anticipate, and the obligations are substantial.
High-risk classifications include systems used for recruitment and hiring, credit scoring and financial decisions, educational assessment, medical diagnosis, safety-critical infrastructure, and law enforcement applications. If your AI tool helps determine who gets hired, approved for loans, admitted to programs, or diagnosed with conditions, you’re likely in scope.
That classification comes with a real burden. You’ll need:
- Comprehensive risk management systems with ongoing monitoring
- Technical documentation proving your system’s safety and reliability
- Data quality standards with auditable proof of training data integrity
- Automatic logging of all system decisions and operations
- Meaningful human oversight with the ability to intervene in real time
- CE marking with third-party conformity assessment
This isn’t just about adding a disclaimer to your website. High-risk systems require the kind of quality management systems typically seen in medical device manufacturing or automotive safety systems.
Understanding the Risk Categories
The AI Act operates on a four-tier, risk-based approach that’s more nuanced than many realize.
- Unacceptable Risk (Prohibited): These AI applications are banned outright – social scoring systems, manipulative AI targeting vulnerable groups, real-time biometric identification in public spaces (with limited exceptions for law enforcement), and emotion recognition in workplaces or schools.
- High-Risk (Heavily Regulated, But Allowed): This is where many businesses get caught off guard. As mentioned above, high-risk applications include resume screening and hiring tools, credit scoring and loan underwriting systems, medical diagnostic devices, safety systems in transport (autonomous vehicles, traffic management), educational assessment tools, law enforcement applications, and critical infrastructure management.
- Limited Risk (Transparency Required): These systems primarily involve AI that interacts directly with humans or generates content that could be mistaken for human-created material. This includes chatbots, virtual assistants, and AI systems that create synthetic media like deepfakes or manipulated images and videos. For these applications, the main regulatory requirement is transparency – users must be clearly informed when they’re interacting with an AI system rather than a human, or when content has been artificially generated.
- Minimal Risk: The majority of AI applications fall into this minimal risk category, which covers systems that pose little threat to fundamental rights or safety. These include common business tools like spam filters, inventory management systems, basic analytics platforms, recommendation engines for content or products, and automated customer service routing. For minimal risk systems, there are essentially no specific regulatory obligations under the AI Act beyond general requirements like AI literacy for staff.
If you fall into the unacceptable or high-risk categories, transparency alone isn’t going to cut it. If you fall anywhere else, the compliance requirements are manageable.
The Foundation Model Ripple Effect
The August 2025 GPAI deadline that just passed deserves special attention because it creates a ripple effect throughout the AI ecosystem. Foundation model providers like OpenAI, Anthropic, and Meta have had to implement new compliance measures, and those requirements flow downstream to their enterprise customers.
If you’re building on top of these models, you’ll need to understand your provider’s compliance posture and how it affects your own obligations. Some model providers may restrict certain use cases; others may pass compliance costs through in the form of higher pricing or new service tiers.
Companies should audit their AI supply chain now if they haven’t already. Document which models you’re using, how you’re customizing them, and what data flows through them. This inventory will be crucial for understanding your current GPAI obligations and preparing for the 2026 high-risk system requirements.
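To make that audit concrete, here is a minimal sketch of what one entry in such an inventory might capture. The field names and the example values are illustrative assumptions for demonstration, not a format prescribed by the AI Act or any regulator.

```python
from dataclasses import dataclass, field

# Minimal, illustrative AI supply-chain inventory record.
# Field names and the example entry below are assumptions,
# not a structure mandated by the AI Act.
@dataclass
class AIModelRecord:
    name: str                    # the model or AI service in use
    provider: str                # who supplies it (internal or third party)
    customization: str           # "none", "prompting", "fine-tuned", etc.
    data_flows: list[str] = field(default_factory=list)  # data sent through the model
    use_cases: list[str] = field(default_factory=list)   # where it appears in your product
    risk_category: str = "unclassified"  # your working AI Act classification

# Hypothetical example entry
inventory = [
    AIModelRecord(
        name="general-purpose LLM",
        provider="third-party API",
        customization="fine-tuned on internal documents",
        data_flows=["customer documents", "support tickets"],
        use_cases=["document summarisation"],
        risk_category="limited",
    ),
]

for record in inventory:
    print(f"{record.name}: {record.risk_category} risk, customization = {record.customization}")
```

Even a lightweight record like this forces the questions that matter for the 2026 deadline: how the model is modified, what data it touches, and which risk tier you believe it falls into.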
Getting Ahead of the Curve
The AI Act represents the world’s first comprehensive AI regulation, and we’re now in the middle of its phased rollout, with GPAI obligations in effect and the major high-risk system deadline arriving in August 2026. Companies that viewed GDPR as a burden missed the opportunity to turn privacy into a differentiator; don’t make the same mistake with AI governance.
The businesses that will struggle most are those caught unprepared when enforcement intensifies. Those building responsibly today will find that compliance enhances rather than hinders their AI strategy. There’s still time to get ahead of the curve—but the window is closing fast.