
Navigating the Impact of the EU AI Act on Your Business: Feeling Prepared?

Taking proactive steps for EU AI Act compliance

Mikel Echegoyen / April 11, 2024

As discussed in previous posts, AI regulation and safety have been high on governments' agendas, debated in industry forums, and at the heart of the recent OpenAI board and governance fiasco.

The EU AI Act: The timer for adoption is ticking

After extensive legal, business, political, and technical commentary, as well as lobbying from public and private actors, the EU Parliament approved the EU AI Act on 13 March 2024. The act is now on its way to publication in the Official Journal of the EU between May and July; it enters into force 20 days later, kicking off the 12–24 month period during which different elements become enforceable (such as the rules for General Purpose AI, covering services like ChatGPT).

Key Provisions and Compliance Requirements: A risk-based approach

Touted as the first comprehensive law for AI globally, it will bring strict requirements for the complete AI ecosystem of providers, users, manufacturers, and distributors of AI systems in the EU market. The act follows other major EU digital legislation, such as the GDPR, the Digital Services Act (DSA), the Digital Markets Act, the Data Act, and the Cyber Resilience Act.

In a nutshell, the act introduces a risk-based approach, categorizing systems by risk level, with specific compliance requirements for each category. The prohibited category includes practices such as social scoring, exploiting vulnerable people, behavioral manipulation, and facial recognition systems in public spaces for law enforcement (with exceptions).

Implications for Businesses: Your AI journey can become very expensive

The act specifically defines requirements for General Purpose AI (foundation models) that pose systemic risks (those trained with more than 10^25 FLOPs of compute, for example GPT-4), covering transparency about technical documentation and training data, safeguards against unlawful output, energy consumption, and more. The act also carves out exceptions for research and proposes regulatory sandboxes so that SMEs and innovative businesses can develop and test in real-world conditions before placing solutions on the market, allowing for safe innovation.
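The systemic-risk presumption above boils down to a single compute threshold. As a minimal sketch (the function name and strict comparison are illustrative; the act's text speaks of training compute "greater than" 10^25 FLOPs):

```python
# Training-compute threshold for the general-purpose AI systemic-risk
# presumption, as stated in the EU AI Act.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a model's training compute exceeds the 1e25 FLOPs threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# A model trained with 2e25 FLOPs would fall under the presumption.
print(presumed_systemic_risk(2e25))   # prints True
print(presumed_systemic_risk(1e24))   # prints False
```

In practice the presumption is rebuttable and the Commission can adjust the threshold, so this check is only a first-pass triage, not a compliance determination.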

Penalties for noncompliance can reach 7% of worldwide turnover (or 35M€, whichever is higher) for prohibited systems, and 3% (or 15M€, whichever is higher) for high-risk AI systems, with further penalties for providing incorrect or misleading information to authorities. Enforcement will be overseen by national authorities designated by each EU Member State, as well as a centralized European AI Office for monitoring.
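The "whichever is higher" rule means the flat amount acts as a floor for smaller companies while the percentage dominates for large ones. A hedged sketch of that arithmetic (the helper function is hypothetical, not an official formula):

```python
def max_fine(turnover_eur: float, flat_eur: float, pct: float) -> float:
    """Fine under the act's rule: the higher of pct * worldwide turnover
    or the flat amount."""
    return max(turnover_eur * pct, flat_eur)

# Prohibited-practice tier: 7% of turnover or 35M EUR, whichever is higher.
big = max_fine(1_000_000_000, 35_000_000, 0.07)   # 7% of 1B EUR dominates
small = max_fine(100_000_000, 35_000_000, 0.07)   # flat 35M EUR floor applies
```

For a company with 1B€ turnover the percentage tier yields 70M€, while a 100M€-turnover company would still face the 35M€ floor rather than 7M€.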

Preparing Your Organization: Strategies towards Compliance and Adaptation

The act is not without criticism: it is unclear on specific definitions and on the approach to categorizing systems, creates ambiguity about which elements fall under compliance, adds compliance costs and burdens along with steep liability risks, and tries to regulate a technology that is nascent, rapidly evolving, and subject to change, raising concerns about slowing innovation and scaring investment away from the EU.


To prepare your organization, we would recommend considering the following:

  1. Arrange awareness sessions with leadership and the teams involved in AI-enabled services, covering the EU AI Act from all angles (legal, business, technical, operations, compliance), and formulate or update your AI strategy.
  2. Assess and categorize your AI solutions, services, products, and suppliers, and create an initial view of your posture, risk areas, likelihood, and mitigation strategies.
  3. Streamline your AI risk classification, ensure enterprise-wide AI policies and governance are in place, and consider adopting a dashboard with accountability, transparency, and compliance metrics.
  4. Adopt a "Know your model" policy: evaluate, create, or request model cards (or similar documentation) describing how models were trained and fine-tuned and how they are expected to perform. Track the transparency index for popular models/services and challenge suppliers and partners on their EU AI Act conformance plans and actions.
  5. Align your AI development best practices to be at least equivalent to those required in regulatory sandboxes, including trustworthy AI practices, reinforcement learning from human feedback (RLHF), MLOps, and red teaming, to ensure compliance, and conduct real-world testing and continuous service monitoring for unexpected behaviors.
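The "Know your model" documentation in step 4 can be sketched as a minimal record. The field names below are illustrative only; the EU AI Act does not prescribe this schema, and real model cards (in the sense popularized by the ML community) carry considerably more detail:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative 'know your model' record (hypothetical schema)."""
    name: str
    provider: str
    training_data_summary: str       # how the model was trained, per supplier disclosure
    fine_tuning_notes: str = ""      # any fine-tuning applied before deployment
    intended_use: str = ""           # the use cases the model is expected to perform
    known_limitations: list = field(default_factory=list)

# Example entry for a supplier-provided model (all values are placeholders).
card = ModelCard(
    name="example-llm",
    provider="ExampleCorp",
    training_data_summary="Public web text, per provider disclosure",
    intended_use="Customer-support summarization",
    known_limitations=["May produce incorrect facts", "English-centric"],
)
print(card.name)   # prints example-llm
```

Keeping such records per model makes it much easier to answer supplier-conformance questions and to re-assess risk classification when a model or its use changes.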

We are living in a fast-paced era of AI innovation and adoption, with leading companies competing aggressively and introducing services early and often, while governments worry about the risks to people from bad actors, carelessness, and the immaturity of the technology. If you feel your organization is walking on a razor's edge and needs a lifeline to make it across speedily and safely, do reach out.

Mikel Echegoyen
Head of Technology, Tietoevry Create

Mikel is a senior business and technology leader with broad experience in helping global customers develop and ship next-generation digital products and services. His passion is to collaborate and combine business, technology, and software to create value. At Tietoevry Create, he is responsible for driving technology leadership across the organization and with customers, including technology excellence for solutions, assets and capabilities.
