How to comply with the EU AI Act

Blog Post • 5 min read

If your organisation isn't already using AI as part of customer service or internal support, it will be soon.

No matter how AI is being used, you must make sure you know the risks and address them. For example, on a functional level, AI systems can suffer from bias, which can lead to unfairness and even discrimination. On a technical level, you should be careful about processing confidential or personal data in AI systems, as this can lead to data leakage or privacy law violations.

Recognising these risks, the EU is leading the way with AI regulation. The EU AI Act introduces rules around transparency, accountability and risk management.  

What is the EU AI Act?

The EU AI Act is a regulatory framework established by the European Union to ensure AI systems placed on the EU market are safe, ethical and aligned with fundamental rights.

It applies to AI providers, deployers and users within the EU, and to companies outside the EU whose AI systems affect people in the EU.

The EU AI Act became legally binding in August 2024, with compliance requirements being rolled out gradually through August 2026.

How do you comply with the EU AI Act?

EU AI Act compliance can be broken down into 3 stages:

  1. Organise – You must classify AI systems to understand if you’re allowed to run them and, if yes, what the implementation requirements are. To get this classification in place, you need a central team that understands the organisation’s AI systems, their usage and purpose, where they run and the information they use.
  2. Classify – The AI Act requires you to classify AI systems based on risk level (unacceptable risk, high risk, limited risk, minimal risk). The risk level determines what compliance measures you need to take. The expectation is that you can showcase how you performed the classification process. It’s best to have a single process, run or supervised by the central team.
  3. Implement – The AI Act lists the requirements you need to have in place before you’re allowed to use AI systems classified as high risk. This includes having documentation, logging and supervision, and registering with EU authorities. Implementation should be done centrally if possible (e.g. via a centrally managed AI platform and universal AI management processes) and locally if needed (e.g. risk analysis and human supervision per AI system).

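The classification stage above maps each AI system to one of four risk tiers, which in turn determine the compliance measures. As an illustrative sketch only (the obligation lists below are shorthand for this example, not the Act’s actual legal requirements), an internal inventory record might look like this:

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers defined by the EU AI Act.
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative shorthand only — the real obligations are set out in the Act.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskLevel.HIGH: [
        "technical documentation",
        "logging",
        "human oversight",
        "registration with EU authorities",
    ],
    RiskLevel.LIMITED: ["transparency notices to users"],
    RiskLevel.MINIMAL: ["no mandatory measures (voluntary codes of conduct)"],
}

@dataclass
class AISystem:
    """One entry in a central AI inventory."""
    name: str
    purpose: str
    risk_level: RiskLevel

    def compliance_measures(self) -> list[str]:
        # The risk tier determines the measures to implement.
        return OBLIGATIONS[self.risk_level]

# Example: a customer service chatbot classified as limited risk.
chatbot = AISystem("support-bot", "customer service chat", RiskLevel.LIMITED)
print(chatbot.compliance_measures())
```

A record like this also gives the central team the audit trail the Act expects: you can show, per system, how the classification was performed and which measures follow from it.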
Business benefits of EU AI Act compliance

  • Reduce legal and reputational risks – Non-compliant organisations face penalties of up to €35 million or 7% of global turnover. They’ll also become headline news.
  • Strengthen trust with customers and partners – As all companies will be using AI, there will be questions about responsible use. When you are AI Act compliant, you can showcase your robust approach with credible evidence.
  • Mitigate business risks associated with AI use – Being compliant means you have effective AI risk management in place. This means you can identify and address risks like discrimination due to AI system bias or lack of human oversight.
  • Competitive advantage and scalability – When you comply with the AI Act, you can use your AI services across the EU. And as the AI Act is arguably the strictest AI legislation globally, you will also have a robust approach for scaling your AI services worldwide.

The 3 compliance steps to take now

Implementing AI – and complying with the AI Act – is a major undertaking. You can compare it with how cloud emerged 20 years ago and needed to be structured and integrated into the organisation. It’s a journey, and here are the first steps:

  1. Inventory current and potential AI use – Document current and planned AI initiatives. Inventory the data AI systems are using.
  2. Control AI data usage – Make sure no one in the organisation is entering personal data or confidential corporate information into public AI solutions (like ChatGPT), to prevent privacy law violations or leaks of company information. Communicate clear basic guidance on what AI tools may and may not be used for.
  3. Responsibilities – Make sure you have someone responsible for each AI service and the data within that system (product owner), someone accountable for organising AI usage in the organisation (Chief AI Officer) and someone responsible for setting AI ethics and usage rules (governance).

How to plan your EU AI Act compliance roadmap

Once you’ve taken those 3 initial steps, it’s time to plan a robust compliance approach. This is inextricably linked to your overall AI approach.

  1. Create an AI strategy – to define what you want (and need) to do with AI.
  2. Set up AI governance – Set roles and responsibilities aligned with IT, cloud and data governance. Use an AI governance model to set up a central team that supports the organisation to run AI services in a responsible way.
  3. Start AI training – The AI Act requires a level of AI literacy within the organisation. Plan training to improve awareness around responsible AI use and to ensure data confidentiality.
  4. Build out your AI team – To support the organisation with using generic AI services and building custom AI services. This usually involves an AI platform team for tech and an AI enablement team for support.

Next step: Free AI Act workshop

The required organisation, classification and implementation processes aren’t easy – but they are essential.

Nordcloud’s advisory team has developed a 1-hour session to demystify and kick-start AI Act compliance planning. It covers key topics like:

  • AI Act compliance roles
  • Risk classification for AI systems
  • Requirements for high-risk systems
  • Initial compliance steps

It’s a useful next step in ensuring you take a best-practice approach – and make compliance as efficient and cost-effective as possible.

Contact us now to arrange your free EU AI Act workshop.


Sander guides organisations through effectively implementing cloud-based governance, risk, and compliance strategies.
Sander Nieuwenhuis
GRC Advisory Global Lead