Navigating EU AI Act compliance: Which scenario(s) are you?
AI is everywhere, whether it’s helping you pick a movie on your favourite streaming platform, answering questions in a website chatbot, polishing your report for that big meeting or writing code. AI tools are powerful and useful if they’re used and governed properly. Left unchecked, they can create risks that spiral out of control.
The EU doesn’t want a dystopian scenario. That’s why it acted early. And that’s why the EU AI Act exists.
The EU AI Act in a nutshell
The EU AI Act is a regulatory framework designed to make sure AI systems on the EU market are safe, ethical and respect fundamental rights. It entered into force in August 2024, and its obligations are phasing in through 2026 and beyond. If you operate in the EU or if your AI serves EU customers, you need to comply.
Why comply with the EU AI Act?
Ignoring the AI Act can cost you – literally. Penalties can reach €35 million or 7% of global annual turnover, whichever is higher. And the financial hit isn’t the only risk. You could lose customer trust, damage your reputation or even face legal action from partners.
On the flip side, early compliance can be a competitive advantage. It builds trust and positions you as a leader. For example, we’re working with a financial sector client (one of the most regulated industries) on a pilot for AI-powered risk assessments that will deliver a huge efficiency advantage over their competitors.
And here’s another reason to act now: the EU AI Act is one of the strictest frameworks in the world. If you meet its requirements, you’re well-prepared for almost any other market. Europe is setting the global tone for AI governance, and you can be part of that story.
Start by classifying your AI systems
The EU AI Act might look like a massive challenge, but you can break it down into practical steps that make it easier to digest:

The first step is to figure out whether a system actually falls within the AI Act’s scope. Key questions you need to answer are:
- Is the system actually an AI system?
- Will it be used on the EU market?
- Is it a general-purpose AI (GPAI) system?
- Which risk classification is applicable?
The risk classification gives you a clear starting point for what you need to do (sketched in code after this list):
- If the AI system threatens fundamental rights (social scoring, for example), it falls into the unacceptable-risk category and is prohibited
- If the AI system can significantly affect people’s lives (recruitment or credit scoring, for example), it’s likely high risk – and you must implement a set of requirements before August 2026
- If the AI system interacts with people or generates content (a chatbot, for example), it’s probably a limited-risk system, and you must make sure people know AI is involved
- For other, minimal-risk AI systems, product recommendations for example, you’re advised to adopt a code of conduct, but no mandatory requirements apply
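To make that decision flow concrete, here’s a minimal Python sketch. The yes/no questions and field names are our own simplifications of the Act’s tests – a proper classification needs legal review, not a script:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # significant impact on people's lives
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # voluntary code of conduct only

def classify(system: dict) -> RiskTier | None:
    """Simplified screening based on the questions above (illustrative only)."""
    if not (system["is_ai_system"] and system["on_eu_market"]):
        return None  # outside the Act's scope
    if system["threatens_fundamental_rights"]:
        return RiskTier.UNACCEPTABLE
    if system["significant_impact_on_lives"]:
        return RiskTier.HIGH
    if system["interacts_or_generates_content"]:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

chatbot = {
    "is_ai_system": True,
    "on_eu_market": True,
    "threatens_fundamental_rights": False,
    "significant_impact_on_lives": False,
    "interacts_or_generates_content": True,
}
print(classify(chatbot))  # RiskTier.LIMITED -> tell users AI is involved
```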
Because the EU AI Act is new legislation, there will be debate about how to determine risk levels and apply measures. So whatever you do, document everything: decisions, risk ratings, operating procedures. If an auditor or the authorities come knocking, a clear paper trail will save you time and stress.
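What might that paper trail look like? Here’s a minimal, hypothetical record structure – the fields are illustrative, not a template mandated by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """One entry in the compliance paper trail (illustrative fields only)."""
    system_name: str
    risk_tier: str                  # e.g. "limited"
    rationale: str                  # why this tier was chosen
    decided_by: str                 # the accountable owner
    decided_on: date
    procedures: list[str] = field(default_factory=list)  # linked policies/SOPs

record = ClassificationRecord(
    system_name="Website support chatbot",
    risk_tier="limited",
    rationale="Interacts with customers; no decisions affecting legal rights.",
    decided_by="Head of AI Governance",
    decided_on=date(2025, 3, 1),
    procedures=["Transparency notice shown at chat start"],
)
```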
Your next compliance steps
Don’t wait. The best time to start is now, and here’s how:
- List your current and planned AI use cases – Create a single register of systems and use cases, so you know what’s in use today and what’s coming
- Assign an accountable owner – Avoid shadow AI by putting someone in charge of AI across your organisation, setting policies and approving use cases
- Secure your everyday AI – Don’t paste internal, personal or confidential data into public tools. Provide a safe, controlled alternative
- Check the compliance timeline and set a roadmap – High-risk systems must be compliant by August 2026, so start implementing AI Act requirements now (see the sketch after this list)
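As a starting point, the register and the roadmap check can be very simple. Here’s a hypothetical sketch that flags high-risk systems still lacking compliance work against the August 2026 deadline – in practice this would live in your governance or asset-inventory tooling:

```python
from datetime import date

HIGH_RISK_DEADLINE = date(2026, 8, 2)  # August 2026 milestone; confirm exact dates for your case

# Hypothetical register entries -- keep one row per system or use case.
register = [
    {"name": "CV screening assistant", "risk_tier": "high", "compliant": False},
    {"name": "Website support chatbot", "risk_tier": "limited", "compliant": True},
    {"name": "Product recommender", "risk_tier": "minimal", "compliant": True},
]

def overdue_high_risk(register: list[dict], today: date) -> list[str]:
    """Return high-risk systems that still need compliance work."""
    flagged = [s["name"] for s in register
               if s["risk_tier"] == "high" and not s["compliant"]]
    days_left = max((HIGH_RISK_DEADLINE - today).days, 0)
    for name in flagged:
        print(f"{name}: {days_left} days until the high-risk deadline")
    return flagged

overdue_high_risk(register, date.today())
```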
A best-practice approach
We break compliance into 3 streams:
- Governance – Strategy and architecture guardrails, clear roles and responsibilities, and AI literacy at scale
- Processes – Risk assessment workflows, service and data management, and operational practices aligned with the Act
- Technology – A secure AI platform with integrations and services on your cloud(s), built for security and scalability

To prioritise, we map these streams onto 4 scenarios.
Where are you on the AI maturity curve? 4 scenarios to guide your next move
Every organisation is somewhere on the AI journey, but not everyone starts from the same point. That’s why we created the Nordcloud AI Landscape: a practical way to map where you are and what to tackle first.
Scenario 1: Platform first

You’re at the beginning. AI is on the agenda, but you don’t have a secure, centralised way to run it. The priority? Build a solid foundation – an AI platform that gives you control, security and scalability from day one.
Scenario 2: Governance next

You’ve launched AI initiatives, but now the EU AI Act is knocking. Before you scale, you need governance: policies, risk assessments and clear rules on when and how to use AI.
Scenario 3: Operational excellence

You have tools and policies, but day-to-day operations are messy. People aren’t sure what’s allowed, documentation is patchy and AI literacy is low. The focus here is embedding AI into your ways of working – training teams, setting up an AI team and ensuring human oversight.
Scenario 4: Integration at scale

You have multiple AI solutions running, but they’re not connected. The next step? Integrate AI into data management systems, link APIs and make sure your architecture supports growth without creating compliance gaps.
Not sure which scenario fits you?
Most organisations are a mix of 2 or 3 scenarios. So take a step back and look at where you are today.
- Do you have a solid platform?
- Are your compliance processes clear?
- Is AI part of everyday work?
If you’re unsure where to start, we can support you. Sometimes an outside perspective helps you see the gaps and opportunities you might miss internally.
Let us help you with your compliance journey.
Contact us to book an AI Landscape session, and we can help you map where you are and where you need to be for EU AI Act compliance (aligned with broader business goals).