Watch on demand:
EU AI Act Webinar - Strategies to build compliant AI systems.

AI is accelerating. Regulation is catching up. Are you ready?

Are you:

✅ Considering AI for your business & IT operations?
✅ Building AI products right now?
✅ Deploying AI across your operations?
✅ Using third-party AI systems that touch EU users? 

The Act applies to you if you’re based in the EU or have customers in the EU. 
And when it comes to compliance, there’s no middle ground. It’s not optional, it’s essential. 


Watch the webinar 👇

Remove all compliance confusion and get clear action steps.

You need to comply with the EU AI Act if you build AI products, deploy AI systems, or use third-party AI tools serving EU users.

In this webinar from Nordcloud, you get the actionable insights to achieve just that – straight from Nordcloud’s own team of regulatory experts.  

Get the risk assessment methods, governance implementation strategies and technical controls needed to meet compliance requirements before the August 2026 deadline.

Here's what we'll cover.

EU AI Act overview

Risk categories, key roles and who the regulation affects in practice.

Risk classification process

How to assess and document risk levels for your specific AI use cases.

High-risk system requirements

Human oversight, transparency logs, and monitoring controls you need to deploy.

Nordcloud AI Landscape

Explore the areas you need to cover for AI and identify the implementation strategy that fits you best.

Initial steps for your AI approach

Get initial recommendations on how to start building a structured approach to AI.

Frequently asked questions about the EU AI Act.

What kind of tooling can I use to monitor AI Act compliance?

If you want to monitor compliance, you need to take into account both the technical part (security measures) and the organisational part. Tools like AWS Audit Manager, Azure Purview Compliance Manager and GCP Compliance Manager take you a long way on the technical part. The organisational part (like AI supervision or keeping documentation up to date) is hard to monitor with tooling and mostly requires procedures.
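
As a small illustration of the technical part, here is a minimal sketch (Python with boto3, assuming configured AWS credentials and assessments already set up in your account) that lists AWS Audit Manager assessments and the compliance framework each one tracks:

```python
# Minimal sketch: enumerate AWS Audit Manager assessments and show which
# compliance framework each tracks. Assumes boto3 is installed, AWS
# credentials are configured, and assessments already exist in the account.
import boto3

client = boto3.client("auditmanager")

token = None
while True:
    kwargs = {"nextToken": token} if token else {}
    resp = client.list_assessments(**kwargs)
    for item in resp.get("assessmentMetadata", []):
        print(f"{item['name']}: framework={item.get('complianceType', 'n/a')}, "
              f"status={item.get('status', 'n/a')}")
    token = resp.get("nextToken")
    if not token:
        break
```

Note that this only covers the technical side; the organisational controls still need procedural checks.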

How is the EU AI Act applicable to companies operating in the UK?

The EU AI Act applies within the EU, so initially it is not applicable to UK companies operating in the UK. However, the scope of the EU AI Act covers AI systems on the EU market. So if you are a UK company operating within the EU market, for that scope you need to comply with the EU AI Act. There are also some additions and special cases, for example when EU data is collected and processed by AI outside the EU and the output is used within the EU (to prevent circumventing the AI Act in that way).

What are other examples of General Purpose AI? Would it also apply if your system uses ChatGPT via API?

There are many General Purpose AI systems on the market beyond ChatGPT; Google Gemini and Microsoft Copilot are two examples you probably know. Taking ChatGPT as an example, OpenAI, as the developer of ChatGPT, must ensure that the Provider requirements are implemented. If you are an integrator using ChatGPT, you need to apply the requirements applicable to your AI Act role. For example, if you use ChatGPT as part of a high-risk AI system you develop (e.g. screening CVs in an app you built), you must implement the Provider requirements associated with high-risk AI systems, including a risk management system and documentation for your AI system, but also making sure the ChatGPT context carries over into your AI system.
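
To make the integrator scenario concrete, here is a minimal, hypothetical sketch of one such Provider duty: keeping a traceable record of what was sent to and returned by ChatGPT via API. It assumes the openai Python SDK and an OPENAI_API_KEY environment variable; the model name, prompts and log path are placeholders, and this is one building block of documentation, not the AI Act's prescribed mechanism.

```python
# Hypothetical sketch: call ChatGPT via the API and keep an append-only
# audit log of inputs and outputs, so a human reviewer can trace every
# screening result. Model name, prompts and log path are placeholders.
import json
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_cv(cv_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarise this CV against the job profile."},
            {"role": "user", "content": cv_text},
        ],
    )
    answer = response.choices[0].message.content
    # Append-only audit trail: one JSON line per screening decision.
    with open("ai_audit_log.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "model": response.model,
            "input": cv_text,
            "output": answer,
        }) + "\n")
    return answer
```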

Can you briefly touch on how the EU AI Act differs from the regulatory guidance being used in the UK?

There are similarities in the risk-based approach, transparency requirements and ethical principles. However, the UK does not (yet) have a single, unified AI law or regulation, and is regulating AI through existing supervisory bodies (like the Competition and Markets Authority or the Financial Conduct Authority). Overall, the UK seems to be taking a more innovation-driven direction, whilst the EU AI Act puts protecting fundamental rights first and aims to create a level playing field for AI innovators.

Do we know what the main players in this are going to do? Are they looking to comply or just remove the service from the EU?

Frankly, we have no direct view on that. But given that the EU AI Act is strict but fair, it should not be a major barrier to keep serving, or start serving, the EU market with AI services. The bigger players especially can be expected to have done their homework, and they should not have major difficulties demonstrating EU AI Act compliance. Overall, though, it will be a business decision balancing EU market potential against EU AI Act compliance costs.

Do regulatory bodies like BaFin and Gematik have specific guidelines that consulting companies need to be aware of before proposing this to clients in the FSS sector?

BaFin has actually published principles for using Big Data and AI (BDAI) with major overlap with the EU AI Act regarding risk management, fairness, documentation and outsourcing, for example. In addition, Gematik is more technically focused and has similar goals to the EU AI Act regarding the (technical implementation of) information security.

How do you propose addressing AI in a heavily regulated industry?

We see that EU regulations (like DORA, NIS2 and the AI Act) are all developing in a similar direction: risk-based, with similar requirements for higher-risk contexts (for example security incident management and security incident reporting). We recommend taking data protection, your applicable legislation and business continuity as the foundation. It is also important to align your internal policy-making teams (like privacy, security, compliance, risk, outsourcing and architects) on the AI topic, to prevent them from setting independent and potentially conflicting controls.

With respect to roles and responsibilities, how easy is it under the EU AI Act to tell if I am a Provider or a Deployer of an AI system? Are the rules clear?

In most situations this is quite clear: if you develop an AI system (build it yourself), you are a Provider, and if you use an AI system, you are a Deployer. If you both develop and use an AI system, you hold both roles. And if you integrate an existing AI system into a new service, as in the ChatGPT question above, you can also be both a Deployer and a Provider.

Who has to take responsibility if the model behaves in a biased way, for example when assessing CVs? And how can that be measured?

If we consider this specific example of CV screening, you should treat the AI system as a high-risk system. The Provider (developer) of the AI system must ensure that it is trained in a fair way to prevent bias, and the Deployer (user) must implement human oversight to ensure that decisions based on the system's outcomes are made fairly. So it is a split responsibility. And you measure through checks performed by humans (the AI system itself never holds accountability) – one example of such a check is sketched below.
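
As one hypothetical example of such a check, the sketch below computes the disparate impact ratio over logged screening decisions. This is a common fairness metric, not one mandated by the AI Act, and the column names and the 0.8 rule of thumb are illustrative assumptions.

```python
# Hypothetical bias check: disparate impact ratio across groups in logged
# CV-screening decisions. Column names are illustrative placeholders.
import pandas as pd


def disparate_impact(df: pd.DataFrame, group_col: str, selected_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest.
    A value well below 1.0 (0.8 is a common rule of thumb) flags
    possible bias that a human reviewer should investigate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates.min() / rates.max()


decisions = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m"],
    "selected": [0, 1, 0, 1, 1, 0],
})
print(f"Disparate impact ratio: {disparate_impact(decisions, 'gender', 'selected'):.2f}")
```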

Meet our speakers.

Sander Nieuwenhuis (LinkedIn)

GRC Advisory Global Lead
Nordcloud

Sander combines 20+ years in information security with a focus on practical governance, risk, and compliance. He advises on data, AI, and cloud sovereignty, risk management, and regulations like the AI Act, GDPR, DORA, and NIS2.

Allan Chong (LinkedIn)

Head of AI and Data
Nordcloud

Allan’s passion for and deep knowledge of data and AI have helped him guide businesses on their analytics journey – from initial concepts to full-scale deployment. His expertise includes data transformation and GenAI applications for production use.
