What is sovereign AI? Models, control, and why AI changes sovereignty | Ladybug Unplugged | Episode 3.

Ladybug Unplugged – Sovereign Cloud Series | Episode 3

Sovereign AI is not just about where AI runs — it is about what models you run, who controls them, and how much visibility you have into their behaviour.

As organisations embed AI deeper into core business processes, sovereignty increasingly shifts from infrastructure to foundation models, training data, and decision‑making control.

In this episode, we discuss:

  • What sovereign AI means and how it extends the sovereign cloud discussion
  • Why sovereignty is affected by the AI models you use, not just where they run
  • The difference between closed, proprietary models and open‑weight models
  • How model ownership, training transparency, and data flows affect control
  • The role of AI regulation, including the European AI Act, in sovereignty decisions
  • Why AI agents and automation introduce new business continuity and resilience risks

This conversation highlights why AI must be treated as a sovereignty concern, especially when AI systems begin to make or influence critical business decisions.

Who should watch

CIOs, CTOs, CISOs, data and AI leaders, architects, and compliance teams responsible for AI strategy, governance, and risk.

🔗 Learn more about digital sovereignty, AI control, and real‑world cloud strategies:
👉 https://nordcloud.com/services/cloud-migration/digital-sovereignty/

Lysa Banks:
There’s another topic I don’t think we talk about enough when we discuss sovereignty. We focus a lot on where things run, where data resides, and who has access — but not enough on what we’re actually running.

For me, that topic is AI.

Sovereign cloud is important, but I’m increasingly concerned about sovereign AI. It’s not just about where AI runs; it’s about which foundation models you use, who built them, and who owns them.

When we look at AI models, I tend to group them into two categories. On one side, you have closed, proprietary models where you don’t know how the model is built or what the weights are. On the other side, you have open‑weight models, where you can inspect them, understand them better, and even influence how they behave.

Are you seeing more discussion about AI in the sovereignty context — not where it runs, but what models organisations are using?

Sander Nieuwenhuis:
Those discussions are starting to emerge, mainly because AI is becoming a structural part of organisations. You can’t allow AI to grow in an unstructured, ad‑hoc way.

As soon as AI becomes embedded in core processes, governance becomes necessary. In Europe, the AI Act also plays a role, as it introduces new rules and responsibilities around AI usage.

Organisations now need to decide not only where AI runs, but whether they train and operate their own models or rely on pre‑trained models provided by third parties.

Lysa Banks:
And that decision probably depends on how the AI is used.

Sander Nieuwenhuis:
Exactly. For general‑purpose tasks, pre‑trained models can be useful. But if AI is used for sensitive workloads or intellectual property‑driven activities, organisations need to be very careful.

With closed models, you don’t know how they were trained, how decisions are made, or where data may end up. That creates both control and risk issues.

There’s also a distinction between where a model is trained and where it is run. An organisation might train models using public data in the cloud, but run them locally when processing confidential information.
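That split can be sketched as a simple routing policy. The sketch below is illustrative only — the endpoint URLs, the tag names, and the classification rule are assumptions for the example, not part of the discussion above:

```python
# Sketch: route inference by data sensitivity, assuming a self-hosted
# open-weight model for confidential workloads and a third-party
# cloud-hosted model for everything else. All names are illustrative.

CONFIDENTIAL_MARKERS = {"customer_record", "credit_assessment", "trade_secret"}

def select_endpoint(data_tags: set[str]) -> str:
    """Confidential workloads stay on a locally run open-weight model;
    general-purpose workloads may use a cloud pre-trained model."""
    if data_tags & CONFIDENTIAL_MARKERS:
        return "https://llm.internal.example/v1"    # self-hosted, open weights
    return "https://api.cloud-provider.example/v1"  # third-party, pre-trained

print(select_endpoint({"credit_assessment"}))  # → https://llm.internal.example/v1
print(select_endpoint({"public_docs"}))        # → https://api.cloud-provider.example/v1
```

The point of the pattern is that the sensitivity classification, not the model choice alone, decides where data is allowed to flow.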

Lysa Banks:
If I put on my tinfoil hat for a moment, one concern I see is the rise of AI agents. Organisations are increasingly giving AI systems control over business processes and infrastructure.

If sovereignty is about control, and we’re handing control to AI models, then AI has to be part of the sovereignty discussion.

Sander Nieuwenhuis:
I agree. I’m currently working with a Dutch financial organisation that is using AI for credit risk assessments.

AI can replace large teams of people and improve efficiency. But if something goes wrong and the AI system needs to be switched off, there’s a significant operational gap.

That’s why the business continuity aspect of AI is critical. Sovereign AI is not just about governance — it’s about ensuring organisations can continue to operate if AI systems fail or need to be withdrawn.
