McKinsey published a piece this week on sovereign AI — Sovereign AI: Building ecosystems for strategic resilience and impact — and it crystallised something we’ve been circling around in our own work for a while.
The headline number: 30–40% of all AI spending could be influenced by sovereignty requirements by 2030. That’s $500–600 billion globally. This isn’t a niche compliance concern anymore. It’s the water everything swims in.
We operate out of Dubai with nodes in LA, London, Paris, Singapore, and Sydney. Every system we build touches data that crosses borders. Every client engagement involves at least one conversation about where data lives, who controls it, and which jurisdiction governs it. This isn’t theoretical for us — it’s Tuesday.
What McKinsey gets right — and what most of the sovereign AI conversation gets wrong — is that sovereignty isn’t a binary. It’s not “sovereign or not sovereign.” It’s a spectrum with four distinct dimensions: territorial (where the data sits), operational (who runs the systems), legal (whose law governs them), and technological (whose stack they’re built on).
A system can be territorially sovereign (data stays in-country) but operationally dependent (a foreign provider manages it). Or legally sovereign (governed by local law) but technologically foreign (built on someone else’s stack). The combinations matter. And most organisations we work with haven’t thought through which combination they actually need.
The concept McKinsey calls “minimum sufficient sovereignty” resonates with how we think about data architecture. The idea: not everything needs to be sovereign. Classify workloads by regulatory exposure and sensitivity, then apply the right level of control to each tier. Defence and citizen data? High sovereignty. Marketing analytics? Probably fine on a global cloud. The mistake is treating it as all-or-nothing.
We see this mistake constantly. A client decides “all our data must stay in-region” and suddenly they can’t use half the tools that would make their team effective. Or they go fully global and then discover their regulated workloads are technically non-compliant. The answer is almost always a layered approach — and that requires actually understanding your data well enough to classify it.
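That tiering exercise can be sketched in a few lines of code. The tier names, workload fields, and rules below are illustrative assumptions for the sake of the sketch, not a compliance framework:

```python
from dataclasses import dataclass
from enum import Enum

class SovereigntyTier(Enum):
    HIGH = "high"          # in-region storage and operations, full audit trail
    MODERATE = "moderate"  # in-region storage, global tooling permitted
    OPEN = "open"          # any reliable global cloud

@dataclass
class Workload:
    name: str
    regulated: bool                  # e.g. defence, health, financial
    contains_citizen_data: bool
    personally_identifiable: bool

def classify(w: Workload) -> SovereigntyTier:
    """Illustrative tiering: regulatory exposure first, then sensitivity."""
    if w.regulated or w.contains_citizen_data:
        return SovereigntyTier.HIGH
    if w.personally_identifiable:
        return SovereigntyTier.MODERATE
    return SovereigntyTier.OPEN

# Defence and citizen data land in the high tier; marketing analytics stays open.
print(classify(Workload("citizen-records", True, True, True)).value)         # high
print(classify(Workload("marketing-analytics", False, False, False)).value)  # open
```

The point of writing it down, even this crudely, is that the classification becomes an explicit artefact you can review with a regulator, rather than a judgment call made per-project.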
This is where DataSpec’s architecture becomes relevant in ways we didn’t fully anticipate when we designed it. The unified data layer with bidirectional sync and data governance controls was built to solve a different problem — fragmentation. But the same architecture that lets you unify data across twelve systems also lets you enforce residency rules, audit trails, and access controls at the data layer. When a client asks “can we keep this segment of customer data in-region while the rest flows globally?” — that’s not a new feature request. That’s the existing architecture doing what it was designed to do.
There’s a practical concern McKinsey flags that we keep running into: sovereign AI solutions are perceived as 10–30% more expensive than global alternatives. And sovereignty alone rarely drives a vendor switch — enterprises still choose on price, performance, and reliability first. Sovereignty matters for a specific subset of workloads, not as a blanket feature.
This matches what we see. Nobody wakes up wanting sovereign AI. They wake up wanting their systems to work, their data to be secure, and their regulator to not call. Sovereignty is a constraint to satisfy, not a feature to sell. The organisations that handle it well don’t make sovereignty the point — they make it invisible. The data goes where it needs to go, stays where it needs to stay, and the team never has to think about it because the infrastructure handles the complexity.
That’s the design goal. Not “sovereign AI” as a badge. Sovereign-ready infrastructure as a default.
The thing that stuck with us most from the McKinsey piece is the failure mode they identify: mis-sequencing. Organisations invest heavily in infrastructure before demand and governance are ready. They build sovereign compute capacity without first figuring out what workloads actually need it, who governs what, or how the operating model works.
We see the same pattern at the company level, not just the nation level. A client buys into “we need AI” and starts procuring infrastructure before they’ve answered the basic questions: What data do you have? Where does it live? Who’s allowed to see it? What decisions do you actually want the system to make?
The parallel to McKinsey’s three-wave model is almost exact.
That sequencing is basically how we run a 21-day engagement. Week 1: figure out what actually matters. Week 2: prove it works. Week 3: build the system that scales it.
The sovereign AI version just takes longer and involves more stakeholders.
For anyone building AI systems that touch multiple jurisdictions — which, increasingly, is everyone — the takeaway isn’t “build everything locally.” It’s: understand which parts of your stack actually need sovereignty controls, apply them deliberately at those control points, and leave everything else open to the best available tools and partners.
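One way to make that deliberateness concrete is an explicit control-point map whose default is open. The component and control names here are hypothetical:

```python
# Illustrative: sovereignty controls applied only at named control points.
# Everything not listed defaults to open, deliberately.

CONTROL_POINTS = {
    "citizen-data-store": ["in-region-storage", "local-key-management", "full-audit"],
    "regulated-ml-training": ["in-region-compute", "access-review"],
}

def controls_for(component: str) -> list[str]:
    # The default is the design decision: no controls unless explicitly required.
    return CONTROL_POINTS.get(component, [])

print(controls_for("citizen-data-store"))   # full control set
print(controls_for("marketing-dashboard"))  # []: open to the best available tools
```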
Sovereignty isn’t about walls. It’s about knowing where your doors are and who has the keys.