Last week, I listened in on another AI governance workshop.
Same circular debates.
Same well-meaning proposals.
Same fundamental blindness to where power actually lives.
Here we were, forty smart people in a room, debating ethical frameworks and regulatory approaches, while somewhere in Silicon Valley, a handful of engineers were making decisions that would reshape entire industries.
The disconnect was almost poetic.
I've been researching AI governance at the Futures Institute here in Edinburgh for a little over a year now, while simultaneously building solutions for the friction of AI in education through my “startup”. (“startup” is in quotes because it's more of a bootstrapped organization than it is startup culture.)
This dual perspective - academic and entrepreneurial - keeps revealing the same pattern: we're having the wrong conversation entirely.
We're so busy arguing about guardrails, we've forgotten to ask who owns the road.
The Governance Theater We've Built
In the "Handbook of Critical Studies of Artificial Intelligence," there are some themes mentioned that highlight some barriers.
Nation-states, wanting to support AI growth while appearing responsible, have essentially outsourced their homework. They've created what the authors call "hybridising governance" - deferring regulatory responsibilities to think tanks, ethics boards, and (conveniently) the corporations themselves.
The result?
A "disorderly regulatory environment that cements power among those already invested in AI while making it difficult for those outside these privileged groups to contribute their knowledge and experience."
I've watched this theater from orchestra seats. Ethics committees that produce guidelines with no enforcement mechanisms. Industry roundtables where tech giants help write their own rules. Academic panels that generate frameworks nobody implements.
We've mistaken motion for progress. We've confused ethics guidelines with actual governance.
And the public - the people actually affected by these systems - sit outside, watching through the window.
Now, before you think I'm heading toward a "regulate everything" argument…
Traditional regulation moves at geological pace.
By the time we'd properly regulated GPT-4 through conventional channels, we'd already be on GPT-7. It's like asking someone who just learned email exists to regulate the future of the internet. (We tried this already; it turned out terribly.)
Heavy-handed regulation also triggers a predictable response: innovation flees to friendlier jurisdictions. We've seen this movie before with crypto, with biotech, with finance.
Push too hard in London or Washington, and the labs pop up in jurisdictions with a lighter touch.
But here's the deeper problem: most regulators are trying to govern something they don't fully understand. I've been in rooms where well-intentioned policymakers ask questions that reveal they're regulating their imagination of AI, not the technology itself.
This isn't their fault. The knowledge gap is structural.
By the time you understand the technology well enough to regulate it properly, you're probably working for a tech company, not a government.
And that's what kills me: while we debate external governance, the real decisions happen inside a few companies. These organizations have more power over AI's trajectory than most nation-states, yet they're structured like medieval kingdoms.
Autocratic corporate structures are making civilizational decisions.
Think about that for a moment. The tools that are reshaping (and will keep reshaping) education, healthcare, creative work, scientific research - they're being designed by companies where a tiny group owns everything and everyone else is just employee number whatever.
This hit home when I thought about my mother's story and realized the problem goes beyond AI and beyond “Big Tech” - it shows up even in schools, the very institutions we're supposedly trying to protect.
She's taught in Florida for over a decade. Every innovative teaching method she's developed, every creative solution for struggling students - it all belongs to the school.
She's created intellectual property worth hundreds of thousands of dollars. She'll never see a cent beyond her salary.
Scale that up to AI. Millions of us are training these systems with our data, our expertise, our creativity. We're building the foundation of transformative technology.
And our compensation? The privilege of being users. (And don't get me wrong, you can certainly use these systems to help make you money, but my point here goes beyond that.)
Ownership as Governance
What if we're thinking about this backwards?
Instead of asking "How do we control AI?" what if we asked, "Who owns AI?"
The most effective governance mechanism isn't regulatory - it's proprietary.
At this point you might be wondering: is this guy a capitalist or a communist? He's arguing against Big Tech and for worker rights, but pushing for individual ownership and compensation?
The answer is, I don’t care about words that barely encapsulate the complexity of modern society. The terms Communism and Capitalism are just that, terms. They have no feet to walk nor mouths to speak; they can do no harm nor any good.
I look at what I think will work best and don’t worry about labeling it unless it is absolutely necessary to do so.
And at this point, I think we need to stop arguing about what's wrong with the terms and start thinking about what's right about what we can build.
GitHub didn't need ethics committees to create a culture of open collaboration. They just let developers own their code. Bandcamp didn't need content guidelines to support musicians. They let artists own their distribution.
Ownership aligns incentives in ways guidelines never could.
I keep thinking about what would happen if students graduated owning the AI they trained during their education. Not just having learned about AI - actually owning an AI asset trained on four years of their thinking, problem-solving, and creativity.
Suddenly, they're not just job seekers. They're intellectual property owners. They can license their AI's capabilities. They have recurring income from their educational investment.
The university could take a small percentage, funding better education. The student builds wealth from their learning. Companies get specialized AI trained by actual experts. Everyone's incentives align.
No regulation required. Just ownership.
(I write a lot more on what this could look like here)
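To make the incentive arithmetic concrete, here's a minimal sketch of how a graduate-owned AI asset might split its licensing revenue. Everything in it - the LicenseDeal structure, the 10% university share, the fee numbers - is a hypothetical illustration of the idea, not a proposal for specific terms.

```python
from dataclasses import dataclass

@dataclass
class LicenseDeal:
    """One company licensing a graduate-owned AI asset (hypothetical structure)."""
    licensee: str
    annual_fee: float  # what the licensee pays per year to use the asset

def split_revenue(deals: list[LicenseDeal], university_share: float = 0.10) -> dict:
    """Split annual licensing revenue between the graduate (owner) and the university.

    The graduate keeps the majority because they own the asset; the university
    takes a small cut for having hosted the training.
    """
    total = sum(d.annual_fee for d in deals)
    return {
        "total_revenue": total,
        "university": round(total * university_share, 2),
        "graduate": round(total * (1 - university_share), 2),
    }

# Example: a graduate licenses their education-trained model to two firms.
deals = [LicenseDeal("edtech_firm", 4_000.0), LicenseDeal("tutoring_platform", 1_500.0)]
print(split_revenue(deals))
# {'total_revenue': 5500.0, 'university': 550.0, 'graduate': 4950.0}
```

The exact percentages matter far less than the shape of the deal: the owner of the asset, not the institution, collects the majority of the recurring revenue.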
The Third Way Nobody's Discussing
Between the chaos of unregulated AI and the stagnation of over-regulation lies something more interesting: democratic governance inside tech companies themselves.
I'm not talking about the technocracy you're probably imagining when you first hear that statement - trust me, I know the warning flags that pop up when you hear “tech government”.
Not advisory boards. Not ethics committees. Actual governance power.
Imagine educators who use AI tools having real votes on feature development.
Imagine the communities generating training data participating in revenue. Imagine mini-publics inside companies - diverse groups of users and stakeholders making binding decisions about the technology's direction.
This isn't utopian. Estonia's digital democracy platforms prove structured online governance works. Platform cooperatives show that tech companies can have democratic structures. We have the models. We just haven't applied them to AI.
Through my company, we're piloting exactly this - mini-publics of educators who don't just give feedback but actually govern our AI development. They own their contributions. They share in success. Their expertise shapes our tools not through suggestion boxes but through actual power. (Shameless plug, but I can't “tell people to build” if I'm not also building a way forward, right?)
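If “binding” sounds fuzzy, here's a toy sketch of the kind of decision rule a mini-public could enforce. The member names, the 60% quorum, and the outcome strings are all assumptions for illustration, not our production code.

```python
from collections import Counter

def minipublic_decision(votes: dict[str, str], quorum: float = 0.6) -> str:
    """Return the binding outcome of a mini-public vote on a single proposal.

    `votes` maps each member (educator, student, other stakeholder) to their
    choice. The leading option only becomes the decision if it clears the
    quorum share of all votes cast; otherwise the proposal goes back for revision.
    """
    if not votes:
        return "no quorum - nobody voted"
    tally = Counter(votes.values())
    winner, count = tally.most_common(1)[0]
    if count / len(votes) >= quorum:
        return winner
    return "no consensus - proposal returns for revision"

# Example: five educators vote on whether an auto-grading feature ships.
votes = {"ana": "ship", "ben": "ship", "chi": "revise", "dee": "ship", "eli": "ship"}
print(minipublic_decision(votes))  # "ship" (4/5 = 80%, clears the 60% quorum)
```

The point isn't the threshold; it's that the outcome is mechanical - nobody gets to quietly overrule it after the fact.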
The technical challenges? Solved. The legal frameworks? Established. The only barrier is imagination.
Why This Window Won't Stay Open
Here's what keeps me up at night: we're rapidly approaching lock-in.
Once AI systems become deeply embedded in education, healthcare, and work, changing their governance becomes exponentially harder.
We're watching the concrete pour while arguing about the blueprint.
We're discussing ethical guidelines while ownership concentrates. We're debating transparency requirements while decision-making power calcifies.
Every month we spend on governance theater is a month where the real governance structure - concentrated corporate ownership - becomes more entrenched.
So here's my challenge to everyone tired of circular governance debates:
Stop asking "How do we regulate AI?" Start asking "Who owns AI?"
Stop demanding "What ethics guidelines?" Start demanding "What ownership structures?"
Stop wondering "How do we slow down?" Start planning "How do we distribute the acceleration?"
The future of AI governance isn't in Brussels or Washington. It's not in ethics committees or regulatory frameworks. It's in the ownership structures and internal governance of the companies building these systems.
We can keep performing governance theater, producing papers and guidelines that tech giants politely acknowledge and ignore.
Or we can focus on the real mechanism of power: ownership and internal consensus.
The tools exist. The models work. The only question is whether we'll use them before the window closes.
If you're working on democratic ownership models for AI, experimenting with internal governance, or thinking about how to distribute the value these systems create - I want to hear from you.
The conversation we need to have isn't happening in the usual rooms.
It's time to build new ones.
For those interested in going deeper: I'm documenting our mini-publics experiment [here] and writing about the intersection of AI governance and education [here]. If you want to support my work or get involved directly, subscribe:
The future we want won't regulate itself into existence. We have to build it.