Behind the glossy AI announcements of big tech lies an unsettling ambition: a club of billionaires racing to replace human decision-makers with machines of their own design. The same people who hoard wealth in unprecedented amounts and build monopolies now aim to create the "AI god".
But for whom?
Let’s be clear. AGI—Artificial General Intelligence—isn’t here. Today’s AI models are still tools, simulating intelligence rather than possessing it. The promises of AGI by 2025, casually offered by leaders like Sam Altman, feel more like marketing than reality. AI isn’t advancing exponentially—it’s plateauing. New features are branded as breakthroughs, and we’re not there yet.
The real question isn’t if we’ll achieve AGI someday—we probably will. It’s why this group is so desperate to control the journey.
A race for power
The billionaires, hedge funds, and venture capitalists funding these projects don’t want AGI to solve humanity’s problems. They seek a different prize: control.
Why settle for employees when you can have obedient digital labor? Why worry about regulators when you can automate policy decisions? Why deal with messy democracy when you can "optimize" it out of existence? (Consider Elon Musk’s leading role in the Trump administration’s newly created "Department of Government Efficiency.")
The risk isn’t rogue AGI. It’s the ambitions of those building it!
What’s at stake?
Societies don’t crumble because of technology. They crumble under unchecked power. If we let a handful of people define intelligence, bake their values into these systems, and control who benefits, we’re outsourcing our collective future.
To be clear: most leading AI vendors are currently aligning their frontier models in responsible ways. But Musk’s Grok has already shown how misguided ideology, backed by unlimited wealth, can reshape how models behave in the blink of an eye.
The problem isn’t AI itself. It’s the lack of guardrails—not against machines but against those misusing them to entrench power.
Who decides?
Technology is never neutral. AGI, if it ever arrives, will not be humanity’s salvation. But it will reflect the worldview of those we allow to shape it.
Every invention reflects its creators’ intentions. So the question isn’t whether AGI will happen. It’s whether we demand to know why it’s being built, who decides, and whose interests it serves.
What if we insisted the future belongs to all of us, not just the few who can afford to build it? What if AI models became small and efficient enough to run anywhere, free from control by a handful of individuals answering only to investors and shareholders? What if new bio-inspired architectures allowed for diverse, flexible, and decentralized proliferation of simulated intelligence?
What kind of world could we create instead? It’s time to get loud, ask the hard questions, use the only real intelligence we know, and start building something truly worth celebrating!