At the heart of every empire lies an ideology, a belief system powerful enough to justify expansion, no matter the contradictions. For European colonial powers, it was Christianity paired with resource extraction. For today’s AI industry, it’s the promise of artificial general intelligence (AGI) to “benefit all humanity.”
And according to journalist and bestselling author Karen Hao, OpenAI has become its chief evangelist.
In her new book, Empire of AI, Hao argues that the only way to grasp the scope of OpenAI’s influence is to view it as an empire, one with extraordinary economic and political power. “They’re terraforming the Earth,” she told TechCrunch’s Equity. “They’re rewiring geopolitics and all of our lives.”
The Ideology of AGI
OpenAI describes AGI as a highly autonomous system that can outperform humans at most economically valuable tasks. In theory, AGI could elevate humanity by increasing abundance, accelerating scientific discovery, and driving progress.
But these promises remain abstract, while the costs are tangible: massive energy demands, oceans of scraped data, escalating compute requirements, and untested systems released into the world.
Hao argues that speed became OpenAI’s guiding principle, trumping both safety and efficiency. Rather than pursuing novel algorithms or efficiency gains, the company opted for the “cheap” path: scaling existing methods with ever more data and compute. This winner-takes-all framing forced rivals like Google and Meta to follow suit.
The result: AI research increasingly shaped by corporate agendas, not independent science.
The Cost of Scale
The financial stakes are staggering. OpenAI projects a cash burn of $115 billion by 2029. Meta is set to spend up to $72 billion on AI infrastructure in 2025. Google expects $85 billion in capital expenditures the same year.
Meanwhile, the promised societal benefits lag behind. Instead, harms are mounting, from job losses and wealth concentration to the psychological fallout of AI chatbots. Behind the scenes, low-wage workers in places like Kenya and Venezuela handle disturbing content for just $1–$2 an hour, moderating the toxic data fueling these models.
Hao stresses that progress doesn’t need to come at this cost. Google DeepMind’s AlphaFold, for example, used focused datasets to solve protein folding, a breakthrough with profound implications for medicine and disease research. Unlike AGI-scale models, AlphaFold avoided mass exploitation and environmental tolls.
“These are the types of AI systems we actually need,” Hao said.

A Race That Reshaped the World
Alongside the AGI mission, the industry has leaned heavily on the “race with China” narrative. But instead of liberalizing the world, Hao argues, Silicon Valley’s approach has done the opposite, concentrating power and leaving the U.S.–China gap largely unchanged.
The one clear winner? Silicon Valley itself.
The Profit–Mission Dilemma
OpenAI’s unusual structure (part non-profit, part for-profit) has blurred how the company measures its impact on humanity. With its deep ties to Microsoft and recent moves edging toward a public offering, critics worry the lab has conflated financial success with human benefit.
Two former safety researchers told TechCrunch they fear OpenAI now sees consumer enjoyment of ChatGPT as proof it’s fulfilling its mission. Hao sees this as dangerous, a belief system so consuming it blinds leaders to evidence of harm.
“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people,” Hao said, “the mission continues to paper all of that over.”
The Bottom Line
AI’s current trajectory wasn’t inevitable. It was shaped by choices: speed over safety, scale over science, ideology over evidence. As the industry barrels ahead, Hao’s warning is clear: unchecked belief in AGI risks building an empire whose costs outweigh its promises.
Source: TechCrunch