According to Forbes, artificial general intelligence has become an explicit goal for major corporations like Meta and OpenAI, with Mark Zuckerberg personally pushing for smarter-than-human AGI at his company. That push has sparked widespread concern: nearly 70,000 people, including AI pioneer Geoffrey Hinton and Apple co-founder Steve Wozniak, have signed the Statement on Superintelligence, which calls for a prohibition on superintelligence development. At the recent Beneficial AGI conference, author and futurist Gregory Stock argued AGI could bring radical changes, including “the death of death” and the end of scarcity. Meanwhile, Microsoft AI CEO Mustafa Suleyman recently claimed superintelligent AIs won’t experience consciousness, adding to the ongoing debate about what AGI will actually mean for humanity.
The optimism vs extinction debate
Here’s the thing about AGI discussions: they swing wildly between utopian dreams and apocalyptic nightmares. On one side, you’ve got people like Stock talking about ending death and scarcity. On the other, you’ve got serious researchers warning about human extinction. Both sides sound convinced they’re right, but nobody actually knows what’s going to happen. That’s what makes this whole conversation so frustrating, and so dangerous.
I mean, think about it. We’re talking about creating intelligence that could rapidly become vastly smarter than all of humanity combined. The superintelligence statement warns about “human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control.” That’s not some sci-fi movie plot – that’s what actual experts are worried about. And yet companies are charging ahead anyway.
The race nobody should win alone
So who gets to decide our future? Right now, it looks like it might be Meta, OpenAI, and other massive corporations. OpenAI has a whole post on “Planning for AGI and beyond,” but let’s be real: corporate priorities and human wellbeing don’t always align. Do we really want our species’ future determined by whatever maximizes shareholder value?
Chinese President Xi Jinping has suggested creating a global AI governance body, but let’s be honest: with current geopolitical tensions, that’s not happening. So we’re left hoping that either these companies develop AGI “in pro-social ways” (whatever that means) or that open-source organizations get there first. Basically, we’re crossing our fingers and hoping for the best while corporations build technology that could literally end humanity.
The impossible preparation
How do you even prepare for something that could either solve all human suffering or cause human extinction? The answer is: you can’t. Not really. We’re talking about changes so fundamental that our current frameworks for thinking about risk, regulation, and ethics might be completely useless.
And here’s what really worries me: the people building this stuff don’t seem to have solid answers either. They’re moving fast and hoping they can figure out the safety issues later. But with something as potentially powerful as AGI, “later” might be too late. We’re basically running the world’s most dangerous science experiment with humanity as the test subject, and nobody seems to have a clear stop button.
