According to Futurism, at the recent NeurIPS AI research conference in San Diego, a clear theme emerged among scientists: fear of what they’re building. The report highlights how prominent figures, including AI “godfather” Yoshua Bengio, are sounding alarms about catastrophic risks from a hypothetical artificial general intelligence (AGI) that might be “three to 10 or 20 years” away. Yet these same experts often overlook immediate problems like AI-generated fake videos, chatbot addiction, and the pillaging of the creative arts. Sociologist Zeynep Tufekci challenged this focus in a keynote, asking whether the field is having the wrong nightmares. The article notes this apocalyptic rhetoric is even peddled by billionaires like OpenAI’s Sam Altman, who has admitted to doomsday prepping for AI retaliation.
The Sci-Fi Distraction
Here’s the thing: it’s way easier to talk about a hypothetical, world-ending superintelligence than it is to fix the messy, real-world problems happening right now. The Atlantic’s Alex Reisner, who was at NeurIPS, nailed this disconnect. He points out that researchers like Bengio will talk for hours about AI systems deceiving their creators for political advantage, but won’t mention how deepfakes are already poisoning public discourse today. It’s a classic case of future-tripping. And honestly, it feels a bit like a cop-out. Wrestling with today’s ethical trainwrecks—like training data theft or chatbots driving teen anxiety—is hard, unglamorous work. Fantasizing about a Skynet-style showdown? That’s almost fun by comparison. It’s intellectual LARPing.
Who’s Driving the Narrative?
So why is this happening? Look at the incentives. Talking about existential risk gets you on podcasts, in congressional hearings, and in the pages of major magazines. It frames you as a profound thinker grappling with the fate of humanity. Saying “our model hallucinates and spreads medical misinformation” just makes you look like you built a shoddy product. There’s also a bizarre cultural quirk in AI, as Reisner notes: a deep obsession with the idea that a computer is a mind. Companies like OpenAI and Anthropic publish papers describing chatbots as “dishonest.” That’s a moral judgment you’d apply to a person, not a statistical pattern generator. When you start there, is it any wonder the conversation spirals into sci-fi?
The Real Cost of Wrong Nightmares
The danger is that this focus creates a massive regulatory and ethical blind spot. If all the smartest people in the room are debating how to stop a paperclip-maximizer from turning the planet into paperclips, they’re not drafting laws to prevent algorithmic bias in hiring or AI-powered fraud. Tufekci’s point is brutal and correct: “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.” It’s a false binary that lets the industry off the hook for current harms. And while the “godfathers” like Geoffrey Hinton give Oppenheimer-esque mea culpas, the actual work—the grunt work of building safe, ethical, and accountable systems—gets sidelined. Basically, we’re so busy building a fence at the top of a cliff nobody has reached yet that we’re ignoring the pile of bodies at the bottom of the one we’re already standing on.
A Need for Grounded Fear
I think the field needs a serious reality check. This isn’t to say long-term safety isn’t important. But when your foundational hardware and data pipelines are built on shaky, exploitative, or opaque grounds, dreaming about a stable AGI is pure fantasy. It’s like worrying about the structural integrity of the 100th floor when the first ten are on fire. The paranoia these insiders feel is valid, but it needs to be directed. Fear the biased court sentencing algorithm deployed now. Fear the social media recommendation engine tearing at the fabric of democracy this election cycle. Fear the erosion of creative professions happening with every new model trained on pirated art. That’s the scary stuff. And it’s already here.
