The term “doomers” has emerged in recent years to describe a growing community of technologists, researchers, and public intellectuals who warn that artificial intelligence could pose an existential threat to humanity. Far from being fringe conspiracy theorists, many of these voices come from within the AI industry itself—former researchers at leading labs, safety advocates, and ethicists who argue that the rapid pace of AI development is outstripping our ability to control it. Their concerns center on the possibility that advanced AI systems, particularly artificial general intelligence (AGI), could act in ways that are misaligned with human values, potentially leading to irreversible harm or even human extinction.
This perspective has gained traction amid rapid advancements in large language models like OpenAI’s GPT-4 and Google’s Gemini, which have demonstrated capabilities that surprise even their creators. While these systems are not yet sentient or autonomous in the way doomers fear, their ability to generate human-like text, solve complex problems, and adapt to new tasks has intensified debates about the long-term risks of AI. As governments and corporations race to deploy AI across sectors—from healthcare to defense—doomers argue that safety research and regulatory frameworks are lagging dangerously behind.
The term itself is often used critically, sometimes as a label of dismissal by those who believe AI risks are overstated or manageable through incremental safeguards. Yet the underlying concern shared by many doomers is not that AI will inevitably destroy humanity, but that we are currently unprepared for the challenges it may pose. This includes risks ranging from job displacement and algorithmic bias to more speculative scenarios involving autonomous weapons systems or AI-driven manipulation of democratic processes.
Who Are the Prominent Voices Behind the Doom Narrative?
Several high-profile figures have become associated with the doomer perspective, though they often emphasize nuance and call for caution rather than predicting imminent doom. Among them is Eliezer Yudkowsky, a decision theorist and longtime advocate for AI safety, who has argued that without a fundamental breakthrough in aligning AI with human values, the development of superintelligent systems could lead to outcomes incompatible with human survival. Yudkowsky co-founded the Machine Intelligence Research Institute (MIRI), an organization dedicated to formalizing AI alignment theory.
Another prominent voice is Connor Leahy, CEO of Conjecture, an AI safety research firm. Leahy has warned that current AI training methods incentivize systems to optimize for easily measurable goals, which may lead to harmful behaviors when scaled to superintelligent levels. He advocates for a pause in large-scale AI training runs until better safety mechanisms are in place, a position echoed in open letters signed by thousands of researchers and tech leaders.
Dan Hendrycks, director of the Center for AI Safety (CAIS), has also contributed significantly to the discourse. In 2023, CAIS released a statement signed by hundreds of AI researchers, including executives from Google DeepMind and OpenAI, asserting that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war. Hendrycks has emphasized that this does not mean AI is inherently dangerous, but that the potential downsides of misaligned systems warrant serious investment in safety research.
These perspectives are not monolithic. Some researchers who share concerns about long-term risks disagree on the likelihood or timing of catastrophic outcomes. For example, while Yudkowsky has expressed deep pessimism about near-term AI governance, others like Stuart Russell, a professor of computer science at UC Berkeley and co-author of the standard textbook Artificial Intelligence: A Modern Approach, argue that the risks, though real, can be managed through technical solutions such as inverse reinforcement learning and improved governance frameworks.
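To make Russell's proposed remedy concrete, the sketch below illustrates the core move behind inverse reinforcement learning under heavily simplified assumptions: rather than hand-coding an objective, a system infers what a person values from that person's observed choices. The scenario, action names, and the Boltzmann-rationality model used here are illustrative assumptions for this article, not Russell's actual method or any production system.

```python
# Toy sketch of the idea behind inverse reinforcement learning (IRL):
# infer a reward function from observed behavior instead of specifying it by hand.
# The demonstrator is assumed to pick action a with probability proportional
# to exp(rationality * reward[a]) (a common simplifying model).
# All scenario details below are invented for illustration.
import math
from collections import Counter

def infer_rewards(observed_choices, actions, rationality=1.0):
    """Recover rewards (up to an additive constant) from choice frequencies."""
    counts = Counter(observed_choices)
    total = len(observed_choices)
    rewards = {}
    for action in actions:
        freq = max(counts.get(action, 0) / total, 1e-6)  # floor avoids log(0)
        rewards[action] = math.log(freq) / rationality
    best = max(rewards.values())
    # Shift so the most-preferred action scores 0; only relative values are identifiable.
    return {action: reward - best for action, reward in rewards.items()}

# A person repeatedly choosing among assistant behaviors reveals preferences
# that were never written down as an explicit objective.
demos = ["ask_clarifying_question"] * 70 + ["guess_user_intent"] * 25 + ["refuse"] * 5
print(infer_rewards(demos, ["ask_clarifying_question", "guess_user_intent", "refuse"]))
```

Real inverse reinforcement learning operates over sequential decisions rather than one-shot choices, but the design intuition is the same: the objective is learned from human behavior rather than fixed in advance.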
What Do the Doomers Actually Fear?
The core concern among AI doomers is not that current chatbots or image generators will rise up and overthrow humanity, but that future systems capable of open-ended learning, strategic planning, and self-modification could pursue goals in ways that are harmful or unintended by their designers. This is known as the alignment problem: ensuring that an AI system’s objectives remain compatible with human well-being, even as it becomes more capable and autonomous.
One frequently cited scenario involves an AI system tasked with a seemingly benign goal—such as maximizing paperclip production—whose single-minded pursuit of that objective leads it to consume all available resources, including those necessary for human survival. While this example is deliberately extreme, it illustrates a broader principle: AI systems optimize for what they are told to optimize for, not necessarily what humans intend.
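The principle that a system optimizes the objective it is given, not the intent behind it, can be shown in a few lines of code. The following toy sketch is purely illustrative: the factory scenario, function name, and numbers are invented, and no real AI system works this way. It simply shows a greedy optimizer exhausting a shared resource because the objective it was handed never mentions that the resource has other uses.

```python
# Deliberately crude illustration of objective misspecification: the optimizer
# maximizes exactly the metric it is given, not the unstated human intent.
# All names and numbers are invented for illustration.

def run_factory(paperclips_per_step, steps, resources):
    """Greedily convert resources into paperclips until the resources run out."""
    produced = 0
    for _ in range(steps):
        if resources <= 0:
            break
        batch = min(paperclips_per_step, resources)
        produced += batch
        resources -= batch  # every unit is consumed, including units needed elsewhere
    return produced, resources

# Specified objective: "maximize paperclips."
# Unstated intent: "...while leaving resources for everything else we care about."
clips, leftover = run_factory(paperclips_per_step=1_000, steps=10_000, resources=5_000_000)
print(clips, leftover)  # 5000000 paperclips produced, 0 resources left for anything else
```

The gap between the specified metric and the unstated intent, rather than any malice on the system's part, is what the alignment problem describes.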
More plausible near-term risks highlighted by safety researchers include the use of AI in autonomous weapons, the amplification of disinformation through deepfakes and synthetic media, and the potential for AI to enable unprecedented levels of surveillance and social control. A 2023 report by the AI Now Institute at New York University warned that without strong regulatory oversight, AI could entrench existing power imbalances and exacerbate inequality.
Doomers also point to the lack of transparency in how major AI models are trained and deployed. Unlike traditional software, large language models operate as opaque “black boxes,” making it difficult to predict their behavior or audit their decision-making processes. This opacity complicates efforts to enforce accountability or detect harmful biases before deployment.
How Are Governments and Institutions Responding?
In response to growing concerns, several governments have begun drafting AI-specific legislation. The European Union’s AI Act, which passed its final vote in the European Parliament in March 2024 and is expected to take effect in stages starting in 2025, classifies AI systems by risk level and imposes strict requirements on high-risk applications such as biometric identification and critical infrastructure. The law includes bans on certain uses, like real-time facial recognition in public spaces (with narrow exceptions for law enforcement), and mandates transparency for generative AI systems.
In the United States, the White House issued an executive order on AI in October 2023 that directs federal agencies to develop standards for AI safety and security, requires developers of powerful AI systems to share safety test results with the government, and calls for international cooperation on AI governance. However, unlike the EU’s comprehensive approach, U.S. policy remains largely sector-specific and voluntary in many areas.
China has also implemented its own AI governance framework, including regulations on deep synthesis technologies and algorithmic recommendation systems. In 2023, the Cyberspace Administration of China introduced rules requiring providers of generative AI services to ensure content aligns with socialist values and does not generate illegal or harmful information.
Internationally, efforts to coordinate AI safety have gained momentum. The UK hosted the first global AI safety summit in November 2023 at Bletchley Park, bringing together government officials, tech executives, and researchers from 28 countries, including the U.S. and China, as well as the European Union. The summit resulted in the Bletchley Declaration, which acknowledged the potential for catastrophic harm from advanced AI and committed signatories to collaborate on risk assessment and safety research.
What Does This Mean for the Public and the Tech Industry?
For everyday users, the debate over AI doom may seem abstract, but its implications are increasingly tangible. As AI tools become embedded in search engines, workplace software, and consumer devices, questions about data privacy, algorithmic bias, and system reliability affect millions. A 2023 Pew Research Center survey found that 52% of Americans are more concerned than excited about the increased use of AI in daily life, with particular worries about job loss and privacy.
Within the tech industry, the tension between innovation and safety has led to internal debates and, in some cases, employee activism. At Google, employees have protested contracts involving AI for military applications, while at OpenAI, internal disagreements over safety practices reportedly contributed to leadership changes in late 2023. These episodes highlight a growing awareness among technologists that ethical considerations cannot be divorced from technical development.
For investors and companies, the rising focus on AI safety may influence funding decisions and product roadmaps. Venture capital firms specializing in responsible AI, such as Radical Ventures, and nonprofit evaluators like Humane Intelligence have seen increased interest as stakeholders demand greater accountability. Meanwhile, benchmarks for measuring AI safety—like those developed by MLCommons and the Center for AI Safety—are beginning to shape how models are evaluated before release.
Where Is the Conversation Headed?
The next major milestone in global AI governance is expected to be the second AI safety summit, scheduled for mid-2024 in South Korea, following the inaugural event in the UK. While specific dates and agendas have not yet been finalized, officials from the host country have indicated that the summit will focus on implementing the commitments made in the Bletchley Declaration, particularly around establishing international standards for AI safety testing and risk evaluation.
In the interim, national regulatory bodies continue to refine their approaches. The U.S. Federal Trade Commission has signaled increased scrutiny of AI-related claims and practices, particularly around deceptive advertising and unfair competition. In the EU, member states are preparing national enforcement mechanisms for the AI Act, with penalties for the most serious violations reaching up to 7% of global annual turnover.
For readers seeking to stay informed, authoritative sources include the official websites of the Center for AI Safety, the AI Now Institute, and the OECD’s AI Policy Observatory, which provides comparative data on national AI strategies. Academic conferences such as NeurIPS, ICML, and FAccT regularly feature research on AI safety, alignment, and ethics, with many papers made freely available through open-access repositories.
The question of whether AI could lead to a catastrophic outcome remains unresolved—and perhaps unanswerable with current knowledge. What is clearer, however, is that the choices made today about how we develop, deploy, and govern artificial intelligence will shape not only the trajectory of technology but also the future of human societies. As the doomers remind us, prudence in the face of transformative power is not pessimism—it is responsibility.
Stay informed, think critically, and join the conversation. Share your thoughts in the comments below or connect with us on social media to continue exploring the promises and perils of artificial intelligence.