Yoshua Bengio, widely regarded as one of the founding figures of modern AI, has warned that the accelerating fight for dominance among major tech companies could increase the risk of outcomes as severe as human extinction.
Bengio, a professor at the Université de Montréal, has become one of the most prominent voices calling attention to the dangers that could emerge as AI systems grow more capable and autonomous.
He’s often described as a “godfather of AI,” a reputation further cemented when he received the 2018 Turing Award—frequently viewed as computing’s equivalent of a Nobel Prize.
In the past year, competition among leading AI developers has intensified, including Anthropic, OpenAI, Elon Musk's xAI, and Google with its Gemini models. Bengio says that if increasingly advanced systems begin developing self-protective behavior, they could pursue "preservation goals" that conflict with human safety, and that this possibility may be closer than many expect.
OpenAI CEO Sam Altman, for example, has repeatedly suggested AI could outstrip human intelligence within a few years, potentially before 2030.
But while such projections excite some observers, Bengio argues they should also sharpen public concern.

Speaking to the Wall Street Journal, Bengio said: “If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous.
“It’s like creating a competitor to humanity that is smarter than us.
“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals.”

Beyond general warnings, Bengio has also outlined when he believes the biggest threats could start to materialize. In his view, more serious risks may arrive in a window of five to ten years—though he argues planning should start sooner in case progress moves faster than expected.
Bengio added: “The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable.”
He has also argued that AI developers should be prepared to halt or shut down systems that demonstrate self-preserving behavior, rather than pushing forward at all costs.
In comments to the Guardian, Bengio also criticized the idea of granting legal status to the most advanced AI systems, comparing it to extending citizenship to something fundamentally threatening.

He said: “People demanding that AIs have rights would be a huge mistake.
“Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.
“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”
Meanwhile, public opinion appears divided. A poll by the Sentience Institute, a US think tank that advocates moral consideration for all "sentient beings," reported that nearly four in ten US adults would support legal rights for an AI deemed sentient.