AI responds to OpenAI CEO’s chilling warning that artificial intelligence will surpass humans by 2030

Sam Altman’s major prediction about when AI will surpass human intelligence has drawn a response from an unusual source – AI itself.

OpenAI CEO and ChatGPT creator Sam Altman has repeatedly spoken candidly about the risks and uncertainties tied to the technology his company is building.

But this time, the debate took a more unusual turn: the claims were put under the microscope by an AI system.

Altman—who has also shared that he’s used ChatGPT while navigating life with a newborn—has echoed warnings heard from other prominent figures in the sector, from rival executives to long-time researchers, about how quickly AI could begin to challenge humanity’s role and capabilities.

In an interview with Die Welt last year, Altman said he’d be “surprised” if by 2030 “we don’t have extraordinarily capable models that do things that we ourselves cannot do.”

He also suggested the leap beyond current systems may come sooner than many expect, arguing his newest model already outperforms him in some ways. Altman said:

“In another couple of years, it will become very plausible for AI to make, for example, scientific discoveries that humans cannot make on their own. To me, that’ll start to feel like something we could properly call superintelligence.”

When asked to weigh in on this type of claim, ChatGPT focused on what it described as the central sticking point: the definition of “superintelligence.” It characterized Altman’s outlook as “plausible in some areas” and “overstated in others,” saying the real answer depends on “what people mean by ‘superintelligence.’”

The system noted that recent progress in AI has been substantial, and that it isn’t “unreasonable” to anticipate models that feel startling compared with what people are used to today.

At the same time, it argued that the discussion needs more precision—especially when drawing lines between what humans can do unaided, what humans can do with AI assistance, and what might be beyond human understanding entirely.

It broke the nuance into three broad interpretations:

AI speeding up human science: something it said is already happening, with AI tools helping researchers reach results that would have been far harder or slower without them.

“AI helps with protein folding, materials discovery, code generation, literature review, hypothesis ranking, and experiment design,” it said.

AI enabling discoveries humans wouldn’t reach as quickly: a scenario the model described as realistic on relatively short timelines if humans and AI systems work in tandem.

“This also seems plausible within a few years. Humans plus AI systems may outperform humans alone dramatically,” said the AI.

AI producing knowledge humans cannot even understand: described as the most extreme reading of “superintelligence,” and not something current systems can demonstrate.

“This is the strongest interpretation of ‘superintelligence’, and we are nowhere near proving that,” it added.

In practical terms, the chatbot framed today’s AI as strongest at crunching information, spotting patterns, and assisting with analysis—rather than functioning like a fully autonomous reasoning agent with long-term memory, independent goals, and self-directed innovation.

It also pointed out that AI could eventually become a transformative instrument in fields such as cancer research, but suggested that breakthroughs on that level would still rely on years of progress in biology and related sciences.

Ultimately, the AI’s conclusion was that Altman may be directionally correct—depending on what is meant by “superintelligence”—and that the real test of his 2030 prediction will be whether future models merely amplify human capability or move definitively beyond it.