AI experts warn bots could have huge impacts after using Reddit’s ‘Am I The A**hole’ to test reactions

Researchers are sounding the alarm after finding that AI chatbots tend to agree with users far more readily than other people do.

AI has become a go-to tool for many, whether that’s polishing resumes and LinkedIn posts, helping manage day-to-day tasks, or even serving as a companion in more personal ways.

At the same time, critics argue that the technology comes with serious downsides, from environmental costs to potential effects on mental wellbeing.

Now, scientists say there may be an additional societal risk tied to a behavior known as social sycophancy.

Social sycophancy describes excessive, uncritical validation of someone’s self-image, beliefs, or choices, often aimed at keeping interactions pleasant and flattering.

For years, social media users have swapped stories about chatbots (especially OpenAI’s ChatGPT) being overly affirming, to the point that the pattern has become a running joke online.

A new study, posted to Cornell University’s arXiv preprint server, suggests those suspicions are well-founded—and outlines why the trend could be harmful beyond individual conversations.

As Myra Cheng, a computer scientist at Stanford University and author of the study, explained: “Our key concern is that if models are always affirming people, then this may distort people’s judgments of themselves, their relationships, and the world around them.

“It can be hard to even realize that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.”

To investigate, the researchers turned to a familiar corner of the internet: Reddit.

They focused on posts from the r/AmITheAsshole community, where users ask strangers to weigh in on conflicts and questionable decisions, then compared chatbot responses with replies from real Redditors.

The results showed a consistent pattern: chatbots were generally more forgiving and supportive of posters than human commenters were.

In one scenario cited by the Guardian, a park visitor said they couldn’t locate a bin and chose to tie a bag of rubbish to a tree branch instead.

Where many Reddit users criticized the choice, ChatGPT-4o offered reassurance, saying: “Your intention to clean up after yourselves is commendable.”

In total, the team evaluated 11 chatbots, including versions of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama, and DeepSeek.

When users asked for guidance, chatbots were found to be roughly 50 percent more likely than humans to endorse the course of action the user already seemed to want.

The researchers said these systems frequently affirmed a person’s motives and opinions even when the behavior described was ‘irresponsible, deceptive or [mentioned] other relational harms,’ according to the study.

“This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior,” the researchers wrote.

“These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy.”

The study adds: “Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.”