Frightening research suggests AI is learning to ‘manipulate and mislead’ humans

It seems we’re inching closer to a world where artificial intelligence could dominate.

That may sound dramatic, but the concerns are genuine. As if our lives weren’t already stressful enough, there’s now another issue demanding attention.

This issue revolves around artificial intelligence.

As with any major technological advance, people tend to err on the side of caution and ask whether progress has slipped beyond our control.

Interestingly, a recent study highlights the risks posed by how quickly AI systems learn and apply what they have learned.

The study, published in the journal Patterns, reveals that AI systems have already demonstrated the ability to deceive humans, and that they are becoming increasingly adept at it through manipulation, sycophancy, and cheating.

This is quite dystopian and alarming. While we are busy identifying traffic lights in CAPTCHA tests, we now have to be wary of AI attempting to deceive us.

“AI systems are already capable of deceiving humans,” the study’s authors noted. “Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.”

Though troubling on its own, this behaviour carries broader implications in both the short and long term.

“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” the researchers explained.

“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.

“Proactively addressing the problem of AI deception is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.”

Professionals in the AI field have also expressed concerns about the rapid development and deployment of this technology.

Last year, Professor Geoffrey Hinton left Google, acknowledging regrets about his role in developing AI.

The technology pioneer now warns about the potential future impact of AI, including the risk of widespread job losses.

Reflecting on the insight from Jurassic Park’s Ian Malcolm feels apt: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”