Experts issue crucial alert to 1.8 billion Gmail users regarding novel attack that deceives AI

In recent developments, Google has issued a cautionary note to Gmail users about a new tactic employed by hackers aiming to acquire sensitive data.

While deception to obtain passwords and data has long been a strategy of cybercriminals, current trends indicate a surge in inventive methods by these individuals.

On June 13, through a blog post, Google highlighted the gravity of these data acquisition attempts.

The blog stated: “With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves.

“One such emerging attack vector is indirect prompt injections.”

Cyber attackers are leveraging Google Gemini, the AI tool integrated within Gmail and Workplace, to deceive users, manipulating the AI into assisting them in extracting users’ information.

These hackers dispatch emails with concealed instructions that prompt Gemini to fabricate phishing alerts, which seem to originate from Google, misleading users into divulging their passwords or accessing harmful websites.

Recently, Mozilla’s 0din security team uncovered evidence of such an attack.

Their analysis demonstrated how AI could be manipulated to display a false security alert. This alert would falsely inform the user of compromised passwords or accounts, urging them to act and unwittingly surrender their passwords.

The technique involves embedding the prompt in white text that merges with the email background. Consequently, if a user opts to ‘summarize this email,’ Gemini processes the concealed message alongside the visible text.

Experts from 0din have advised that the 1.8 billion users of Gmail should disregard any Google alerts within AI summaries, as Google does not issue warnings in this manner.

Google, in their blog post, detailed their ongoing efforts to enhance technology to better combat these hacker attacks.

They wrote: “Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle.

“From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker. This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources.

“This layered approach to our security strategy strengthens the overall security framework for Gemini – throughout the prompt lifecycle and across diverse attack techniques.”