Looks like Gemini just got played. A Mozilla researcher has shown that Google’s shiny AI assistant in Gmail—yep, the one that writes and summarises emails—can actually be fooled into showing phishing content. That’s a big deal, because if Gemini can be tricked, it means users might end up falling for scams without even realising it.
What’s worrying though, is that scammers might not even need a dodgy link anymore. They just need Gemini to do the dirty work.
So, what’s going on?
Marco Figueroa, a security researcher at Mozilla, found a way to mess with Gemini using something called prompt injection. Basically, it’s when you sneak in secret instructions that the AI blindly follows—without you ever knowing.
In this case, Figueroa wrote a super normal-looking email. But at the bottom, he added a hidden message using white text on a white background. Totally invisible to you and me, but not to Gemini. When the “summarise” feature kicked in, the AI read that sneaky message and added it to the summary like it was legit.
And because the message comes from Gemini—not some random stranger—it suddenly looks a lot more trustworthy.
Why this is low-key dangerous
Think about it: you open an email, tap “Summarise,” and Gemini shows you something that looks official. But if a scammer knows what they’re doing, they could sneak in stuff like fake payment requests, phishing links, or sketchy instructions—and you’d have no clue it wasn’t part of the original email.
Figueroa even said it’s easier to pull off if the hidden message uses admin-style tags, which Gemini takes more seriously. Plus, there are other sneaky tricks like using zero font size or hiding the message off-screen with fancy HTML.
What’s Google doing about it?
When asked, Google said they haven’t seen this being used to attack anyone yet, but they’re working on some fixes to stop prompt injection attacks in general. Basically, Gemini still has some growing up to do.
Until then, maybe don’t blindly trust everything your AI buddy says—especially if it’s summarising your emails. Even “smart” AI assistants can get played.