
Google's ready to pay up to $20,000 if you can break Gemini very, very badly

You'll have to do a lot worse than just embarrass Gemini to get these bug bounties.
October 6, 2025

Google Gemini logo on a smartphone. Edgar Cervantes / Android Authority
TL;DR
  • Google shares new bug bounty details for AI with its Vulnerability Reward Program.
  • The most dangerous exploits can score their finders up to $20,000.

It’s not nice to be a bully. At least, not to people. But for a certain segment of AI “fans,” there’s nothing more fun than ganging up on an AI chatbot and trying to get it to do basically anything other than what’s intended. That can include hallucinating wildly incorrect answers, or even convincing the bot to ignore restrictions that its creators have tried to enforce. And now if you’re good enough at breaking AI bots in epic fashion, Google might just be willing to pay you for it.


We’re not talking about anything as simple as dropping a line in your resume that causes a recruiter’s AI tools to include a recipe for flan in their correspondence with you. Or, at least, not nearly so harmless.

Google’s got a dedicated new AI Vulnerability Reward Program intended to compensate security researchers for uncovering the most dangerous classes of AI bugs. That means stuff that tricks Gemini into messing with your Google account, or even lets attackers extract information about how Gemini itself works. For the purposes of this program, the consequence has to be a lot bigger than “this makes Gemini look silly.”

But for researchers who do manage to uncover such impactful exploits, the potential for compensation is big: the most severe vulnerabilities, affecting flagship AI products like Search and the Gemini app, can pay up to $20,000.

Sure, those kinds of vulnerabilities may not have the viral potential of resume flan recipes, but they’re much more important to Gemini’s stability and reputation — it’s one thing to trick Gemini into advising you to eat rocks, and quite another to get it to include a phishing link in one of its Search AI Mode responses. As frequent Gemini users ourselves, we find it reassuring to know that Google’s trying to make sure the good guys are working as hard as the bad guys at identifying just that type of exploit.
