
Can you clone Gemini by asking it enough questions? Google says attackers tried

A Google report reveals that one actor used 100,000 prompts in an attempt to work out the chatbot's reasoning abilities.
Google Gemini logo on smartphone stock photo (Edgar Cervantes / Android Authority)
TL;DR
  • A Google report claims one campaign sent over 100,000 prompts to Gemini in an attempt to clone the model.
  • Attackers tried to coax Gemini into revealing more details about its internal reasoning abilities.
  • Google says it detected the behavior, blocked associated accounts, and strengthened safeguards against misuse.

Copying a successful product has been a practice for as long as tools and technologies have existed, but chatbots are a special case. Competitors can’t pull them apart, but they can ask the AI as many questions as they like in an attempt to figure out how it works. According to a new report from Google, that’s exactly how some actors have been trying to clone Gemini. In one case, Google says a single campaign sent more than 100,000 prompts to the chatbot, in what it describes as a large-scale model-extraction attempt.

The findings come from Google’s latest Threat Intelligence Group report (via NBC News), which outlines a rise in so-called “distillation” attacks. In simple terms, that means repeatedly querying a model to study how it responds, then using those answers to train a competing system. Google says this activity violates its terms of service and amounts to intellectual property theft, even though the attackers are using legitimate API access rather than breaking into its systems.
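To make that pattern concrete, here is a minimal, purely illustrative sketch of what a distillation pipeline looks like in principle. The function names (query_model, collect_distillation_data) are hypothetical placeholders, not Gemini’s actual API; the point is simply that repeated querying yields prompt/response pairs that could later train a competing “student” model.

```python
# Conceptual sketch of a "distillation" pipeline, under the assumptions noted above.
# query_model() is a hypothetical stand-in for any hosted chatbot API call.
import json


def query_model(prompt: str) -> str:
    """Hypothetical placeholder: send a prompt to a hosted model and return its reply."""
    raise NotImplementedError("Replace with a real API call to the target model.")


def collect_distillation_data(prompts: list[str], out_path: str) -> None:
    """Repeatedly query the model and record prompt/response pairs as training
    data for a competing model -- the pattern the report describes."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_model(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```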


One campaign highlighted in the report specifically targeted Gemini’s reasoning capabilities. While Gemini doesn’t normally expose its full internal “chain of thought,” Google says attackers tried to coerce it into revealing more of its detailed reasoning. The scale of the prompts — over 100,000 in this case — suggests an effort to replicate Gemini’s ability to reason across different tasks and even in non-English languages. Google says its systems detected the activity in real time and adjusted protections to prevent internal reasoning details from being exposed.


While Google declined to name suspects, it says most of the extraction attempts appear to have come from private companies and researchers seeking a competitive edge. John Hultquist, chief analyst at Google’s Threat Intelligence Group, told NBC News that as more businesses build custom AI systems trained on sensitive data, similar cloning attempts could become more common across the industry.

Beyond model extraction, the report also outlines other ways Gemini has been misused. Google describes instances of threat actors experimenting with AI-assisted phishing campaigns and even malware that calls Gemini’s API to generate code on the fly. In each case, Google says it disabled associated accounts and updated safeguards to limit further abuse.

