
Study finds Perplexity uses AI-generated content as sources, amplifying hallucinations

Would you trust an AI search engine after this discovery?

Published on June 27, 2024

[Image: Perplexity AI on the Play Store. Credit: Calvin Wankhede / Android Authority]
  • A study has found that Perplexity regularly cites AI-generated content as sources for its answers.
  • Perplexity is an AI search engine that generates answers after searching for information online, similar to Google search.
  • A different investigation recently alleged that Perplexity subverts restrictions on web scraping in order to improve its chatbot.

AI-powered search engine Perplexity is facing scrutiny and criticism after a study emerged alleging that the tool routinely cites AI-generated blog posts as sources for its responses. The report, compiled by AI detection service GPTZero, claims that users encounter “second-hand hallucinations,” or information backed by AI-generated sources, after just three prompts on average.

The study found that while Perplexity cites sources, it often doesn’t take the content’s authenticity into account. This means the chatbot will regurgitate ChatGPT-written articles and AI hallucinations as fact. Worse still, it presented content from social media websites as fact: in one example, a prompt for “cultural festivals in Kyoto, Japan” returned a plausible-sounding answer, but Perplexity cited a single AI-generated article from LinkedIn as its only source.

GPTZero says it can identify AI-generated content with 97% accuracy. Forbes also ran the offending content through a different AI detection algorithm and arrived at the same conclusion. In a statement to the publication, Perplexity’s Chief Business Officer Dmitri Shevelenko said the search tool assigns trust scores to different domains, similar to how Google’s PageRank is believed to work: the algorithms downrank and exclude low-quality websites.

Shevelenko also confirmed that Perplexity uses internal AI detection algorithms to flag offending content. However, he didn’t offer any explanation as to why the tool relied so heavily on AI-generated content, instead claiming that the study doesn’t represent a “comprehensive evaluation” of Perplexity’s sources.

Perplexity was founded by Aravind Srinivas, who worked as an AI researcher at OpenAI, and is currently valued at over $2 billion. The company has attracted sizable investments from Nvidia and Amazon founder Jeff Bezos.

The aforementioned study is not the first controversy to hit the AI startup in recent weeks. Just last week, Wired claimed that Perplexity had scraped the publication’s website despite the site’s engineers’ attempts to block it. After testing the chatbot across multiple prompts, Wired also found that it would sometimes make educated guesses based on article URLs instead of directly summarizing the pages. Perplexity’s founder and CEO told the publication that these conclusions reflected “a deep and fundamental misunderstanding of how Perplexity and the Internet work.”

