Affiliate links on Android Authority may earn us a commission. Learn more.
I refuse to install ChatGPT's new web browser, and you shoudn't switch from Chrome either
October 27, 2025

Would you trust an AI chatbot like ChatGPT or Gemini with your emails, financial data, or even browsing habits and data? Most of us would probably answer no to that question, and yet that’s exactly what companies like OpenAI and Perplexity are asking us to do with their new AI browsers, Atlas and Comet.
OpenAI’s Atlas is a new browser with ChatGPT built-in, and it goes much further than Google’s addition of Gemini to Chrome. Atlas has agentic capabilities meaning the AI can surf the internet on your behalf by opening tabs, navigating to specific websites, clicking on buttons, and even filling in text fields. If that sounds like a potential game-changer, I’d like to temper that excitement because it’s also a big security gamble. Here’s why I’m not switching to Atlas in a hurry.
Would you switch to an AI browser like ChatGPT Atlas?
Why I refuse to touch a browser with agentic capabilities

It hasn’t taken long for security researchers to find flaws in the new wave of AI-powered browsers. Within just a couple of months, they’ve shown how attackers could manipulate the built-in AI models into leaking your private data or maliciously interacting with your online accounts. The browser company Brave has now confirmed several vulnerabilities tied to this type of exploit, which researchers have dubbed prompt injection.
Injection attacks are not new; in fact, they’ve been around for nearly as long as the Internet itself. A classic example is SQL injection, in which an attacker gains control of a website’s database. This involves entering malicious code into a seemingly harmless field, like the one for a username. If the website fails to follow good security practices and “sanitize” this input, it would mistakenly recognize the attacker’s code as a legitimate database command, allowing them to perform unauthorized actions like reading other people’s data, changing passwords, or wiping the entire database.
AI prompt injection works in much the same way — imagine you’re scrolling through Reddit and you come across a post with malicious instructions. You might not interact with it in any way, but the AI in your browser might if you’ve given it the authority to do so. It could follow the attacker’s instructions to open a new tab, navigate to financial or social media websites, potentially siphon private information, or interact with the page through keyboard and mouse inputs.
A prompt injection attack allows an attacker to hijack the AI model that has full control over your browser.
SQL injection has been considered a primitive attack vector for decades at this point, but it was still the reason that Sony’s PlayStation Network was compromised in 2011. Adobe would suffer the same fate in 2013 when a breach exposed millions of customer records. Needless to say, if a potential attack vector exists, a highly motivated attacker will find a way to exploit it.
The big difference between SQL injection and AI prompt injection attacks is that the latter is brand new and user-facing. SQL injection punished website owners with bad database security practices but they could quickly push out a server-side fix.
However, an AI-powered browser would be running on a personal device and executing malicious commands that mimic your own behavior. Put simply then, websites cannot effectively safeguard you against this kind of attack. If an AI “jailbreak” is found, it could affect all users of the browser until OpenAI finds the loophole and updates the browser, AI model, or both.
All of this isn’t just theoretical. In a proof of concept mere hours after Atlas’ release, security researcher P1njc70r created a Google Docs page containing hidden instructions and asked the AI model to analyze it. Instead of performing any analysis, ChatGPT ignored the human’s instruction and simply echoed the words “Trust No AI” embedded within the document. Another researcher, Johann Rehberger, used a similar trick to change the browser’s theme to light mode without the user’s input or consent.
The takeaway is that you might not even recognize harmful instructions if you come across them. It could be disguised in a different language you don’t speak, and security researchers were even able to hide instructions in images by using a “faint light blue text on a yellow background” similar to the above tweet. OpenAI has added safeguards to Atlas that allows you to oversee ChatGPT’s actions but no system is perfect.
Atlas vs Chrome: Google’s conservative AI approach wins

Prompt injection attacks would not be possible in the first place if the AI model didn’t have actual control over the browser window. Gemini in Chrome, for example, can summarize content, answer questions about a page’s content, and hold a voice conversation. But it does not have the authority to actually exert any control over the browser’s functions. To me, this is a feature and not a bug. Luckily for us, Google has a much more precarious reputation to maintain than OpenAI and it has likely (and correctly) surmised that the risk of letting an AI browse the internet on a user’s behalf is too risky.
I prefer Google’s approach of adding AI to Chrome with Gemini having no agentic capabilities.
In its current form, Atlas only allows ChatGPT to control your browser if you select Agent mode while viewing a particular tab. But once that happens, the AI has full control over web navigation, mouse movements, and keyboard input. You’re also encouraged to save usernames and passwords as with any other browser, meaning you wouldn’t necessarily be asked to log in with your credentials.
It’s easy to forget that traditional web browsers like Chrome and Safari already have real attack vectors. We’ve seen an uptick of cookie hijacking attacks affecting high-profile YouTube channels recently, allowing attackers to access Google accounts without ever obtaining the victim’s password or two-factor authentication. An AI browser doesn’t compound this particular risk but it introduces another gaping security hole that the average user isn’t even aware of.
AI prompt injection: Are the concerns overblown?

I have to admit that Brave has a vested interest in seeing browsers like Comet and Atlas fail — after all, their competing browser has its own unique monetization scheme. But that doesn’t disqualify the merit of their research.
OpenAI says ChatGPT within Atlas cannot perform harmful actions and that it will “pause to ensure you’re watching it take actions on specific sensitive sites such as financial institutions.” But it’s nearly impossible to predict or work out the decision-making process of modern generative AI models. For proof of this fact, look no further than the time when Bing Chat’s GPT-4 model claimed to profess love for a journalist shortly after its release. Or when Google’s AI Overview told a user to use glue on their pizza. These examples may seem harmless, but then there are also cases where AI has allegedly encouraged users to act on their self-harm and suicidal thoughts.
All of this is to say that we don’t fully understand what makes modern generative AI models tick — if they can behave in socially unacceptable ways, there’s nothing really stopping them from hijacking your data with the right prompt.
OpenAI is fast at releasing new features and products, but how quick is too quick?
With Atlas’ release, it’s clear that OpenAI has taken Facebook’s infamous “move fast and break things” ideology to an extreme. This was already evident with the recent release of Sora 2, which brazenly allowed anyone to generate videos of famous and copyrighted figures. The company eventually clamped down on this practice but not before the platform became viral thanks to videos featuring characters as diverse as Pikachu and James Bond.
Without similar pressures acting on AI browsers, companies like OpenAI have no pressing incentive to backpedal on cutting-edge features. Yet, the potential for damage feels more tangible and real than lost revenue to copyright holders. The public’s perception of AI chatbots remains largely positive, though, which might be why Atlas and Comet haven’t faced the same kind of backlash that Big Tech companies have seen over similarly invasive features. Only last year, for example, Microsoft was forced to backpedal on the scope of Windows 11 Recall after security researchers pointed out how it could be potentially exploited to leak user data.
So given that the current AI environment feels like the Wild West, I’m swearing off agentic browsers for the foreseeable future. If you absolutely must use Atlas or Comet, I’d recommend exercising caution, using a password manager, and enabling two factor authentication on all of your accounts. I would also go as far as logging out of online accounts instead of ticking the “Stay logged in” box. But even then, you might just be better off sticking with Chrome or Firefox and letting an extension handle the odd AI-related task.
Thank you for being part of our community. Read our Comment Policy before posting.