
Is ChatGPT safe? Risks, data safety, and chatbot privacy explained

ChatGPT is mostly safe, but you should still watch what you say around it.

Published on March 8, 2024

Calvin Wankhede / Android Authority

Whether it’s drafting an essay or doing research, you’ve probably used ChatGPT to make your life simpler. Indeed, the chatbot’s ability to take in large amounts of data, process it within seconds, and respond in natural language makes it extremely valuable. But does that convenience come at a cost, and can you trust ChatGPT to keep your secrets? It’s a question worth asking, since many of us drop our guard around chatbots and computers in general. So in this article, let’s ask and answer a simple question: is ChatGPT safe?

Is ChatGPT safe to use?

Yes, ChatGPT is safe in that it cannot cause any direct harm to you or your computer. Both web browsers and smartphone operating systems like iOS use a security technique known as sandboxing, which isolates apps and websites from the rest of the system. This means that ChatGPT cannot access the rest of your device. You don’t have to worry about your system getting hacked or infected with viruses while using the official ChatGPT app or website.

ChatGPT is safe, but your conversations aren't necessarily private.

Having said that, ChatGPT has some potential to be unsafe in other ways like privacy and confidentiality. We’ll discuss this further in the next section, but for now, just remember that your conversations with the chatbot aren’t exactly private even if they only appear when you log into your account.

The final safety aspect worth discussing has to do with ChatGPT’s existence as a whole. Several tech icons have criticized modern chatbots and their creators for recklessly innovating without considering the possible dangers of AI. Computers can now mimic human speech and creativity so well that it has become nearly impossible to tell the two apart. For example, AI image generators can already conjure up misleading images that have the potential to incite violence and political turmoil. Does this mean you shouldn’t use ChatGPT? Not necessarily, but it’s still an alarming glimpse at what the future might hold.

Who has access to your data when you use ChatGPT?

ChatGPT cannot work without an internet connection since all of the language processing is handled by powerful, state-of-the-art computers in data centers. This means that when you type in a prompt, it’s first sent to OpenAI’s servers. The company maintains a record of everyone’s conversations to further train and improve the chatbot. Put simply, ChatGPT records every conversation you have with it. Even after you delete a chat from the sidebar, a copy will continue to exist in a de-identified form.

ChatGPT saves your data to train future models like GPT-5, so don't spill any secrets.

When ChatGPT was first released to the general public, OpenAI said that the chatbot would only be available for free during the research preview phase. Several months have passed since then, but free access has continued with no sign of stopping. This may indicate that OpenAI gets some value out of free users interacting with the chatbot.

However, the silver lining is that OpenAI doesn’t intend to sell user data or chat histories to advertisers or third parties. Instead, it keeps all of the data on its own US-based servers for training future large language models like GPT-5. OpenAI then hires contractors to review some of these conversation records and provide feedback to help the chatbot improve. In theory, only OpenAI’s trusted researchers can see or access chat records. That said, ChatGPT did suffer from a bug once where users could see others’ chat histories.

How to safely use ChatGPT

Calvin Wankhede / Android Authority

Even though OpenAI claims that it stores user data on American soil, we cannot assume that its systems are impenetrable. We’ve seen high-profile companies suffer security breaches, regardless of their location or allegiances. So what can you do to use ChatGPT safely? We’ve put together a short list of tips:

  1. Don’t disclose any private information you don’t want the world to know about. This includes trade secrets, proprietary code belonging to the company you work for, credit card numbers, and addresses. Some companies, like Samsung, have banned their employees from using the chatbot for this reason.
  2. Use the official ChatGPT app from the App Store and Play Store and stay away from third-party ones. Alternatively, simply access the chatbot via a web browser.
  3. If you don’t want OpenAI to use your conversations for training purposes, you can opt out of data collection by turning off the toggle under Settings > Data controls > Improve the model for everyone.
  4. Create a strong password for your OpenAI account so that others cannot break into your account and see your ChatGPT chat history.
  5. Delete your chat history periodically. This way, even if someone manages to break into your account, they won’t be able to read any of your past chats.
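On the strong-password tip above: rather than inventing one yourself, you can let a password manager or a short script do it. As a minimal sketch, Python’s standard `secrets` module (designed for cryptographic randomness) can generate one; the 16-character length and character set here are just example choices, not a requirement of OpenAI:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```

Unlike the `random` module, `secrets` draws from the operating system’s secure randomness source, which is what you want for credentials.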

Assuming you follow these tips, you shouldn’t have to worry about using ChatGPT for help with everyday, mundane tasks. After all, the chatbot has the backing of large industry players like Microsoft, and its underlying language model also powers competing chatbots like Microsoft Copilot.
