Here's why I ditched ChatGPT and moved to local AI

I was one of the first people to jump on the ChatGPT bandwagon. The convenience of having an all-knowing research assistant available at the tap of a button has its appeal, and for a long time, I didn’t care much about the ramifications of using AI. Fast forward to today, and it’s a whole different world. There’s no getting around the fact that you are feeding an immense amount of deeply personal information, from journal entries to sensitive work emails, into a black box owned by trillion-dollar corporations. What could go wrong?
There’s no going back from AI at this point, but there are plenty of use cases where it genuinely works as a productivity multiplier. And that’s why I’ve been going down the rabbit hole of researching local AI. If you’re not familiar with the concept, it’s actually fairly simple. It is entirely possible to run a large language model, the brains behind a tool like ChatGPT, right on your computer or even a phone. Of course, it won’t be as capable or all-knowing as ChatGPT, but depending on your use case, it might still be effective enough. Better still, no data leaves your device, and there’s no monthly subscription fee to consider. But if you’re concerned that pulling this off requires an engineering degree, think again. In 2025, running a local LLM is shockingly easy, with tools like LM Studio and Ollama making it as simple as installing an app. After spending the last few months running my own local AI, I can safely say I’m never going back to being purely cloud-dependent. Here’s why.
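To give a flavour of just how little plumbing is involved, here’s a minimal sketch of querying a locally running model from Python via Ollama’s built-in HTTP API. It assumes you’ve installed Ollama and already pulled a model; the llama3 tag and the prompt are just examples:

```python
# Minimal sketch: query a local model through Ollama's HTTP API.
# Assumes the Ollama app is running and a model has been pulled already.
# Requires the requests package (pip install requests).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # example tag; use whatever model you've pulled
        "prompt": "Explain what a large language model is in one paragraph.",
        "stream": False,  # return one complete reply instead of a token stream
    },
)
print(resp.json()["response"])
```

Everything in that exchange happens on localhost; unplug the network cable and it still works.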
Privacy

We’ve all pasted something into ChatGPT that we probably shouldn’t have. Perhaps it was a code snippet at work that you were trying to make sense of. Or perhaps a copy of a contract, maybe some embargoed information, or even just a really personal journal entry that you don’t feel comfortable exposing to our corporate overlords. Every time you hit send on a cloud-based AI, that data gets processed on an entirely opaque server, and depending on the provider’s policies, it may well be retained or used to train future models.
Here’s the deal. Alongside my journalistic endeavors, I run a business where I’m regularly exposed to NDA-protected information. Beyond the obvious privacy risk, it would be illegal for me to share this information with a public AI tool. However, running a local LLM flips the script entirely. I’ve tried many tools, but these days, I’m testing out AnythingLLM. It’s a fantastically simple desktop tool that lets you chat with your documents entirely on your own computer. This lets me feed it tax statements, invoices, bank statements, and even NDA-protected documents, and ask it to summarise expenses or flag clauses that I should keep an eye on. Because the entire LLM is running on my computer, I know this data isn’t being beamed to offshore servers for processing, and I get fast, AI-driven analysis of data that strictly isn’t meant to be shared. Effectively, it’s like running the enterprise version of ChatGPT or Copilot, but on your own computer with little to no cost attached.
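AnythingLLM wraps all of this in a GUI with proper document indexing, but the underlying idea is simple enough to sketch by hand. Here’s a rough, purely illustrative example of document Q&A against a local model using Ollama’s chat API; the file name and question are hypothetical stand-ins:

```python
# Rough sketch of local document Q&A: stuff the file's text into the prompt
# of a locally hosted model. (AnythingLLM does this far more cleverly, with
# embeddings and retrieval, but the privacy property is the same.)
from pathlib import Path

import requests

document = Path("invoice.txt").read_text(encoding="utf-8")  # hypothetical file

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            {"role": "system", "content": "Answer using only the supplied document."},
            {"role": "user", "content": f"Document:\n{document}\n\nWhat do these expenses total?"},
        ],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])  # the document never left this machine
```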
Grammar checking

I write for a living, and as my managing editors would attest, a grammar check is a necessary evil of the job. But tools like Grammarly are, in effect, keyloggers: to get anything done, they have to read and process everything you write. That might be fine for a quick email, but I’m not comfortable using them when working with privileged information. Moreover, to get proper utility out of these tools, you need to sign up for a subscription. For me, the cons definitely outweigh the pros.
So, my solution has been to use one of the many offline LLMs as a high-powered spellchecker. For example, you can install LM Studio, load a model like Llama 3 or Mistral, and give it a detailed system prompt telling it to fix grammar and spelling without changing the tone or phrasing. That’s all it takes. I can paste entire articles into the local LLM and get instant feedback and corrections on my writing. You can even prime it with specific style rules, like, say, a preference for or against Oxford commas, and the local LLM will follow them; this is prompting rather than retraining, so it takes seconds to set up. Effectively, you can trust a local LLM to work as a second set of eyes without any fear of your words leaking out to the broader web.
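If you’d rather script that workflow than paste into a chat window, the same idea is a few lines against a local server. This sketch uses Ollama’s chat API; the system prompt and model tag are just examples, and LM Studio exposes a similar local server if that’s your tool of choice:

```python
# A scripted local proofreader: a strict system prompt plus a local model.
import requests

SYSTEM_PROMPT = (
    "You are a copy editor. Fix grammar, spelling, and punctuation only. "
    "Do not change the tone, phrasing, or word choice. No Oxford commas."
)

def proofread(text: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
            ],
            "stream": False,
        },
    )
    return resp.json()["message"]["content"]

print(proofread("Their going to the the store tomorrow, irregardless of the weather."))
```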
Coding

Much has been said about the spaghetti code generated by LLMs. However, it’s hard to deny that they can be useful for generating boilerplate code to give you a head start. Look, I’m a fairly rusty engineer fifteen years out of university who still likes to dabble in code over the weekend. For someone like me, AI has been incredible at making sense of code snippets from open-source apps. Turns out, I don’t really need Copilot or Claude for this. For basic stuff, I can quite literally dump a code fragment into Ollama and ask the model to explain the logic to me. When it comes to writing code, I’ve dabbled with VS Code extensions that plug Ollama right into the editor.
Just download a coding-specific model, like, say, DeepSeek Coder. The LLM offers autocomplete suggestions, refactoring help, and explanations for bugs when things inevitably fail to compile. And it does all of this within the IDE itself, without phoning home. It’s the GitHub Copilot experience, but running entirely on my computer. For developers, this means they can work on private projects or client code with absolute certainty that they aren’t leaking intellectual property to a public-facing LLM. Plus, you might just notice better results when working with a developer-focused LLM versus a general-purpose model.
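Outside the IDE, the explain-this-snippet workflow is just as easy to wire up yourself. Here’s an illustrative sketch using the same local Ollama API, pointed at a coding-focused model (deepseek-coder is one such model in Ollama’s library); the snippet being explained is just a placeholder:

```python
# Ask a local coding model to explain a snippet; nothing leaves the machine.
import requests

snippet = "def dedupe(xs): return list(dict.fromkeys(xs))"

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-coder",  # coding-focused model from Ollama's library
        "messages": [
            {"role": "user", "content": f"Explain what this Python function does:\n{snippet}"},
        ],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```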
Uncensored creativity

Here’s where things get interesting. Public-facing AI models are built with guardrails in mind. Anything deemed too extreme or potentially dangerous is effectively blocked off from access. This can be problematic if you’re a naturally curious person, or if you want to do some research that goes a step beyond what ChatGPT can offer. In my case, one of my weekend hobbies is to write gritty thriller short stories, and when I want to do some research for a thriller or horror arc, ChatGPT just doesn’t go all that far.
However, if you’re running a local LLM, you can download an uncensored model that’s designed to follow instructions without moralizing. That unlocks real creative freedom, letting you develop characters with whatever kind of personality you’re aiming for. The same freedom also comes in handy if you dabble in tabletop games and want to generate a gruesome new D&D campaign.
It works when ChatGPT won’t

I travel a lot and prefer to have most of my commonly used services available offline. Predictably, reliance on cloud AI becomes a major pain point the moment you step on a plane or visit a remote location with spotty internet. Even if you have Wi-Fi on a plane, the latency can often lead to connection dropouts mid-query. It’s not a great user experience.
My local AI setup, on the other hand, travels with me. I’ve got Ollama set up on my laptop, and I can spin it up whenever I need it. As geeky as that might sound, it’s been a revelation for productivity, as I can get pretty much all the things I use ChatGPT for running on my computer at 36,000 feet in the air. If you want to take it a step further, you could even use an app like SmolChat to run a smaller language model on your Android phone and use it for simpler tasks. There’s a lot of flexibility here.
Subscription fatigue

In an era where everything from media consumption to reading the news is becoming a subscription, it should come as no surprise that a decent AI experience is also, you guessed it, locked behind one. I understand that building and running an AI model is extremely expensive. However, no single model is good at every task, and if I started running the numbers, a $20-a-month fee for every LLM I sign up for starts looking very spendy very quickly.
Local AI is the buy-it-for-life alternative. The only real cost is the hardware you run it on. If you’re on a PC, you’ll want a beefy graphics card. On the Apple side of things, even a MacBook Air can run reasonably sized models at a decent clip, thanks to Apple Silicon’s unified memory. The open-source models themselves are free. In fact, if you have a recent computer, you likely already have all the hardware you need to get your own local AI instance up and running. Moreover, while you wait for OpenAI to ship its next big ChatGPT upgrade, you can test out new models and variants practically every week via Hugging Face.
Speed and latency

While your computer is obviously not going to compete with the massively overpowered data centers running ChatGPT, depending on how you use these LLMs, you might find that the gap is smaller than you’d imagine. Running locally on reasonably powered hardware, things like text generation and summarisation can often happen just as quickly as with a commercial tool, and sometimes even quicker. You also don’t have to deal with latency or server load. If you’ve used ChatGPT with any frequency, you’ve inevitably come across a spinning processing wheel, or sometimes even the dreaded connection timeout. None of that applies when the AI model runs on your own computer.
Is running a local AI model for everyone?
Look, I’m not gonna sugarcoat it. Local AI isn’t perfect, nor is it for everyone. If you’re using Claude to solve complex mathematical problems, or you want the absolute state-of-the-art reasoning capabilities available, a massive cloud model like GPT-5.1 or Claude is going to be your best bet. Those models simply have far more parameters than anything that can fit on your computer, and your hardware wouldn’t be powerful enough to run them anyway. But you don’t necessarily need that power on a day-to-day basis. If my experience is anything to go by, 90% of my daily AI tasks, like summarisation, editing, code explanation, and brainstorming, work perfectly fine with local models.
Is it a nerdy activity? Yes. Is it fun to dabble in? Also yes. It’s a bit like the early wild, wild west days of the internet, except entirely in your control.

