
Use AI Chatbots Without Leaking Your Data

Practical privacy settings for ChatGPT, Gemini, Claude, and Copilot. Learn what to never share with AI and how to run models locally.

Be honest - you probably use ChatGPT, Gemini, or Copilot every day at this point. But here’s the thing nobody reads in the fine print: everything you type could be read by actual humans, used to train AI models, or leaked in a breach. Yep, by default, most AI chatbots happily collect your conversations. Let’s fix that.

The Problem

Here’s what happens when you chat with most AI tools: your conversations get stored on their servers and can be used to train future models. That means your words might literally end up shaping what the AI says to other people. Weird to think about, right?

And this isn’t just some hypothetical scary story. Samsung employees accidentally leaked confidential source code through ChatGPT back in 2023. That same year, OpenAI confirmed a bug that briefly exposed other users’ chat titles and some billing details. And more recently, a popular AI chat companion app got breached - over 300 million messages leaked, including deeply personal ones. Ouch.

Oh, and fun fact: human reviewers at these companies can and do read your conversations to improve response quality. A 2024 study found that 77% of employees were using AI tools at work, often pasting in proprietary company data without a second thought. Imagine your boss finding out about that one.

The good news? Every major AI chatbot lets you opt out. You just need to know where to look.

Lock Down ChatGPT

OpenAI stores your chats and uses them for training by default. Not great. But fixing it only takes a minute - dig into the privacy and data controls in your ChatGPT settings and look for these:

  • Model training toggle - find the option that controls whether your conversations are used to improve their models. Flip it off.
  • Temporary or ephemeral chats - ChatGPT has a mode where conversations aren’t stored long-term and aren’t used for training. Perfect for anything sensitive.
  • Memory - ChatGPT can remember stuff about you across conversations. Handy, sure, but also a bit creepy. Turn it off if you’d rather it didn’t keep notes on you.
  • Chat history - go through your old conversations once in a while and delete what you don’t need. The fewer chats sitting on their servers, the better you’ll sleep.

Heads up: even with training turned off, your chats may still hang around briefly for safety monitoring before they’re actually deleted.

Lock Down Google Gemini

Google keeps your Gemini conversations for up to 18 months by default (and you can set it even longer), and yes - human reviewers may read them. Your main weapon here is Google’s activity settings. Head to your Google account’s activity controls, find anything Gemini-related, and turn it off. While you’re there, delete any saved activity too.

Here’s the sneaky part: outside of Europe, this activity logging is typically turned on by default. And even when you disable it, Google may still peek at recent conversations briefly for safety purposes. That’s baked into their policy, and there’s not much you can do about it.

If you’re using Gemini through a paid Google Workspace account, you’re in better shape - your data generally gets stronger protections and isn’t used for training by default.

Lock Down Claude

Anthropic is a bit more privacy-friendly out of the box, but you still need to tweak things. Head to Claude’s settings, find the privacy section, and look for the toggle that controls whether your conversations help improve the model. Turn it off.

Here’s why it matters: with training enabled, your conversations may be kept for years - up to five, at the time of writing. With it off, retention drops to roughly 30 days. Check Claude’s current privacy policy for exact numbers, since they can change, but the gap is huge.

One cool thing - Claude also lets you disable location sharing. Some AI chatbots use your approximate location to personalize responses, which is something you probably didn’t even know was happening. If you spot a location or personalization setting in any AI tool, switch it off. More chatbots are likely to add this kind of thing over time, so keep your eyes open.

If you use Claude through the API or a paid plan with training opted out, your data generally isn’t used for model improvements.

Lock Down Microsoft Copilot

Copilot collects your conversations by default - no surprise there, it’s Microsoft. Dig into Copilot’s settings and look for privacy controls, specifically anything about model training on your text or voice data. Turn it off.

Don’t forget to check your Microsoft account’s privacy dashboard at account.microsoft.com/privacy too. You can find and delete your Copilot history there. By default, Microsoft can hang onto this stuff for many months.

Using Copilot through a Microsoft 365 business or enterprise account? Your data is generally handled under your organization’s policies and isn’t used for model training. That’s one time corporate IT actually works in your favor.

What to Never Type Into Any AI

Even with all the right settings, treat anything you type as if it could end up on a billboard. Never paste these into any AI chatbot:

  • Passwords or secret keys - not even to ask “is this password strong?” Just don’t.
  • Government IDs - social security numbers, passport numbers, ID cards
  • Medical information - diagnoses, prescriptions, health records
  • Financial data - bank accounts, credit card numbers, tax info
  • Proprietary code or business secrets - your employer’s confidential stuff. Getting fired over a chatbot prompt isn’t worth it.
  • Other people’s personal data - you simply don’t have the right to share it

Also watch out for permissions beyond text. Some AI chatbots ask for access to your location, contacts, or files. Just say no unless you have a really good reason. The less data an AI tool has about you, the less there is to leak.

Simple rule of thumb: if you wouldn’t post it on a public forum, don’t paste it into an AI chatbot. Breaches happen, and your conversations are sitting on someone else’s servers.
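If you paste text into chatbots a lot, a tiny pre-flight scrubber can catch the obvious stuff before it leaves your machine. Here’s a minimal Python sketch - the patterns are illustrative examples, nowhere near exhaustive, and no regex list replaces actual judgment:

```python
import re

# A few illustrative patterns for obvious secrets. Real data-loss
# prevention needs far more than a handful of regexes.
PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|pk|ghp)_[A-Za-z0-9]{16,}\b"),
}

def scrub(text: str) -> str:
    """Replace anything that looks like a secret with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact jane@example.com, SSN 123-45-6789, key sk_abcdefghijklmnop"
print(scrub(prompt))
```

Run your draft prompt through something like this before hitting send. It won’t catch everything - which is exactly why the rule of thumb above still applies.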

The Real Private Option: Local AI

If privacy is your top priority, here’s the ultimate move - run AI models right on your own computer. Nothing leaves your machine. No servers, no logging, no training. Once you download a model, you can literally unplug your internet cable and keep chatting.

Three solid options worth checking out:

  • Ollama - A command-line tool, lightweight and fast. If you’re comfortable with a terminal, this one’s for you. Supports tons of open-source models like Llama, Mistral, and Gemma.
  • LM Studio - The best graphical interface out there. Download and run models with a few clicks, and the chat interface feels a lot like ChatGPT. Works on Mac, Windows, and Linux.
  • GPT4All - The simplest option if you just want to get going. One-click install, works fully offline, and was designed with privacy in mind from day one.

All three are free. Honestly, the models won’t quite match the quality of cloud-based AI, but for everyday questions, writing help, and code assistance, they’re surprisingly good. And your data? It stays completely, 100% yours.

One catch though: you’ll need some muscle under the hood. Running AI locally is demanding. We’re talking a fairly modern machine with at least 16 GB of RAM (32 GB or more if you want to run larger, smarter models). The real bottleneck is your graphics card - a dedicated GPU with enough video memory makes a night-and-day difference in speed. Some newer laptops and chips come with a built-in NPU (Neural Processing Unit) that’s designed specifically for AI workloads, which helps a lot. But if your computer struggles to run a browser with ten tabs open, local AI probably isn’t going to have a good time. Start with smaller models, see what your machine can handle, and work your way up.
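Wondering whether a given model will fit on your machine? A rough rule of thumb: memory needed ≈ parameter count × bytes per parameter, plus some runtime overhead. Here’s a back-of-the-envelope Python sketch - the 20% overhead figure is a loose assumption, not a spec:

```python
def model_memory_gb(params_billions: float, bits_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM/VRAM to load a model: params * bytes each, plus ~20% overhead."""
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total * overhead / 1e9

# Typical local-AI scenarios. Most downloadable models are quantized,
# meaning weights are stored at 4 or 8 bits instead of the full 16.
for name, params, bits in [
    ("7B model, 4-bit",  7,  4),
    ("7B model, 8-bit",  7,  8),
    ("13B model, 4-bit", 13, 4),
    ("70B model, 4-bit", 70, 4),
]:
    print(f"{name}: ~{model_memory_gb(params, bits):.1f} GB")
```

The takeaway: a 4-bit 7B model fits comfortably in 8 GB, a 13B model wants 16 GB, and the big 70B models are out of reach without serious hardware. That’s why starting small and working up is the right move.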

Privacy Ranking

Not all AI chatbots treat your data the same way. Here’s how we’d rank them from best to worst, based on default behavior, opt-out options, and data retention:

  1. Local AI - the gold standard. Nothing leaves your machine. Zero trust required.
  2. Claude - shorter retention when training is off. Location toggle. Paid plans don’t train by default.
  3. ChatGPT - solid opt-out controls and temporary chat mode. But trains by default, and memory features collect extra data.
  4. Copilot - decent enterprise protections, but long default retention and deeply tied to your Microsoft account.
  5. Gemini - longest default retention. Human review even after you opt out. Activity logging on by default outside Europe.

This ranking is our personal opinion based on publicly available privacy policies at the time of writing. Your situation may be different depending on your region, account type, or how you use these tools. Companies update their policies all the time, so what’s true today might not be tomorrow. Do your own research before trusting any service with sensitive data.

Bottom Line

Every major AI chatbot collects your data by default - but every single one also lets you opt out. Take 5 minutes, go through your settings, and lock things down. For anything truly sensitive, keep it off the cloud entirely and run a local model instead.

And one last thing to remember: free cheese only comes in a mousetrap. When you’re using a free AI product, you’re paying with your data. Free tiers tend to have weaker privacy protections, longer data retention, and your conversations are way more likely to end up in a training dataset. Paid plans across all major providers generally offer better data policies. That doesn’t mean paying automatically makes you safe - but the free version is almost always the worst deal for your privacy.
