Would you tell a stranger about your medical symptoms or legal problems? If not, you may be oversharing with your favorite chatbot without realizing it.
Here’s some friendly advice from your local privacy professional: be careful what you share.
This blog explains why chats with artificial intelligence large language models (“AI models”, such as ChatGPT, Claude, or Gemini) may not be private, outlines some legal and security implications of sharing sensitive data, and suggests what individuals and businesses should do instead.
1. Your Chats Are Not Necessarily “Private”
Many people assume their chats with AI models are private, but that depends on how you define “private”. If you have to ask whether a conversation is private, it probably is not.
- Opt Out of Training
The developers and deployers of these chatbots generally disclose in their privacy notices that they may use your conversations to train their AI models, which may also involve a human reviewing the conversations (e.g., for safety checks and other quality testing). For example, see the following language in each of Anthropic’s, OpenAI’s, and Google’s privacy notices as of November 28, 2025:
- Anthropic: “. . . we train our models using data from the following sources: . . . Data that our users or crowd workers provide, including Inputs and Outputs from our Services (unless users opt out).”
- OpenAI: “. . . we may use Content you provide us to improve our Services, for example to train the models that power ChatGPT.”
- Google: “Human reviewers (including trained reviewers from our service providers) review some of the data we collect for these purposes. Please don’t enter confidential information that you wouldn’t want a reviewer to see or Google to use to improve our services, including machine-learning technologies.”
The good news is that these AI models can be configured so that your conversations are not used for training. With many consumer-grade AI models, however, you must affirmatively opt out: if you do nothing, your conversations are used for training by default.
- Data Retention and Where Your Conversations Go
Even if you opt out of training, you may still not want to share sensitive information because you may not know what privacy and security controls protect your conversations.
For example, how long does the company retain your conversations? Do you have to delete them yourself? If so, consider deleting your conversations once you are done with them. You may wonder: what is the worst that could happen if the conversations are retained indefinitely? Imagine the company suffering a data breach in which your conversations are leaked to the general public.
Likewise, consider where your conversations reside and with whom they are shared. For example, if you check OpenAI’s subprocessor list, you will quickly see that OpenAI, depending on which service you are using, relies on Amazon, Microsoft, CoreWeave, Oracle, or Google for its cloud infrastructure. Reviewing these subprocessor lists is important for understanding where your data is going and why.
Note: A “processor” is a third-party service provider that processes data on behalf of another party; a “subprocessor” is a third party engaged by that processor to help carry out the processing.
In short, there are many factors to weigh before sharing sensitive information in these conversations.
2. Don’t Rely on These Chatbots for Legal Advice
Beyond privacy, you should think twice about sharing your sensitive legal information with an AI model. Instead, talk with your lawyer.
First and foremost, remember that chatbots cannot provide you legal advice, only legal information, and even that information may be incomplete, outdated, or inaccurate. If that “legal information” is wrong, who is accountable? Will the developer of the AI be responsible? What about the AI deployer? In most instances, both the AI developer and the deployer have excluded or limited their liability to you in their Terms of Use or Terms of Service. In most situations, therefore, you will bear the consequences yourself. At least with a lawyer, there is a human behind the advice who is accountable to you under the applicable rules of professional ethics.
Second, note that your conversations with an AI model are not covered by attorney-client privilege and could therefore be subpoenaed and used as evidence against you.
3. Tips for Business Owners
Lastly, for those of you who own businesses or lead organizations using AI tools, these risks can scale quickly. Here are three quick tips:
- Define your business use case: AI models are ideal for low-risk, high-volume tasks, not for sensitive or strategic content.
- Develop an AI-Use Policy and conduct an AI risk assessment before integrating AI models into your workflows. Please check out my previous blog, Why Your Business Needs an AI Program (November 5, 2025).
- Use Enterprise-Grade Solutions, not consumer-grade tools, to ensure appropriate privacy and security controls. This applies not just to AI models but also to other software and software-as-a-service offerings, including email hosting (for example, consumer addresses such as @gmail.com, @hotmail.com, and @yahoo.com). Business-grade alternatives will better keep your data private and secure.
At the end of the day, AI models are powerful tools, but your use of them is not automatically private and secure. Consider an analogy: if you wanted to have a sensitive conversation with someone in person (say, discussing an employee’s job performance with that employee), you would likely take a few measures to keep that conversation private and secure. You might lower your voice, close your office door, move to a private area with no other people around, and keep any resulting written notes in a locked desk. The same is true for electronic communications with AI models: you need to take steps to ensure your conversation is private and secure.
Davis, Burch & Abrams is a business law firm that helps companies develop practical, compliant AI, privacy, and cybersecurity programs tailored to evolving technology laws. If you have any questions about this article—or if your business needs guidance to stay current with AI, privacy and cybersecurity laws in the United States or Canada—please reach out to the author, Savvas Daginis, at [email protected].
This article is for informational purposes only and should not be seen as legal advice. You should consult with a lawyer before you rely on this information.