A study by Stanford researchers, "User Privacy and Large Language Models: An Analysis of Frontier Developers' Privacy Policies," analysed the privacy disclosures of Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI and found that their policies on chat data "vary widely and are often difficult for users to understand."
- Use of chat data: The study reports that several companies allow user conversations to be used to "improve services or train models," though this depends on product settings and account type; enterprise and API products often exclude training by default. (Source: Stanford Institute for Human-Centered Artificial Intelligence.)
- Transparency issue: Researchers found that privacy disclosures are "fragmented across multiple documents and interfaces," making it difficult for users to determine how their data is used.
- Data retention: The paper highlights inconsistent retention practices and notes that clear retention limits are not always disclosed across providers.
Why this matters:
The findings raise three key considerations:
- Privacy risk: Users may unknowingly share sensitive health, personal, or professional information.
- Transparency gap: Complex, fragmented policies make informed consent difficult for everyday users.
- Governance need: The researchers argue that clearer disclosures and stronger privacy protections are needed as AI becomes widely integrated into daily work.
Source: Stanford Institute for Human-Centered Artificial Intelligence (HAI), "User Privacy and Large Language Models: An Analysis of Frontier Developers' Privacy Policies."