Stanford Study Finds Wide Gaps in Privacy Policies for Large Language Models
A new study by Stanford researchers analyzed the privacy disclosures of Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, finding that policies on how chat data is used vary widely and are often difficult for users to interpret.
The research highlights inconsistent practices around data usage, retention, and transparency: many consumer-facing platforms use chat data to improve services or train models depending on account type, while enterprise and API products often exclude that data from training by default. Fragmented disclosures and unclear retention limits create privacy risks and impede informed consent. The study underscores the urgent need for clearer policies and stronger governance as AI tools become increasingly integrated into daily life.
