A recent federal court decision is raising serious concerns across the legal industry, and it has implications for anyone using generative AI tools such as ChatGPT, Claude, or Gemini to discuss sensitive matters.
In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that a defendant’s conversations with Anthropic’s Claude, used to explore legal defense strategies, were not protected by attorney-client privilege.
The court’s reasoning was straightforward: AI is not a lawyer, and communications with it do not qualify as legal advice. Privilege applies only when advice is sought from a qualified legal professional, and that threshold was not met here.
The court also emphasized that Anthropic’s policies allow collection and review of user inputs and outputs, meaning the information was effectively shared with a third party.
“Communications lose their privileged character when disclosed to third parties.”
Key Takeaways
- The defendant used Claude after receiving a grand jury subpoena, without direction from counsel
- The chats included discussion of legal defense strategy
- The court found the communications were neither confidential nor privileged
- As a result, the chats were potentially admissible
Industry Impact
Law firms are already responding:
- Client advisories warning against sharing sensitive information with AI tools
- Updated engagement letters highlighting privilege risks
- Increased focus on secure and enterprise AI solutions
Legal Context
A separate case, Warner v. Gilbarco in Michigan, reached a different outcome, treating certain AI-assisted materials as protected work product. The law, however, remains unsettled and highly fact-specific.
What Comes Next
This decision signals a broader shift in how courts treat AI:
- Stricter internal AI usage policies
- Greater scrutiny of AI-generated content in litigation
- Rising demand for confidential, enterprise-grade AI systems
- Possible regulatory or legislative responses
“We are watching the boundaries of privilege being redrawn in real time.”
Why It Matters
This ruling fundamentally challenges the assumption that AI interactions are private. Information shared with AI tools may be treated as disclosed to a third party and therefore not protected in court. That creates real risk for individuals and organizations using AI for legal, strategic, or confidential matters. It also accelerates the need for clearer policies, safer tools, and a more disciplined approach to AI use across the board.
Sources & further reading:
- United States v. Heppner (S.D.N.Y.) https://www.courtlistener.com/
- Warner v. Gilbarco Inc. (Michigan state court) https://casetext.com/
- Anthropic Privacy Policy https://www.anthropic.com/legal/privacy
- Attorney Client Privilege overview https://www.law.cornell.edu/wex/attorney-client_privilege
- American Bar Association analysis https://www.americanbar.org/news/