Artificial Intelligence and Privilege: What Clients Need to Know
A Legal Analysis of United States v. Heppner, — F. Supp. 3d —, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026)
A recent federal court decision is a reminder that using AI tools in connection with legal issues can pose real risks. On February 17, 2026, Judge Jed S. Rakoff of the United States District Court for the Southern District of New York issued a written opinion in United States v. Heppner, addressing whether communications a criminal defendant made using a publicly available generative AI platform, Anthropic’s Claude, were protected by attorney-client privilege or the work product doctrine. The court held that they were not and granted the government’s motion to compel disclosure of those materials.
What Happened
On October 28, 2025, a federal grand jury indicted Bradley Heppner, a former executive of GWG Holdings, Inc., on charges including securities fraud, wire fraud, and related offenses tied to an alleged $150 million scheme. He was arrested on November 4, 2025, pleaded not guilty, and was released on bond. As part of the investigation, federal agents executed a search warrant at Heppner’s residence and seized documents and electronic devices. Among those materials were approximately 31 documents reflecting Heppner’s interactions with Anthropic’s Claude AI platform.
Those exchanges took place after Heppner had received a grand jury subpoena and knew he was a target of the investigation. He created the materials on his own and later shared them with his attorney, explaining that he used the AI tool to analyze information and prepare for a potential indictment. When the government sought access to those documents, Heppner argued they should be protected by attorney-client privilege and the work product doctrine. The court disagreed and ordered them to be produced.
Why the Court Rejected Privilege
The court found that the documents did not meet the requirements for attorney-client privilege or the work product doctrine. For attorney-client privilege to apply, there must be a confidential communication between a client and an attorney made for the purpose of obtaining legal advice. The court pointed to three key reasons the privilege did not apply:
- No attorney-client relationship: The AI platform was not a lawyer, and communications with a nonattorney are not protected.
- No expectation of confidentiality: The defendant shared information with a third-party platform whose terms allow for the collection and potential disclosure of user inputs and outputs. As a result, the court found that any privilege was waived.
- No legal advice purpose: The defendant used the AI tool on his own initiative, not at the direction of counsel, and the platform itself does not provide legal advice.
The court also rejected work product protection. Although the documents were created after the defendant anticipated legal action, they were prepared on his own initiative rather than by or at the direction of counsel, so the doctrine did not apply.
Why This Matters for Businesses
Although this case arose in a criminal context, courts apply the same attorney-client privilege and work product rules in civil litigation, internal investigations, and regulatory matters, where the exposure for businesses is often even broader. As a result, businesses face similar risks when employees use AI tools in connection with sensitive issues, especially before legal counsel is involved.
In practice, this means:
- Information entered into learning AI platforms may not remain confidential
- Sharing privileged information with an AI platform can waive the privilege, even if that information originally came from counsel
- AI-generated materials may be subject to discovery
- Work product protection may be limited when materials are created without direction from counsel
This risk is especially important early in a dispute, when employees may use AI tools to gather facts, draft summaries, or assess potential exposure before legal counsel is involved. Courts have also suggested there may be a narrow path to protection if an attorney directs the use of AI as part of a legal engagement. However, that issue remains unsettled and should be approached cautiously.
Eight Practical Warnings for Business Clients
- Treat Every Learning AI Platform Input as Potentially Discoverable
Any information entered into a publicly available AI tool, including ChatGPT, Claude, Gemini, Copilot, or similar platforms, should be treated as a communication to a third party. Do not assume those inputs are confidential. If you would not want opposing counsel, a regulator, or a jury to read it, do not include it in an AI prompt.
- Contact Counsel Before Using AI in Connection with Any Legal Matter
If your company is facing a potential claim, regulatory inquiry, government investigation, employment dispute, or any situation where litigation is reasonably anticipated, contact counsel before using learning AI to research, analyze, or document anything related to that matter. Privilege is far more likely to apply when counsel directs the use of an AI tool, and even then, the protection is not guaranteed.
- Read and Understand the Privacy Policies of AI Tools You Use
Anthropic’s Claude privacy policy, which was at issue in United States v. Heppner, allows the platform to use inputs to train its models and disclose data to third parties, including government authorities. Many other AI platforms have similar policies. Enterprise subscriptions may offer stronger protections, but those terms must be reviewed carefully and do not automatically establish privilege.
- Implement a Written AI Policy for Sensitive Matters
Companies should adopt clear policies governing employee use of AI in connection with litigation, regulatory matters, internal investigations, and compliance work. These policies should require employees to notify legal counsel before using AI in these situations and should prohibit entering confidential or sensitive information into public AI tools without approval.
- Preserve Existing AI-Generated Documents Related to Litigation — Do Not Delete Them
If employees have already used AI in connection with a legal matter, those materials must be preserved. Deleting them after litigation is anticipated or after a subpoena or litigation hold is issued can create serious risk, including court sanctions. Preservation obligations apply regardless of whether the materials are ultimately protected.
- Do Not Rely on Privilege Labels as a Safety Net
Labeling a document as “privileged” does not make it so. In United States v. Heppner, documents were logged as privileged but still had to be produced. A privilege log protects only documents that are actually privileged.
- Update Litigation Hold Protocols to Include AI Content
Litigation hold notices and preservation checklists should explicitly include AI-generated content, including prompts, outputs, and conversation histories on personal or company devices. This material is discoverable and should be treated the same as emails and text messages.
- Do Not Assume Enterprise AI Tools Resolve Privilege Issues
Using a corporate or enterprise AI platform does not eliminate privilege concerns. While these agreements may provide better data protections, they do not replace the attorney-client relationship or satisfy the requirements for privilege or work product protection.
How We Can Help
We are available to assist you with reviewing and updating your AI usage policies, conducting litigation-hold training, advising on privilege issues that may arise in the context of a regulatory inquiry or civil investigation, and evaluating existing AI-generated materials for potential exposure. Please do not hesitate to contact Steve Barham or your Chambliss attorney with questions.

