Resources
Protecting Client AI Use in Litigation
A Legal Analysis of Warner v. Gilbarco, Inc., --- F. Supp. 3d ---, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026)
A recent federal court decision offers new early guidance on how courts may treat client use of artificial intelligence in litigation — but it also raises important unanswered questions.
In Warner v. Gilbarco, Inc., former employee Sohyon Warner filed a pro se employment discrimination lawsuit against Gilbarco, Inc. (d/b/a Gilbarco Veeder-Root) and its parent, Vontier Corporation. During the latter stages of discovery, defendants learned that Warner had been using ChatGPT to assist with her litigation preparation. Defendants then sought access to all materials related to her AI use, arguing that entering information into an AI platform waived any applicable privilege or legal protections. The court disagreed.
What the Court Decided
The court rejected the argument that using AI tools automatically waives all legal protections. Instead, it treated AI as a tool similar to traditional legal research or drafting software. At the center of the decision was the work-product doctrine, which protects materials prepared “in anticipation of litigation.” Because Warner’s AI prompts, outputs, and related materials were prepared in anticipation of litigation, the court held they were protected from discovery under Federal Rule of Civil Procedure 26(b)(3)(A). The court further held that the AI materials were not relevant under Rule 26(b)(1) and, even if marginally relevant, were not proportional to the needs of the case.
Why AI Use Was Protected
The court identified several reasons why the plaintiff’s AI-related materials were protected:
- AI is a tool, not a person. The court emphasized that generative AI programs are tools, even if they have administrators behind the scenes. The key focus is on the user’s thought process, not the tool itself.
- The materials reflected internal analysis. Prompts, queries, and AI-generated responses revealed the plaintiff’s mental impressions and legal strategy, which is core work product.
- The work was done for litigation. The materials were created in anticipation of litigation, not in the ordinary course of business.
- There was no disclosure to an adversary. The information was not shared in a way that would allow an opposing party to access it.
The court also cautioned that accepting the defendants’ argument would undermine work-product protection in modern legal practice, where digital tools are routinely used.
An Important Caveat: The Law Is Still Developing
While the decision is notable, it is not the final word. This case comes from a federal magistrate judge and is not binding. Courts in other jurisdictions, including those in the Sixth Circuit, have not yet weighed in on many key issues.
Open questions include:
- Will other courts consistently treat AI inputs and outputs as protected work product?
- Will entering confidential information into AI platforms waive attorney-client privilege, which is distinct from work-product protection?
- Will different AI tools (especially those with varying data policies) be treated differently?
- Will sharing AI-generated materials with others, such as experts, affect protection?
- Will other courts adopt the “tool, not a person” framework?
For now, this decision is best viewed as an early data point and a useful decision to cite, not as a settled rule.
Practical Guidance for Client Use of AI in Litigation
In light of this decision and the evolving legal landscape, businesses and legal teams should take a thoughtful approach to AI use.
1. Use AI as Part of the Litigation Process
To qualify for work-product protection, client AI use should be tied to anticipated litigation, not routine business operations.
- Limit AI use to case-specific tasks, such as drafting, research, or document analysis
- Keep AI threads used for general business purposes separate from threads addressing legal issues
- Involve legal counsel in directing AI use
2. Document the Purpose of AI Use
Create a clear record showing that AI use related to the dispute is for litigation purposes.
- Reference AI use in engagement letters or litigation hold notices
- Maintain internal records of how AI tools are used and for what matters
- Ensure counsel provides direction or authorization
3. Route AI Use Through Counsel
Work-product protection is strongest when materials are created by or under the direction of legal counsel.
- Have attorneys supervise or conduct AI-related work
- Ensure employees or consultants using AI are acting at counsel’s direction
4. Avoid Risky Disclosures
Work-product protection can be lost if materials are shared improperly.
- Do not include direct AI-generated content in filings or communications with opposing counsel
- Limit access to AI-generated materials and label them appropriately
- Be cautious about uploading confidential information to public AI tools or tools that train on user inputs
- Consider platforms with stronger data privacy protections
5. Stay Current as the Law Evolves
AI-related legal standards are changing quickly.
- Monitor new court decisions and regulatory developments
- Review AI platform terms of service and data practices
- Consider formal policies governing AI use in litigation
The Bottom Line
Warner v. Gilbarco, Inc. provides early support for the idea that AI tools used in litigation are simply extensions of a user’s thought process, and that work-product protection can still apply. However, caution is essential. It is also unclear how much the fact that the plaintiff was representing herself influenced the decision.
The decision is limited in scope, and significant legal questions remain unresolved — especially regarding how AI platforms handle user data. Businesses and legal teams should use AI intentionally, under counsel’s direction, and with safeguards in place, such as using AI tools that do not learn from your inputs. When used thoughtfully, AI can be a powerful tool in litigation. The goal is to preserve the benefits while minimizing the vulnerabilities, and that requires deliberate, documented practice.
As courts continue to address the role of AI in litigation, our team can help you assess risk, implement practical safeguards, and ensure your use of AI tools aligns with current legal standards, especially as it relates to attorney-client privilege and work product protection. We are available to support policy development, litigation strategy, and privilege analysis. Please contact Steve Barham or your Chambliss attorney to learn more.

