In a February ruling described as the first of its kind in the US, Judge Jed Rakoff found that Bradley Heppner's conversations with Anthropic's Claude about his legal exposure were protected by neither attorney-client privilege nor the work-product doctrine, because an AI is not a lawyer and public AI platforms have no confidentiality obligation.
More than a dozen major law firms have since issued client advisories.
A landmark US federal court ruling has prompted a wave of legal warnings across the country: if you use a publicly available AI chatbot to research or discuss your legal situation, those conversations may be seized, disclosed to opposing counsel, and used as evidence against you.
The case that set off the alarm is United States v. Heppner, in which Judge Jed S. Rakoff of the Southern District of New York ruled in February 2026 that a criminal defendant's private conversations with Anthropic's Claude AI were neither protected by attorney-client privilege nor covered by the work-product doctrine.
The ruling, delivered orally on 10 February and followed by a written opinion on 17 February, is described by legal observers as the first decision of its kind in the United States on the question of whether AI chatbot conversations carry legal protection.
The defendant, Bradley Heppner, was the former chairman of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent.
He was charged by federal prosecutors in November 2025 with securities and wire fraud, and pleaded not guilty. After receiving a grand jury subpoena and before engaging defence counsel formally, Heppner used Claude to analyse his legal exposure, outline potential defence strategies, and develop legal arguments, acting on his own initiative rather than under direction from his attorneys.
When the FBI searched his home, it seized approximately 31 documents memorialising these AI conversations. The government sought their production; Heppner argued they were privileged.
Rakoff rejected that argument on three grounds. First, attorney-client privilege protects communications between a client and an attorney. Claude is not an attorney, has no law licence, owes no duty of loyalty, and cannot form a privileged relationship.
As Rakoff put it from the bench, Heppner had “disclosed it to a third party, in effect, AI, which had no obligation of confidentiality.”
Second, there was no reasonable expectation of confidentiality: the judge examined Anthropic's terms of service and privacy policy, which explicitly permit data collection, use of inputs and outputs to train the model, and disclosure to third parties including governmental regulatory authorities.
By clicking accept, Heppner had consented to a disclosure framework that is incompatible with privilege. Third, work-product protection did not apply because Heppner was not acting at the direction of his lawyers when he queried Claude, and the documents did not reflect his attorneys' strategy at the time of creation.
On the same day as Rakoff's ruling, a federal magistrate judge in Michigan reached what at first glance appears to be the opposite conclusion.
In Warner v. Gilbarco, Inc., Magistrate Judge Anthony Patti held that a pro se plaintiff's ChatGPT conversations about her employment discrimination case were protected as work product, reasoning that AI tools are “tools, not persons” and that work-product waiver requires disclosure to an adversary, not merely to a software platform.
A third case, Morgan v. V2X (D. Colo., March 2026), reached a similar conclusion for another pro se litigant. Legal analysts note that these cases are factually distinguishable from Heppner: the plaintiffs in Warner and Morgan were self-represented and governed by a civil procedure rule expressly protective of work product, while Heppner was a represented criminal defendant who acted without attorney guidance.
The courts themselves acknowledged they were not laying down broad rules for all scenarios.
The practical impact has been immediate. Reuters reported that more than a dozen major US law firms have issued client advisories warning against using public AI platforms for anything touching legal matters.
New York firm Sher Tremonte has gone further, adding contractual language to client engagement agreements stating that sharing a lawyer's advice or communications with a chatbot could waive attorney-client privilege.
The consensus guidance from firms including Orrick, Crowell & Moring, and Fisher Phillips is consistent: treat public AI platforms as an inherently non-confidential environment; assume anything typed could be disclosed; use only private, closed AI deployments whose terms of service do not permit training on inputs or disclosure to third parties; and always obtain explicit attorney direction before using any AI system in connection with legal matters.