Lawyers in the United States are warning clients not to treat AI chatbots like trusted confidants, as the prospect of chatbot conversations surfacing in court becomes a growing concern. As more people turn to tools such as ChatGPT and Claude for advice, legal experts say those conversations may not carry the same protections as communications with a licensed attorney.
The warnings grew more urgent after a federal judge in New York ruled that AI-generated documents tied to a criminal case could not be shielded from prosecutors. That decision has pushed law firms to issue new guidance on the legal risks of using public AI tools for sensitive matters.
Lawyers say the core problem is simple: AI chatbots are not lawyers. Because of that, conversations with them generally do not qualify for attorney-client privilege under U.S. law. Several major law firms have advised clients to be careful about what they share with public AI platforms. Some have also warned that entering legal advice from an attorney into a chatbot could weaken or waive legal protections that would otherwise apply.
The case attracting attention involves Bradley Heppner, former chair of GWG Holdings and founder of Beneficient. Prosecutors charged him with securities fraud and wire fraud, and he pleaded not guilty. He used Anthropic’s Claude to prepare reports about his case, including material related to his legal defence.
In February 2026, U.S. District Judge Jed Rakoff ruled that Heppner had to hand over 31 AI-generated documents. The court found that no attorney-client relationship could exist between a user and a chatbot platform such as Claude. The ruling also stressed that sharing material with a lawyer later does not automatically make it privileged. That made the case an important early test of how courts may treat AI-assisted legal work.
Law firms are now racing to define clearer boundaries for clients who use AI. They advise clients to choose more secure systems and to word prompts carefully when working with AI under a lawyer’s direction. Some firms argue that “closed” or enterprise AI tools may offer stronger privacy protections than public consumer chatbots, although courts have not yet fully tested those protections. Others recommend stating explicitly in the prompt when counsel has directed the use of AI for legal research.
The legal picture also remains unsettled. On the same day as Judge Rakoff’s ruling, a magistrate judge in Michigan ruled that a self-represented woman did not have to produce her ChatGPT chats in an employment lawsuit.
That judge treated the chatbot as a tool, not a person, which suggests outcomes may vary with the facts and the legal context. Even so, lawyers continue to send the same broad warning: no one should assume that public AI platforms provide courtroom-safe privacy.