Adam Bair

Claude for Lawyers: How a Working Lawyer Uses Claude Without Getting Sanctioned

Written by Adam Bair. Published 2026-05-02. AI and the Law

A working lawyer's desk with an open laptop, a stack of case-law printouts, a yellow legal pad, and a coffee cup in warm natural light.

The practical question is not whether Claude is good enough to use in legal work. It is whether a working lawyer can build a workflow around it that holds up to a sanctions order, a bar grievance, and a discovery request. The honest answer is yes, with conditions, and the conditions are not optional.

I am a Florida trial lawyer who has spent hundreds of hours building working AI workflows for legal research and writing. This article is for the practitioner audience: solo and small-firm lawyers who draft their own briefs, read their own cases, and carry their own bar card. It is not legal advice. It is a description of how the workflow actually runs and where it fails.

Educational only. Not legal advice. I am a Florida trial lawyer, licensed only in Florida. Reading this article does not create an attorney-client, fiduciary, or advisory relationship. AI tools change quickly. Bar opinions change less quickly but they change. Verification protocols that work today should be reviewed quarterly. If you are evaluating AI for use in your practice, the most useful starting points are ABA Formal Opinion 512, your state bar's most recent guidance, and the published sanction orders.

Why this question keeps coming up

The verification tax is the operational pain. A lot of lawyers tried general-purpose AI in 2023 or 2024, found that cite-checking the output ate the time savings, and quit. A different group has stayed on the sidelines because of the sanction orders that keep landing in the news cycle: Mata v. Avianca in 2023, Park v. Kim in 2024, and the steady drumbeat of cases tracked in Damien Charlotin's hallucination database, which sits north of fourteen hundred cases worldwide and grows by two or three a day.

Both groups are reading the situation correctly. General-purpose AI hallucinates. Stanford's RegLab studies put the hallucination rate on closed-form legal questions at sixty-nine to eighty-eight percent for general models, and even legal-specific tools like Lexis+ AI and Westlaw Precision come in at meaningful double-digit rates. The fear is not paranoid. The fear is empirically grounded.

The mistake is concluding that the answer is to stop using the tool. The answer is to use the tool inside a verification protocol that catches the hallucinations before they reach a filing.

What Claude actually does well in legal work

Claude is a writing companion. That is the load-bearing fact about the tool. It writes long-form prose with structure, it follows complex multi-step instructions, and it tolerates being told what voice to write in and what facts it is and is not allowed to use. For a working lawyer, those properties map onto a specific set of tasks.

First-draft long-form prose against a closed set of facts. Statement of Facts. Procedural history sections. Background sections in a memo. The lawyer feeds the tool the underlying record, the dictation, and the specific facts the section needs to cover, and the tool produces a draft in the requested voice. The voice can be tightened: many lawyers find that telling Claude to write in the manner of a specific appellate writer they admire produces better drafts than asking for a generic legal voice.

Argument refinement against a closed set of cases. The lawyer pulls the cases first, reads them, and uploads the relevant excerpts as the source material. Claude then helps with structure, transitions, and counterargument anticipation. It does not invent the cases. It works inside the corpus the lawyer has provided.

Deposition preparation. Long deposition transcripts can be uploaded and queried. The lawyer asks where a specific topic was covered, what the witness said about a specific event, or where the witness's testimony shifted from one section to another. The output points back to specific transcript pages, and the lawyer reads the actual transcript to confirm.

Brief restructuring and condensing. A draft that is two pages too long can be cut without losing substantive content. A draft that buries the lead can be reorganized. The tool does this faster than human editing because it can hold the whole document in context at once.

Counterargument generation. Asking the tool to brief the other side's strongest response to your argument, before you finalize, regularly catches gaps. The output is suggestive, not exhaustive, and it sometimes invents counterarguments the other side will not actually make. A lawyer reading the output critically can still get value from it.

What Claude actually does poorly in legal work

The same tool that drafts long-form prose well falls down hard on tasks that require it to know what cases exist.

Open-universe legal research. Asking Claude what the Eleventh Circuit has held about a specific evidentiary question, with no source material provided, is the workflow that produces fake citations. The tool does not have a reliable map of what cases exist. It produces case names that sound plausible because that is what generative models do. The output may be eighty percent right and twenty percent fabricated, and the lawyer cannot tell which is which without checking every cite. By the time the cites are checked, the time savings are gone.

Procedural questions where the answer depends on local court rules. General federal procedure questions are usually answered correctly. Specific local rules, judge-specific standing orders, and filing-format requirements are not consistently right. The tool sometimes confidently cites a rule number that does not exist or describes a procedure from a different jurisdiction.

Calculations that require precision. Date math, statute-of-limitations calculations, damages math. The tool can produce a draft of the calculation but the numbers are often wrong in ways that are subtle and easy to miss. Spreadsheet tools and a calculator are still the right answer.
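The point generalizes: deadline arithmetic belongs in deterministic code or a spreadsheet, not a language model. As an illustration only, here is a minimal Python sketch of calendar-day counting with a weekend roll-forward; the helper name is hypothetical, and it deliberately ignores court holidays and jurisdiction-specific counting rules, which the lawyer must layer on.

```python
from datetime import date, timedelta

def deadline(trigger: date, days: int) -> date:
    """Count calendar days from a trigger date, then roll forward past a weekend.

    Ignores court holidays and any rule that counts only business days;
    those vary by jurisdiction and must be added deliberately.
    """
    d = trigger + timedelta(days=days)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d
```

The value is not the arithmetic, which is trivial, but the determinism: the same inputs always produce the same date, which is exactly the property a generative model cannot promise.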

Anything where the lawyer cannot verify the output before relying on it. This is the meta-rule. If the lawyer is using the tool because they do not have time to verify, the lawyer is using the tool wrong.

The verification protocol

The structural answer to the hallucination problem is to feed the tool a closed set of authorities and instruct it to use only those authorities. The phrase that gets used in this space is “closed universe.” Other names exist. The mechanic is the same regardless of name.

Step one. The lawyer pulls the cases. Westlaw, Lexis, Fastcase, or whatever research tool the lawyer subscribes to. The lawyer reads the cases, identifies the relevant excerpts, and pulls the authoritative text directly from the research database. No AI involvement at this stage.

Step two. The lawyer assembles the corpus. The relevant excerpts get organized into a single source document. Format matters less than fidelity: whatever format the lawyer uses, the case-quoted text has to be byte-for-byte from the research database, with citation already in the lawyer's preferred format.

Step three. The lawyer instructs the tool. The corpus gets uploaded to Claude as project material. The instructions tell the tool what voice to write in, what structure to use, and a specific direction that the tool may not cite any case or authority that does not appear in the uploaded corpus. The instruction is not a suggestion; it is a rule that gets enforced by reading the output critically.

Step four. The tool drafts. The lawyer reviews the draft. Every citation in the output gets matched against the uploaded corpus. Anything that does not match the corpus gets cut, even if it sounds correct. The lawyer is the gate.

Step five. The lawyer cite-checks the corpus itself. The cases that came out of the research database are still subject to the lawyer's normal verification habits. Shepardize, KeyCite, or run the equivalent currency check. The corpus is the source of truth for the AI; the lawyer is the source of truth for the corpus.

That protocol takes longer to describe than to run. In practice it adds maybe twenty minutes to a substantial brief and removes the risk of a fabricated citation reaching the filing.
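Step four's citation match can be partially mechanized. The sketch below uses Python's standard library to flag any reporter-style citation in a draft that does not appear in the corpus list; the regex covers only a handful of common federal reporters and the helper name is illustrative. A script like this supplements the lawyer's read-through; it does not replace it, because it cannot catch a real cite attached to the wrong proposition.

```python
import re

# Illustrative pattern: volume + a few common federal reporters + page.
# Real Bluebook coverage would need a far broader pattern or a citation parser.
CITE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s?2d|\s?3d)?)"
    r"\s+\d{1,4}\b"
)

def flag_uncited(draft: str, corpus_cites: set[str]) -> set[str]:
    """Return every reporter citation in the draft that is absent from the corpus list."""
    found = {" ".join(m.split()) for m in CITE.findall(draft)}
    allowed = {" ".join(c.split()) for c in corpus_cites}
    return found - allowed
```

Anything the helper flags gets cut or traced back to the research database; anything it passes still gets read, because string matching proves presence, not accuracy.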

Confidentiality

ABA Formal Opinion 512 (July 2024) and Florida Bar Advisory Opinion 24-1 both address the confidentiality question for generative AI use in legal work. The headline rule is that lawyers retain their confidentiality obligations under Rule 1.6 regardless of whether an AI tool is involved, and that uploading client information to a third-party tool requires reasonable diligence about how that tool handles the information.

Practical effects:

Read the terms of service. Consumer-tier products usually retain inputs for training and have minimal commitments about deletion. Enterprise and API offerings from major providers usually offer zero-retention or business-tier protections, but the specific terms vary, and they change. The lawyer who is using AI in client matters needs to know what tier they are on and what that tier means for data handling.

Consider whether informed consent is required. Some bar opinions read the confidentiality rules to require informed client consent before uploading client information to a third-party AI tool, depending on the specifics. Other opinions read the rules less strictly. The conservative posture is to disclose AI use to clients and get consent, particularly for any matter where confidentiality is sensitive.

Watch for inadvertent privilege issues. Recent federal decisions have addressed whether client communications with AI tools are protected by attorney-client privilege; the answers have varied across rulings and the area is unsettled. The conservative posture is to assume that client-AI exchanges are not privileged unless and until a decision in the relevant jurisdiction holds otherwise. A separate companion article on this site addresses the privilege question in more detail.

Mind the metadata. Pasting a brief into Claude often pastes metadata as well, including comments, tracked changes, and document properties. Strip the document before uploading.
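The stripping can be verified mechanically before upload. A .docx file is a zip archive of XML parts: the body lives in word/document.xml, tracked changes appear there as w:ins and w:del elements, and comments live in word/comments.xml. The stdlib sketch below only detects these artifacts; it does not remove them, and the function name is hypothetical. Use Word's own Document Inspector (or equivalent) to do the actual stripping.

```python
import zipfile

def review_artifacts(docx) -> list:
    """Flag tracked changes and comments left inside a .docx before it is uploaded.

    Accepts a path or file-like object. Detection only; stripping should be
    done in the word processor's own inspector.
    """
    flags = []
    with zipfile.ZipFile(docx) as z:
        if "word/comments.xml" in z.namelist():
            flags.append("comments")
        body = z.read("word/document.xml").decode("utf-8", "ignore")
        if "<w:ins " in body or "<w:del " in body:
            flags.append("tracked changes")
    return flags
```

An empty list is a necessary condition for a clean upload, not a sufficient one; document properties in docProps/core.xml (author, company, revision history) deserve the same scrutiny.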

Brief drafting workflow

A practical brief-drafting workflow that holds up to the sanction-aware reader looks something like this.

The lawyer starts with the underlying record. Pleadings, deposition excerpts, key exhibits, the controlling cases. The corpus gets pulled directly from the research database and assembled into a project workspace.

The Statement of Facts comes first. Dictated narrative gets transcribed, refined into a cleaner draft, and then handed to Claude with the instruction to write a neutral but persuasive Statement of Facts in a specific voice, using only the facts in the uploaded record. The output is reviewed line by line; anything that drifts from the record gets corrected. Citations to the record (depositions, exhibits) are added by the lawyer.

The Argument section is built one issue at a time, in separate conversations. Each issue starts with the relevant cases uploaded as source material and the instruction to argue the issue using only those cases. The lawyer drafts the structure of the argument first; the tool fills in transitions, refines language, and helps with counterargument anticipation. Citations are checked against the uploaded corpus.

The Introduction comes last, after the rest of the brief is drafted. Claude can take the finished argument as input and produce a tight introduction that frames the case. The lawyer reads it critically; introductions are where the tool's tendency to overclaim is highest, and tight editing matters.

Voice work happens at the end. The em-dash habit and the few other tells of AI-generated prose can be removed by instructing the tool to produce a final pass that strips them. The lawyer's own read-through is the last gate.

Citation verification

Verification is mechanical. For every citation in a draft that came out of an AI tool, the lawyer does three things.

Confirm the case exists. Pull the citation in the research database. The case has to come up. If the database returns nothing, the citation is fabricated. Cut it.

Confirm the case stands for what the brief says it stands for. Read the relevant section of the opinion. The proposition cited has to match the proposition in the brief. AI tools sometimes attach a real case to a real-sounding proposition that the case does not actually support; the citation looks valid but the substance is wrong.

Confirm the case is good law. Shepardize, KeyCite, or run the equivalent currency check. A case that has been overruled is worse than a fabricated case because the lawyer who relies on it looks lazy rather than reckless.

The cite-check happens after the brief is otherwise finalized. Treating it as the last step concentrates the discipline; treating it as a step to do alongside drafting tends to produce shortcuts.

Deposition prep

Long deposition transcripts are one of the cleaner use cases for the tool because the source material is already a closed corpus. The transcript is uploaded; the lawyer asks specific questions about specific topics; the tool returns the relevant pages and quotes. The lawyer then reads the actual transcript pages.

Useful queries:

  • “Where did the witness discuss the meeting on March 15?”
  • “Where did the witness's account of the documents change between the first hour and the third hour?”
  • “What were the witness's specific words when describing the contract terms?”
  • “List every page where the witness used the phrase ‘I do not recall’ and the topic of the question.”

The output points to specific page-and-line citations. The lawyer reads the actual pages to confirm. The tool has saved the lawyer from manually scrolling through three hundred pages of transcript looking for the section.
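The phrase-location query in the list above is also the easiest one to spot-check deterministically. A minimal stdlib sketch, assuming the transcript has been split into a list of per-page text blocks (the function name and input shape are hypothetical, not a feature of any particular transcript format):

```python
def find_phrase(pages, phrase):
    """Return (page, line, text) for every transcript line containing the phrase.

    `pages` is a list of strings, one per transcript page; matching is
    case-insensitive and literal, so rephrasings will not be caught.
    """
    needle = phrase.lower()
    hits = []
    for p, page in enumerate(pages, start=1):
        for n, line in enumerate(page.splitlines(), start=1):
            if needle in line.lower():
                hits.append((p, n, line.strip()))
    return hits
```

A literal search like this catches only exact wording; the AI query is what finds the rephrased or paraphrased testimony, and the page-and-line hits from both routes converge on the same final step: reading the actual pages.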

What the bar opinions actually require

ABA Formal Opinion 512 sets a duty of “reasonable understanding of the capabilities and limitations” of any AI tool a lawyer uses. The phrase is doing a lot of work. It does not mean the lawyer needs to understand the architecture of large language models. It means the lawyer has to know that the tool can fabricate citations, has to know that the tool's confidentiality posture depends on the tier and terms, and has to use the tool in a way that accounts for both.

Florida Bar Advisory Opinion 24-1 organizes the duties into four pillars: confidentiality, oversight of nonlawyer assistance (the bar treats AI tools analogously), fees and costs, and lawyer advertising. The four-pillar framework is a clean checklist for a lawyer evaluating any new AI tool against the rules.

Other state bars have published opinions that map onto the same general framework with different emphases. The conservative move is to read the opinion in the lawyer's home state, the ABA opinion, and the opinion of any state where the lawyer is admitted. The framework is consistent across opinions even when the specific language differs.

When to invest in a real workflow

Most lawyers who try AI for legal work do it once or twice on a hard problem, get burned, and quit. The lawyers who stay with it tend to share three habits.

They invest a small amount of time up front in setting up a personal profile or project workspace that captures their voice, their preferred citation format, and their standing instructions. The setup pays back across every brief that follows.

They standardize the verification step. The cite-check protocol gets the same treatment as a calendar entry: it happens every time, in the same way, regardless of how rushed the brief is.

They keep a small library of prompts they reuse. Drafting a Statement of Facts in a specific voice, generating counterargument anticipation, condensing an argument by twenty percent. Reusable prompts compound across cases.

That set of habits is what a serious workflow looks like. Verification protocols matter. The setup pays for itself over the second and third brief, not the first. The lawyer who treats AI as a tool to be configured, not a chatbot to be queried, is the lawyer who gets the productivity gain without the sanction risk.

Frequently asked questions

Is Claude better than ChatGPT for legal writing?

For long-form legal prose specifically, most working lawyers who have tried both find Claude produces tighter first drafts with fewer of the obvious AI tells. ChatGPT has strengths in other areas including its canvas-style editing interface for collaborative document work. The honest answer is that the tools are close enough that the workflow matters more than the tool. A disciplined verification protocol with either tool beats casual use of the other.

Will using Claude on a brief get me sanctioned?

Using Claude on a brief will not get a lawyer sanctioned. Filing a brief that contains a fabricated citation will get a lawyer sanctioned. The relevant question is whether the lawyer's verification step catches the hallucinations before the brief is filed. The published sanction orders share a common feature: the lawyer relied on the tool's output without checking the citations.

Do I have to tell my client I am using AI?

Bar opinions on this question vary. The conservative posture is to disclose AI use to clients in the engagement letter or at the outset of representation, and to obtain informed consent before uploading client information to a third-party tool. The aggressive posture relies on the lawyer's general professional judgment about reasonable diligence. Reading the most recent opinion from your state bar is the right starting point.

What about confidentiality when I upload client documents?

Read the terms of service of the tier you are on. Consumer products usually retain inputs for training; enterprise and API offerings usually offer better protection. The duty of confidentiality under Rule 1.6 does not change because an AI tool is involved.

Can I use Claude for legal research?

Open-universe legal research is the workflow that produces fabricated citations. The reliable use of Claude in research is to feed it cases the lawyer has already pulled and verified, not to ask it what cases exist. For finding cases in the first place, traditional research databases are still the right tool.

How long does it take to set up a working AI workflow?

A first-pass workflow can be built in a few hours. A workflow that actually saves time across multiple briefs takes longer to refine because the lawyer learns by trying and adjusting. Most lawyers who stick with it report that the setup pays for itself by the second or third substantial brief.

Bottom line

Claude is a useful writing companion for a lawyer who builds the verification protocol around it. Without the protocol, the tool produces fabricated citations on a regular schedule. With the protocol, the tool produces drafts faster than the lawyer can produce them alone, and fabricated citations are caught before they reach the filing.

The tool is not the question. The workflow is the question.

Adam Bair is a Florida trial lawyer and AI educator. He writes about verification-first AI workflows for legal practice.