Tutorial

How to Use AI for Legal Research Without Getting Sanctioned: A Practical Guide


AI-assisted legal research is powerful. It can compress hours of case law review into minutes, surface relevant authorities you might have missed, and generate research memos that provide a strong starting point for legal analysis. It can also fabricate case citations with such confidence that experienced attorneys have submitted them to courts, resulting in sanctions, fines, and professional humiliation.

The difference between these outcomes isn’t luck — it’s workflow. This guide walks through a five-step process for using AI in legal research that maximises the productivity benefits while protecting you from the ethical and professional risks.

What You’ll Need

Before starting AI-assisted research on any client matter, ensure you have:

  • A legal-specific AI tool — not a general-purpose chatbot (more on why in Step 1)
  • Access to a citation verification service — Shepard’s (LexisNexis), KeyCite (Westlaw), or equivalent
  • A verification checklist (provided below) that you apply to every AI-generated output before it enters a filing or client deliverable
  • Familiarity with your jurisdiction’s AI disclosure requirements — check the standing orders of every court where you file

Estimated time to set up this workflow: 30 minutes. Estimated time saved per research task: 1–4 hours.

Step 1: Choose a Legal-Specific AI Tool

The single most important decision in AI-assisted legal research is the tool you use. General-purpose AI chatbots — ChatGPT, Claude, Gemini — are useful for brainstorming, summarising concepts, and exploring legal theories. They are dangerous for citation-dependent research because they generate plausible-sounding case citations that may not correspond to real cases.

This isn’t a theoretical risk. The fabricated citations in the Mata v. Avianca case came from ChatGPT. Subsequent incidents have reinforced the pattern: general-purpose AI can invent case names, docket numbers, and even judicial opinions that sound entirely legitimate but don’t exist.

Use legal-specific AI tools for research. These platforms are designed to ground their outputs in verified legal databases, dramatically reducing (though not eliminating) the hallucination risk:

CoCounsel (Thomson Reuters) draws from the Westlaw database — every citation links back to a verified primary or secondary source. Starting at approximately $225/user/month, it’s the most authoritative option for litigation-focused research.

Lexis+ AI (LexisNexis) grounds its outputs in the LexisNexis content library with real-time Shepard’s citation validation. If you’re already a LexisNexis subscriber, this is the natural upgrade path.

Clio Manage AI ($89–199/user/month) includes Vincent AI for research within the broader practice management platform — more accessible for small firms that can’t justify a standalone research AI subscription.

For a complete comparison, see our guide: Best AI Tools for Lawyers in 2026.

When general-purpose AI is acceptable: Brainstorming legal theories, generating initial outlines, understanding concepts in plain language, drafting non-filing client communications, and internal ideation — tasks where the output will be substantially reworked and no citations will be relied upon without independent verification.

Step 2: Frame Your Research Query Properly

The quality of AI research output depends heavily on how you frame the question. Vague prompts produce vague (and often unreliable) results. Specific, well-structured prompts produce useful, verifiable output.

Ineffective prompt: “What’s the law on non-compete agreements?”

Effective prompt: “Under New York law, what are the current standards for enforcing a non-compete agreement against a former employee in the financial services industry? Focus on cases from the past five years in New York state courts and the Second Circuit. Identify the key factors courts consider when determining reasonableness.”

The effective prompt works better because it specifies the jurisdiction, narrows the time frame, identifies the relevant courts, and asks for specific analytical factors rather than a general overview.

Best practices for legal research prompts:

Specify jurisdiction. Always state the jurisdiction explicitly. AI tools will draw from whatever sources they consider relevant if you don’t constrain them, which may include case law from irrelevant jurisdictions.

Define the time frame. Legal standards evolve. If you need current law, say “cases from 2020–2026” or “current standards as of 2026.” Without a time constraint, the AI may surface outdated authority.

Identify the specific legal question. “What are the elements of a breach of fiduciary duty claim under Delaware law?” is infinitely more useful than “Tell me about fiduciary duty.”

Request supporting authority. Explicitly ask the AI to cite specific cases, statutes, or secondary sources supporting each proposition. This gives you concrete starting points for the verification step.

State what you don’t need. If you’re looking for case law and don’t need statutory analysis, say so. Narrowing the scope improves both relevance and accuracy.
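Firms that want consistent prompt quality across a team can capture the framing rules above in a reusable template. The sketch below is purely illustrative — the function name and fields are hypothetical, not part of any research tool's API — but it shows how jurisdiction, time frame, courts, analytical focus, and exclusions combine into one structured query:

```python
def build_research_prompt(jurisdiction, question, time_frame, courts,
                          focus, exclude=None):
    """Compose a structured legal research prompt from the Step 2 elements."""
    parts = [
        f"Under {jurisdiction} law, {question}",
        f"Focus on {time_frame} in {courts}.",
        f"Identify {focus}.",
    ]
    if exclude:
        # Narrowing the scope improves both relevance and accuracy.
        parts.append(f"Do not include {exclude}.")
    return " ".join(parts)


prompt = build_research_prompt(
    jurisdiction="New York",
    question=("what are the current standards for enforcing a non-compete "
              "agreement against a former employee in the financial "
              "services industry?"),
    time_frame="cases from the past five years",
    courts="New York state courts and the Second Circuit",
    focus="the key factors courts consider when determining reasonableness",
    exclude="statutory analysis",
)
print(prompt)
```

Filling in the same fields every time makes it harder to forget a constraint — a prompt with no jurisdiction or time frame simply won't build cleanly into the template.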

Step 3: Cross-Reference Every Citation

This is the non-negotiable step. Every case, statute, regulation, and secondary source cited in an AI-generated research output must be independently verified before it appears in any court filing, client memorandum, or other deliverable.

The verification workflow:

For case citations: Check that the case exists in Westlaw, LexisNexis, or another authoritative legal database. Verify the case name, citation, court, and date. Read the relevant portions of the opinion to confirm that the case actually stands for the proposition the AI claims it supports. Then Shepardise or KeyCite the case to confirm it hasn’t been overruled, reversed, or limited.

For statutory citations: Verify the statute exists and is current. Check for recent amendments. Confirm that the specific section or subsection cited is relevant to the proposition being supported.

For secondary sources: Verify the source exists, confirm the author and publication, and check that the cited passage supports the stated proposition.

Red flags to watch for:

  • Case names that don’t appear in any database. This is the hallmark of a hallucinated citation. If you can’t find the case, it almost certainly doesn’t exist — do not assume it’s simply not in your database.
  • Citations that are “almost right.” AI sometimes generates citations that closely resemble real cases but with incorrect dates, volume numbers, or reporter references. These near-misses are harder to catch than complete fabrications.
  • Cases that exist but don’t say what the AI claims. The AI may correctly identify a real case but mischaracterise its holding. Always read the relevant sections yourself.
  • Outdated authority presented as current. The AI may cite cases that have been overruled or statutes that have been amended. Shepardising/KeyCiting catches this.
  • Jurisdiction mismatch. The AI may cite authority from a jurisdiction that isn’t binding (or even persuasive) for your matter. Verify that the cited authority has the appropriate jurisdictional weight.

How much verification is enough? ABA Formal Opinion 512 acknowledges that the degree of verification should be proportional to the task. For court filings, every citation requires individual verification — there are no shortcuts. For internal research memos used as working documents, a sample-based verification approach (checking a representative subset) is reasonable, provided the memo is clearly marked as preliminary and will be fully verified before any citation is used in a filing.

Step 4: Document Your AI Usage

Creating an audit trail of your AI use serves multiple purposes: it demonstrates compliance with ethical obligations, supports your response to any future questions about your research process, and helps your firm track which AI tools are being used and how.

What to document:

  • Which AI tool was used (name and version)
  • The prompts or queries submitted
  • The date and time of the research
  • A summary of the AI’s output
  • What verification was performed and the results
  • Any corrections or modifications made to the AI output

How to document it: The simplest approach is a standardised form or template appended to your research memo or saved in the matter file. Some firms maintain a dedicated AI usage log. Some practice management platforms (including Clio) include built-in logging that captures AI interactions automatically.
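For firms without built-in logging, a minimal machine-readable log is easy to keep. The following sketch mirrors the checklist fields above; the function name and file location are hypothetical, and each record is appended as one JSON object per line:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location inside the matter file.
LOG_FILE = Path("matter_files/ai_usage_log.jsonl")


def log_ai_usage(tool, version, prompt, output_summary,
                 verification, modifications=""):
    """Append one AI-usage record covering the checklist fields above."""
    entry = {
        "tool": tool,
        "version": version,
        "prompt": prompt,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "output_summary": output_summary,
        "verification": verification,
        "modifications": modifications,
    }
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only, one-record-per-line format keeps the audit trail chronological and makes it trivial to summarise for a court disclosure later.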

Disclosure to courts: Check the standing orders of the court where you’re filing. A growing number of federal and state judges require affirmative disclosure of AI use in legal filings. Some require a statement that all citations have been independently verified. Compliance is straightforward if you’ve been documenting your AI use throughout the research process — the disclosure simply summarises what you’ve already recorded.

Step 5: Review and Integrate Into Your Brief

The final step is integrating AI-generated research into your actual work product. The critical principle: AI output is a starting point, never a finished product.

Use AI research as a foundation, not a framework. The AI may have identified relevant authorities and summarised key principles, but the legal analysis — how those authorities apply to your client’s specific facts, what arguments they support, what counterarguments they create — is your work as a lawyer. AI can accelerate the research phase; it cannot replace the analytical phase.

Rewrite rather than copy. Even when the AI’s summary of a case or legal principle is accurate, rewriting it in your own words ensures you understand the authority well enough to rely on it. If you can’t rewrite a proposition without referring back to the AI’s phrasing, you probably don’t understand it well enough to present it to a court.

Structure your own argument. AI-generated research memos often present information in a logical order, but that order may not align with the most persuasive structure for your specific argument. Reorganise the authorities to build your argument rather than following the AI’s default presentation.

Final pre-filing check. Before any document leaves your desk, confirm: every citation has been independently verified, every legal proposition is supported by the cited authority, the analysis reflects your own professional judgement, and any AI disclosure requirements have been satisfied.

Common Pitfalls

Over-reliance on a single AI tool. No AI tool is infallible, even the legal-specific ones. For critical research (dispositive motions, appellate briefs), consider running the same query through two different tools or supplementing AI research with traditional manual research to catch gaps.

Jurisdiction confusion. AI tools trained primarily on US federal law may apply federal standards when your matter is governed by state law, or vice versa. Always specify your jurisdiction and verify that cited authorities are jurisdictionally appropriate.

Outdated training data. AI models have knowledge cutoffs. Even legal-specific tools may not reflect very recent decisions, legislative changes, or regulatory updates. For time-sensitive matters, supplement AI research with a manual check of recent developments.

Assuming AI understands context. AI tools process text, not meaning. They may miss the factual nuances that make one line of authority more relevant than another to your specific case. The lawyer’s contextual understanding of the case remains essential.

Frequently Asked Questions

Is it permissible to use AI for legal research?

There is no blanket prohibition on AI-assisted research in any US jurisdiction — the ethical obligations are about how you use it, not whether you use it. However, some courts have specific standing orders that impose additional requirements (such as mandatory disclosure) for AI-assisted filings. Administrative proceedings, arbitrations, and international tribunals may have their own rules. Check the applicable rules and orders for every forum where you appear.

What if the AI identifies a case I would have missed?

This is precisely the value proposition of AI-assisted research — it can surface relevant authorities that traditional keyword-based research might miss. When this happens, treat the AI’s find exactly like any other research lead: verify the citation, read the case, confirm its relevance and current status, and then incorporate it into your analysis with full confidence. The fact that AI identified the case doesn’t make it less (or more) authoritative — it’s the case itself that matters.

Should I tell opposing counsel I used AI?

There is currently no general obligation to disclose AI use to opposing counsel (as distinct from disclosure to the court, which may be required by standing order). However, some firms are voluntarily disclosing AI use as a professional courtesy and to build trust. The trend is toward greater transparency. If you’re uncertain, the safer course is disclosure — it eliminates any suggestion of concealment and demonstrates confidence in your verification process.
