Legal

Harrison & Cole LLP: From Sanctions Order to Documented AI Governance

A 28-attorney regional firm faced a fabricated-citation sanctions order after a junior associate's ChatGPT-drafted brief made it to filing. The Sprint that followed produced the verification protocol and supervision framework the firm should have had before the incident.

Outcome: $270K direct cost, Sprint recovery, malpractice premium contained at next renewal

The firm

Harrison & Cole LLP is a 28-attorney regional law firm based in a Southeastern metro, with $12 million in annual revenue. Forty years of practice spans corporate transactions, real estate, estate planning, and a small litigation group. The firm has six equity partners, four non-equity partners, eighteen associates, and a support staff of fourteen — a total headcount of 42.

Harrison & Cole had built a reputation as a technology-forward firm. They were early on cloud document management, early on e-signature workflows, and during 2023–24 the managing partner had been openly enthusiastic about generative AI as a productivity tool for the firm.

The incident

In late 2024, a second-year associate was working on an opposition motion in a commercial litigation matter. The deadline was Friday. The senior associate originally assigned to the matter had a family emergency and rolled off Wednesday afternoon. The junior associate inherited the work with 48 hours to filing.

She used ChatGPT to research case law and draft sections of the brief. The model produced well-written legal arguments, supported by what appeared to be on-point case citations from the controlling circuit. The associate incorporated the suggestions and sent the brief to the supervising partner Thursday evening.

The supervising partner, also under deadline pressure on a different matter, focused on the strategic argument structure rather than verifying each citation. The brief was filed Friday morning.

Opposing counsel checked the citations as a matter of course. Three of the cases didn't exist. The names looked authentic. The reporter citations were syntactically valid. The procedural posture descriptions were plausible. The cases themselves were fabricated.

Opposing counsel filed a motion for sanctions.

The cascading consequences

The judge's order was not subtle:

"Counsel's submission of fabricated case citations represents a fundamental breach of the obligations attorneys owe to this court."

Sanctions imposed: $15,000. The ruling was published. Legal media picked it up within a week.

What followed unfolded over the next nine months:

  • The state bar opened an ethics investigation into the supervising partner. Defense costs ran to $45,000 over six months before the bar issued a private reprimand.
  • The client whose matter was damaged retained new counsel and sued Harrison & Cole for malpractice. The firm's malpractice carrier engaged defense counsel; settlement landed at $125,000.
  • At renewal, the malpractice carrier increased premiums 85% — adding roughly $35,000 in annual cost — and added an AI Practice Rider requiring documented governance, verification protocols, and annual training records.
  • The supervising partner left the firm six months after the incident, citing a combination of bar investigation stress, client relationship damage, and internal recriminations.
  • The associate who drafted the brief left two months later.

Direct costs through nine months: $270,000+. Replacement costs for the partner (recruiting, transition, lost book of business) added an estimated $50,000 on top.

Why this happened

The firm's internal review surfaced four root causes:

  • No AI policy. The firm had no written guidance on AI usage. Associates made individual decisions about whether and how to use AI tools.
  • No verification protocol. Brief-checking was assumed to occur at the supervisor level, but there was no documented process for what verification looked like — particularly for AI-assisted work.
  • A training gap on AI behavior. Neither the associate nor the supervising partner understood AI hallucination as a phenomenon. They didn't know that LLMs can generate convincing fabricated citations because that wasn't part of any firm training they'd received.
  • Deadline culture. Legal work runs on crushing deadlines. The culture rewarded delivery; it had no mechanism for "the deadline is the problem" as an acceptable risk signal.

The decision

Once the immediate fallout stabilized, the managing partner met with the firm's COO, outside counsel, and the chair of the litigation group. The malpractice carrier had already signaled what the next renewal would require. The state bar matter was wrapping up. The malpractice suit had settled. But the firm now faced a recurring annual question: how do we satisfy the rider's documentation requirements every year, in a way the carrier will accept?

Two paths:

Path 1: Treat it as a one-time policy update. Write an AI policy, distribute it, hope nothing else happens.

Path 2: Build governance as an operational discipline. Get a documented program in place with a yearly cadence.

The managing partner picked Path 2. Two reasons:

  1. The malpractice carrier had been explicit: at next renewal, they would expect documented evidence of the rider's requirements. Not "we have a policy." Documented training records, documented verification logs, documented incident reporting.
  2. The supervising partner's departure had cost the firm a senior litigation lead. They could not afford another partner-level loss to a governance incident.

A peer at another regional firm referred them to Shadow AI Labs. The firm engaged for the AI Risk Sprint — $5,500, two weeks, fixed scope.

Inside the Sprint

Deliverable 01 — AI Tool Discovery (Days 1–4)

Browser telemetry across 42 employees, anonymous survey, document management system audit, and a manual review of recent filings for AI-assisted-language signatures. Findings:

  • 23 AI tools in active use across the firm — including the four approved tools the partners had identified in their post-incident scramble
  • Four consumer LLM accounts used by associates and partners, including one paid Claude account funded by a partner's personal subscription used for client work
  • Two AI-powered legal research add-ons to the firm's existing Westlaw and Lexis subscriptions — features that had been silently activated by the vendors in early 2024 with no formal contract update
  • Three Chrome extensions submitting drafted text to external AI APIs without any vendor contract
  • One associate running a personal "custom GPT" trained on the firm's internal brief style — created with the best intentions and exposing internal work product to OpenAI in the process

The discovery established the firm wasn't dealing with a single rogue use case. AI use was diffuse and informal across the practice.
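The cross-source discovery approach can be sketched in a few lines: tools surfaced independently by more than one source (telemetry, survey, audit, filings review) are higher-confidence findings. The source and tool names below are hypothetical illustrations, not the firm's actual inventory.

```python
# Illustrative sketch of multi-source AI tool discovery: merge sightings
# from each discovery channel into one inventory, then flag tools
# corroborated by more than one independent source.
from collections import defaultdict

def build_inventory(sightings):
    """sightings: iterable of (source, tool_name) pairs."""
    inventory = defaultdict(set)
    for source, tool in sightings:
        inventory[tool].add(source)
    return dict(inventory)

# Hypothetical sightings, one per (channel, tool) observation.
sightings = [
    ("browser_telemetry", "consumer_llm"),
    ("anonymous_survey", "consumer_llm"),
    ("dms_audit", "grammar_extension"),
    ("filings_review", "consumer_llm"),
]
inventory = build_inventory(sightings)
# Tools seen by multiple channels are the strongest findings.
corroborated = sorted(t for t, srcs in inventory.items() if len(srcs) > 1)
```

A single-channel sighting (the grammar extension above) still goes into the inventory; corroboration only affects how confidently it is reported.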

Deliverable 02 — Risk Classification Matrix (Days 3–6)

Each tool was mapped against NIST AI RMF risk severity and a custom criticality scale adapted for the legal practice context (which includes ethical risk to the supervising attorney as a dimension beyond standard data-classification frameworks):

| Severity | Count | Pattern |
|----------|-------|---------|
| Critical | 5 | Consumer AI used in document drafting or research with client work product |
| High | 7 | Approval-eligible tools with documented gaps (no enterprise contract, BAA, or DPA on file) |
| Medium | 8 | AI features in existing legal-vendor tools requiring contract updates |
| Low | 3 | Productivity tools with limited client-data exposure |

The Critical-severity classification included the entire pattern that had led to the citation incident — consumer LLM use for legal research with no verification protocol — across multiple associates.
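As a quick arithmetic check, the severity tallies account for the full 23-tool inventory from Deliverable 01 — every discovered tool received a classification:

```python
# Severity tallies from the risk classification matrix.
severity_counts = {"Critical": 5, "High": 7, "Medium": 8, "Low": 3}
total_classified = sum(severity_counts.values())
assert total_classified == 23  # matches the 23-tool discovery inventory
```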

Deliverable 03 — Malpractice Carrier Rider Gap Analysis (Days 5–9)

Side-by-side comparison of the new AI Practice Rider against the firm's documented controls. Five material gaps, ranked by carrier weight:

  1. No written AI Acceptable Use Policy (rider required)
  2. No verification protocol for AI-assisted research or drafting (rider required, with specific reference to citation verification post-Mata)
  3. No annual training records (rider required, three-year retention)
  4. No documented supervision framework for AI-assisted work (rider added in 2024)
  5. No incident reporting and remediation log (rider added in 2024)

All five were addressable before the next renewal — twelve months out — if remediation started within the quarter.

Deliverable 04 — AUP & Verification Protocol (Days 6–10)

The AUP was drafted with specific provisions for legal practice. Key elements:

  • Approved tools list with explicit ethical-risk classifications: legal research tools (Westlaw AI, Lexis+ AI — both with citation verification built into the workflow), document drafting tools (the firm's enterprise Microsoft 365 Copilot license, with documented exclusions for drafts intended for filing without supervisor verification), and productivity tools (Grammarly Business, approved for correspondence and internal documents only).
  • Verification protocol as a separately-published procedure document:
    • Every citation in an AI-assisted draft must be checked in Westlaw or Lexis against the actual reporter; check-marks logged in the brief workflow tool
    • Every legal argument structure derived from AI output must be confirmed against at least one authoritative secondary source
    • Every brief that has any AI-assisted component must include a supervisor's signed attestation that the verification protocol was followed
  • No-blame reporting standard: any associate who suspected they'd relied on unverified AI output in a filed document could report it within 24 hours without disciplinary consequence. The standard was explicit in writing and carried partner-level endorsement.
  • Supervision framework requiring partner-level review of AI-assisted work product, with documented attestation requirements for any filing.
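The verification protocol lends itself to a structured log entry. A minimal sketch, assuming hypothetical field names rather than the firm's actual brief-workflow schema:

```python
# Illustrative verification-log entry for an AI-assisted filing.
# Field names are assumptions for the sketch, not a real product schema.
from dataclasses import dataclass, field

@dataclass
class CitationCheck:
    citation: str          # reporter citation as it appears in the draft
    verified_in: str       # "Westlaw" or "Lexis", per the protocol
    found_in_reporter: bool

@dataclass
class VerificationLogEntry:
    matter: str
    filing: str
    checked_by: str
    supervisor_attestation: bool  # signed attestation on file
    checks: list = field(default_factory=list)

    def ready_to_file(self) -> bool:
        # Fail closed: every citation must be checked against the actual
        # reporter and the supervisor attestation must be on file.
        return (
            self.supervisor_attestation
            and bool(self.checks)
            and all(c.found_in_reporter for c in self.checks)
        )

entry = VerificationLogEntry(
    matter="hypothetical-matter-001",
    filing="opposition_motion.pdf",
    checked_by="associate",
    supervisor_attestation=True,
    checks=[
        CitationCheck("123 F.3d 456", "Westlaw", True),
        CitationCheck("789 F.2d 101", "Lexis", False),  # not found: fabricated
    ],
)
```

The design point is that `ready_to_file()` fails closed: a single unverifiable citation, or a missing attestation, blocks the filing rather than relying on supervisor judgment under deadline pressure.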

Deliverable 05 — 12-Month Remediation Roadmap (Days 9–13)

Sequenced action plan with owners, effort estimates, and acceptance criteria. Highlights:

  • Week 1: Distribute AUP firm-wide with required acknowledgment. Disable the Critical-severity tools at the network and endpoint level. Stand up the verification log infrastructure in the existing brief workflow tool.
  • Month 1: Module 1 training (60 minutes, all attorneys and staff) — frames the AUP around the carrier rider and the post-Mata legal landscape. Initial verification protocol audit.
  • Month 3: Module 2 (role-segmented: associates, partners, support staff) — operational training on the verification protocol with worked examples drawn from the firm's actual practice areas.
  • Month 6: First quarterly governance committee meeting. Module 3 training delivered. AI Practice Rider mid-year evidence review submitted to carrier.
  • Month 9: Tabletop exercise of the incident response procedure. Pre-flight package assembled for the upcoming renewal.
  • Month 12: Renewal submission to malpractice carrier with full evidence package.

Deliverable 06 — Executive Readout (Day 14)

A 60-minute readout with the managing partner, COO, chair of the litigation group, and the firm's outside ethics counsel. The 22-page PDF report — including the 23-tool inventory, line-by-line rider mapping, and acceptance criteria for each remediation step — was delivered same-day. Three decisions were made during the meeting:

  1. Appoint a Compliance Officer at the partner level with documented responsibility for the AI governance program (the firm had no formal compliance officer; the COO had been handling compliance as part of an unrelated portfolio).
  2. Commit to the rider's annual renewal cadence with a documented Q3 pre-flight process starting in 2025.
  3. Authorize the AI Governance Implementation engagement to execute the 12-month roadmap with hands-on support through the renewal deadline.

The follow-on

Harrison & Cole engaged Shadow AI Labs for the AI Governance Implementation — a structured execution of the 12-month roadmap, $32,000. By the renewal deadline, all rider requirements were satisfied with documented evidence. The malpractice premium increase at that renewal came in at 8%, versus the 85% increase imposed the year before, and the carrier removed two of the rider's special conditions based on the documentation submitted.

A Fractional retainer ($5,500/month) was authorized for the second year — quarterly governance committee chairing, the Q3 renewal pre-flight, and on-call advisory for any in-flight ethics matters involving AI-assisted work product.

The 12-month outcome

  • 156 AI Acceptable Use Policy acknowledgments collected and retained (full firm including new hires)
  • 84 verification-protocol log entries reviewed and audited across the litigation group's filings
  • Two near-misses self-reported via the incident hotline during the year — both reviewed, both categorized as policy-compliant after investigation, both documented in the incident log
  • Zero new sanctions, zero ethics complaints, zero malpractice notices related to AI-assisted work

The malpractice renewal package was submitted three weeks ahead of deadline. The carrier's renewal communication included a one-paragraph acknowledgment that Harrison & Cole's documentation represented "an example of the maturity level the AI Practice Rider was intended to incentivize."

The numbers

| Category | Cost (cumulative) |
|----------|-------------------|
| Original incident (sanctions, bar defense, malpractice settlement, premium increase Year 1) | $270,000 |
| Partner replacement (recruiting + transition) | $50,000 |
| AI Risk Sprint | $5,500 |
| AI Governance Implementation | $32,000 |
| Microsoft 365 Copilot enterprise (42 seats × 12 mo) | $36,000 |
| Westlaw AI / Lexis+ AI seat upgrades | $18,000 |
| Verification log infrastructure | $4,800 |
| Compliance Officer time (0.25 FTE partner-level) | ~$95,000 |
| Training time (42 × 90 min Year 1 + 30 min Year 2) | ~$22,000 |
| Fractional retainer (Year 2) | $66,000 |
| Two-year total | ~$599,300 |
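Every figure below comes from the cost table (approximate entries taken as stated); summing the line items reproduces the two-year total.

```python
# Line items from the two-year cost table, in dollars.
costs = {
    "original_incident": 270_000,
    "partner_replacement": 50_000,
    "ai_risk_sprint": 5_500,
    "governance_implementation": 32_000,
    "copilot_enterprise_seats": 36_000,
    "research_seat_upgrades": 18_000,
    "verification_log_infrastructure": 4_800,
    "compliance_officer_time": 95_000,
    "training_time": 22_000,
    "fractional_retainer_year2": 66_000,
}
two_year_total = sum(costs.values())  # -> 599_300, matching the table
```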

The counterfactual — another AI-driven incident with documented prior knowledge — would have meant carrier coverage denial under the rider's repeat-incident clause. A single uninsured malpractice judgment in a commercial litigation matter routinely exceeds the firm's annual revenue.

The managing partner's reflection

"The incident cost us a partner, an associate, and a quarter-million dollars before we engaged anyone. The Sprint cost us five thousand five hundred dollars and showed us, in a written report we could hand to our carrier, why the incident had happened and what we needed to change. The Implementation engagement gave us a 12-month plan with someone who'd done it before. We had spent forty years building a practice. We almost burned it down on a forty-eight-hour deadline because we didn't have a verification protocol. We have one now."

What we'd tell another law firm

1. The malpractice carrier is the forcing function, not the bar

State bar oversight is reactive — investigation follows incident. The malpractice carrier's AI Practice Rider is prospective — it dictates documentation requirements every year, with renewal as the deadline. The right time to engage is when you receive the rider language for the upcoming renewal, not when something goes wrong.

2. Verification protocol is non-negotiable

Every AI-assisted filing needs a documented verification step. Not "we trust our supervisors" — a written procedure with checked log entries. The cost of putting this in place is trivial. The cost of not having it, post-incident, is unbounded.

3. Supervision frameworks have to adapt

Traditional supervision assumes the work product comes from a human associate with verifiable training and known limitations. AI output requires different verification because the failure modes are different (hallucination, not negligence). Firms that haven't updated their supervision framework are operating on an assumption that no longer holds.

4. The Sprint produces what the carrier asks for

The deliverables are not novel — every law firm's compliance program has versions of these documents. What the Sprint produces is the current version, with the carrier's current rider language as the design target, in two weeks. For most firms, that timeline beats the alternative of doing it internally over six months while the calendar marches toward renewal.


Is your firm carrying the documentation the rider expects?

If your malpractice carrier added an AI Practice Rider to your last renewal — or signaled they'll add one to the next — the Sprint is the fastest path to documented governance that the rider will accept.

Take our free AI Risk Assessment to see where your firm sits relative to current rider language — or book a Discovery call to talk through your specific renewal timeline.


This case study is a composite based on real-world incidents — including the widely-publicized Mata v. Avianca matter and similar fact patterns from 2023–24. Firm name, attorney names, and specific procedural details have been modified to protect confidentiality while preserving the educational value of the scenario.

