
Heartland Tax & Advisory: From Tax-Season Crisis to Carrier-Ready

A 60-employee CPA firm caught a staff accountant uploading client tax packages to ChatGPT mid-season. The two-week Sprint that followed got them carrier-ready by December renewal.

Outcome: $52K incident response, Sprint recovery, premium held flat at renewal
Editorial photograph: stacks of tax-code volumes and leather ledgers on dark wood shelving, lit by a brass banker's lamp

The firm

Heartland Tax & Advisory is a 60-employee regional CPA firm with $5.8 million in annual revenue. About 70% of that revenue is tied to tax season and extensions: individual returns, small-business preparation, and a growing audit practice serving non-profits and closely-held companies across the upper Midwest.

Heartland has six partners, three managers, fourteen senior accountants, twenty-three staff accountants, and a rotating bench of seasonal preparers who come on between January and April. The firm has been operating since 1987.

The incident

During the 2024 tax season — the second week of March, peak crunch — one of the staff accountants was working through a complex return for a small-business client. The package was thick: Form 1040, three Schedule Ks from pass-through entities, a stack of W-2s, brokerage statements, and supporting documentation for a partial sale of an interest in an LLC.

She was tired. The pile was tall. She uploaded the whole package to ChatGPT and asked it to "pull out the relevant numbers and put them in a summary I can use."

A partner walking past her desk caught a fragment of what she said to a colleague: "Look, you just upload everything, and it pulls out all the relevant numbers for you. It's amazing."

He recognized it immediately for what it was.

The 48-hour scramble

Heartland's response was fast and disciplined. Within 48 hours, the firm:

  • Halted all unauthorized AI use across the office
  • Engaged the firm's compliance attorney for an emergency consultation ($2,800)
  • Surveyed staff to determine the scope (7 employees had been using consumer AI tools across approximately 150 client engagements over six weeks)
  • Notified the cyber insurance carrier proactively
  • Sent advisory letters to 84 potentially affected clients ($3,200 in letter preparation, mailing, and follow-up)
  • Documented the incident timeline for the cyber rider claim file

Three clients terminated the relationship over the next 30 days, citing data-protection concerns; their combined annual fees totaled $14,000. Heartland's cyber carrier issued a formal advisory letter: no coverage denial, but documented prior knowledge that would factor into the next renewal.

Direct response costs: $52,000.

Why this happened

Once the immediate scramble settled, the partners did a quiet root-cause review. The findings weren't surprising in retrospect:

  • No AI policy. Staff had no written guidance about which tools were permitted, which were forbidden, or how to think about the difference.
  • No discovery process for shadow IT. The firm's IT manager — a part-time role supported by an external MSP — had visibility into approved software but no telemetry on what staff were actually using day-to-day.
  • Tax-season pressure made shortcuts feel justified. Staff weren't being reckless. They were trying to meet deadlines that made every minute matter. ChatGPT looked like a productivity tool, not a vendor agreement.
  • The cyber rider had silently added an AI Security clause at the prior December renewal. No one at Heartland had read it. The clause specifically required documented AI governance, vendor BAAs for any tool processing client financial data, and annual employee training, none of which the firm had.

The managing partner summarized it: "We had built the firm around quality control for tax positions. We had no quality control for the tools we used to do the work."

The decision

After cleanup, the managing partner faced two paths.

Path 1: DIY governance. Write a policy from a template. Do internal training. Hope it stuck.

Path 2: Bring in a specialist. Have someone who'd done this before run the engagement and produce documented evidence the carrier could see.

She picked Path 2 for three reasons:

  1. Tax season 2025 was nine months out. No time for trial-and-error governance during peak.
  2. The carrier wanted documented remediation before the December 2024 renewal — and "documented" meant a written record, not "we updated the employee handbook."
  3. "We're CPAs, not security architects. The most expensive thing in this firm is partner time — and partner time spent learning AI governance from scratch is partner time we're not spending serving clients."

Her cyber insurance broker referred her to Shadow AI Labs. She engaged the firm for the AI Risk Sprint — a fixed-scope, two-week, $5,500 engagement.

Inside the Sprint

Deliverable 01 — AI Tool Discovery (Days 1–4)

Browser telemetry deployment, an anonymous employee survey, procurement audit, and SSO log review. The discovery surfaced 31 AI tools in active use across the firm — well beyond the 5 the partners had known about. Findings included:

  • 3 Chrome extensions submitting data to AI APIs that weren't under any vendor contract (these had been installed by individual staff to "summarize long emails")
  • 2 SaaS tools the firm already paid for whose AI features had been silently activated by the vendor earlier in 2024; these features processed firm data through external AI infrastructure
  • 4 consumer AI accounts (ChatGPT, Claude, Gemini) used by senior staff with documented exposure to client data
  • 1 internal "custom GPT" a senior manager had built using a personal OpenAI account, which had ingested anonymized client examples that turned out to be less anonymized than he thought

The discovery was the moment the partners realized "this isn't a one-person problem."
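
The log-review portion of a discovery like this can be sketched in a few lines. The domain list, the "user domain" export format, and the `inventory_from_log` function below are illustrative assumptions for this write-up, not Shadow AI Labs' actual tooling:

```python
# Hypothetical first pass at an AI tool inventory: cross-reference
# outbound domains from a proxy/SSO log export against known AI endpoints.
# Domain list and log format are assumptions for illustration only.

KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "api.openai.com": "OpenAI API (extensions, custom GPTs)",
}

def inventory_from_log(log_lines):
    """Map each matched AI tool to the set of users who reached it."""
    hits = {}
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed "user domain" export format
        tool = KNOWN_AI_DOMAINS.get(domain)
        if tool:
            hits.setdefault(tool, set()).add(user)
    return hits

sample = [
    "jdoe chat.openai.com",
    "jdoe claude.ai",
    "asmith chat.openai.com",
    "bjones intranet.heartland.example",  # non-AI traffic is ignored
]
print(inventory_from_log(sample))
```

A real engagement layers endpoint telemetry and anonymous surveys on top of log review, since browser extensions and desktop apps don't always show up in proxy logs.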

Deliverable 02 — Risk Classification Matrix (Days 3–6)

Each of the 31 tools was mapped against NIST AI RMF risk severity and business criticality. The matrix:

Severity | Count | Pattern
Critical | 4 | Consumer AI accessed with client tax data
High | 6 | BAA-or-equivalent gaps in tools handling billable data
Medium | 12 | Vendor AI features active without a documented contract update
Low | 9 | Sanctioned-eligible tools with limited PHI/PII exposure

Critical-severity tools were flagged for immediate removal. High-severity tools went into a remediation queue with target dates.
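
A matrix like this boils down to a handful of explicit rules. The sketch below loosely mirrors the four severity patterns described above; the field names and rule order are assumptions for illustration, not the NIST AI RMF mapping itself:

```python
# Illustrative severity rules mirroring the four patterns in the matrix.
# Field names are hypothetical; a real NIST AI RMF mapping is far richer.

def classify(tool):
    if tool["consumer_account"] and tool["client_data"]:
        return "Critical"  # consumer AI touched client tax data
    if tool["client_data"] and not tool["baa_or_equiv"]:
        return "High"      # BAA-or-equivalent gap on billable data
    if tool["vendor_ai_feature"] and not tool["contract_updated"]:
        return "Medium"    # vendor AI feature on, no documented contract update
    return "Low"

personal_chatgpt = {"consumer_account": True, "client_data": True,
                    "baa_or_equiv": False, "vendor_ai_feature": False,
                    "contract_updated": False}
saas_ai_feature = {"consumer_account": False, "client_data": False,
                   "baa_or_equiv": False, "vendor_ai_feature": True,
                   "contract_updated": False}

print(classify(personal_chatgpt))  # Critical
print(classify(saas_ai_feature))   # Medium
```

Encoding the rules this explicitly is what makes the matrix auditable: the carrier can see exactly why a tool landed in a given tier.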

Deliverable 03 — Cyber Insurance Rider Gap Analysis (Days 5–9)

Side-by-side comparison of the carrier's AI Security Rider against Heartland's documented controls. Two material gaps:

  1. No documented AI tool inventory. The rider required an annually refreshed list of all AI tools in use, classified by data type processed. Heartland had nothing on file.
  2. No formal employee training program. The rider required annual, role-segmented training with completion records retained for three years. Heartland's prior "training" had been a one-page memo circulated during the post-incident scramble.

Both were addressable before December renewal — but only if remediation started in Q3 and the Implementation phase was sequenced correctly.

Deliverable 04 — AUP & Training Outline (Days 6–10)

The AUP was tailored to Heartland's operational reality rather than a generic professional-services template. Specific provisions reflected:

  • The fundamental difference between year-round office work (where Microsoft Copilot E5 is sanctioned for general productivity, with normal data-classification rules) and tax-season operations (where temporary preparers are onboarded weekly, where time pressure is structurally different, and where the data being handled has the highest sensitivity classification in the firm).
  • A no-blame reporting standard: any employee who suspected they'd inadvertently transmitted client data to an unsanctioned tool could report it within 24 hours without disciplinary consequence. The standard was explicit in writing.
  • A vendor procurement review checklist that any new AI tool — including new AI features in existing tools — had to clear before staff use.

Training was outlined as four modules totaling 70 minutes, LMS-ready, role-segmented for partners, year-round staff, seasonal preparers, and administrative staff.

Deliverable 05 — 90-Day Remediation Roadmap (Days 9–13)

Sequenced action plan with owners, effort estimates, and acceptance criteria for each step. Highlights:

  • Week 1: Kill the 4 Critical tools at the network and endpoint level (Cisco Umbrella DLP, ~$3,600/year for 60 seats). Distribute AUP with required acknowledgment.
  • Day 30: Roll out Microsoft Copilot E5 firm-wide as the sanctioned alternative. Disable AI features in the two SaaS tools where the firm doesn't need them; document where the firm chooses to keep them on.
  • Day 60: Complete Modules 1 and 2 of training. Submit AUP and training records to the cyber carrier's renewal questionnaire.
  • Day 90: Charter a governance committee with quarterly cadence. Module 3 training delivered. First quarterly review held with documented minutes.

Deliverable 06 — Executive Readout (Day 14)

A 60-minute readout with the managing partner, the IT manager, outside counsel, and the partner who had caught the original incident. The PDF report (22 pages, with full appendices for the 31-tool inventory and a line-by-line mapping to the carrier rider) was delivered same-day. Three decisions were made during the meeting:

  1. Appoint a Security Officer — a role that had not previously existed at the firm. A senior manager took it on as a 0.3 FTE assignment with corresponding compensation adjustment.
  2. Commit to a documented pre-flight cadence for cyber renewal each year, starting in September.
  3. Authorize the AI Governance Implementation engagement to execute the remediation roadmap with hands-on support through the December renewal deadline.

The follow-on

Heartland engaged Shadow AI Labs for the AI Governance Implementation — a six-week structured execution of the roadmap, $28,000. By the December 2024 carrier renewal, all rider requirements were satisfied with documented evidence. The renewal premium held flat versus the 30–40% increase Heartland's broker had predicted given the prior incident.

The Implementation engagement also surfaced two scope items that Heartland chose to bring in-house rather than outsource: ongoing quarterly governance committee meetings (run by the new Security Officer) and the annual training refresh cycle (run by HR using the materials Shadow AI Labs had drafted).

Tax season 2025

Ran clean.

  • Microsoft Copilot E5 deployed across 60 seats and a rotating bench of seasonal preparers
  • AUP acknowledgments at 96% (the remaining 4% were temporary preparers whose engagements ended before their training deadlines; HR adjusted seasonal onboarding to put training in the first three days)
  • Three near-misses self-reported via the incident hotline during the season — all categorized within four business hours, none triggering breach notification thresholds
  • The Security Officer presented at the Q1 governance committee meeting with documented metrics

The managing partner authorized a renewed Fractional retainer with Shadow AI Labs for the 2025 renewal cycle — quarterly check-ins, the September pre-flight, and on-call advisory for any in-flight incidents.

The numbers

Category | Year 1 cost
Incident response (the original near-miss) | $52,000
AI Risk Sprint | $5,500
AI Governance Implementation | $28,000
Cisco Umbrella DLP (60 seats × 12 mo) | $3,600
Microsoft Copilot E5 incremental seats | $24,000
Staff training time (60 × 70 min) | ~$14,000
Security Officer time (0.3 FTE) | ~$45,000
Total Year 1 | ~$172,000
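
As a quick sanity check, the line items above do sum to the reported total (the two estimated rows are taken at their quoted values):

```python
# Line items from the Year 1 cost table; the last two are estimates (~).
year_one = {
    "incident_response": 52_000,
    "ai_risk_sprint": 5_500,
    "governance_implementation": 28_000,
    "cisco_umbrella_dlp": 3_600,
    "copilot_incremental_seats": 24_000,
    "staff_training_time": 14_000,
    "security_officer_time": 45_000,
}
print(sum(year_one.values()))  # 172100, consistent with the ~$172,000 total
```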

The counterfactual — doing nothing — would have shown up at December renewal as a 30–40% premium increase ($24,000–$32,000 of recurring annual cost), and a second incident with documented prior knowledge would almost certainly have hit a coverage denial under the rider's repeat-incident clause.

The managing partner's reflection

"I had spent fifteen years thinking compliance was paperwork — something you did to satisfy outside parties. The Sprint showed me it's an operational discipline. We didn't just buy a policy. We got a working system, a quarterly cadence to keep it working, and a written record we can hand to the carrier every December. Worth every dollar of the engagement."

What we'd tell another CPA firm

1. The forcing function is the December renewal, not the incident

Most firms wait until an incident happens. By that point, the cyber carrier has already documented the gap and the next renewal is going to hurt regardless of what you do afterward. The right time to engage on AI governance is in Q3 — before the renewal questionnaire arrives, while there's still time to remediate gaps with the carrier-required documentation.

2. Seasonal staffing is not a side concern

Tax season operations are fundamentally different from year-round office work, and most AI governance templates ignore that. The AUP needs to address temporary preparers explicitly: how onboarding handles training, what the data-handling rules are when a seasonal hire is on a one-month engagement, who owns the offboarding process when they roll off.

3. The discovery is the eye-opener

Most managing partners think they know what tools their staff are using. Most are wrong by an order of magnitude. The discovery deliverable is where the conversation shifts from "we should probably look at this" to "we have actual exposure right now."

4. Pre-flight, not post-mortem

The Sprint produces the same artifacts (inventory, risk classification, rider gap analysis, AUP, training plan, roadmap, readout) whether you run it before an incident or after. The difference is whether you get to hand them to the carrier as evidence of mature governance — or as evidence of remediation after a documented gap. The first is much cheaper than the second.


Ready to be carrier-ready by your next renewal?

If your cyber insurance renewal is in the next six months and your AI governance documentation isn't where the rider expects it to be, the Sprint is the fastest path to documented evidence.

Take our free AI Risk Assessment to see where your firm sits relative to current carrier rider language — or book a Discovery call to talk through your specific renewal timeline.


This case study is a composite based on real-world engagement patterns. Firm name and specific operational details have been modified to protect confidentiality while preserving the educational value of the scenario.

