B2B SaaS

Threadline Analytics: Closing the SOC 2 Type II AI Gap Before the Auditor Did

A 65-person B2B SaaS company preparing for SOC 2 Type II discovered the gap that would have failed the audit: its product's AI features and its team's internal AI tools fell under the same trust services criteria, and neither was governed. The Sprint closed both gaps in time for the audit window.

Outcome: $5,500 Sprint, $24K Implementation, clean SOC 2 Type II report, two enterprise contracts unblocked

The company

Threadline Analytics is a 65-employee B2B SaaS company headquartered in Austin, with engineering hubs in two additional cities. The product is a workflow analytics platform serving mid-market customers — primarily operations teams at companies in the 200 to 5,000-employee range — with about $11 million in annual recurring revenue.

The company is Series B-funded, founded in 2020, and reached SOC 2 Type I certification in 2023. The Type II audit was scheduled to conclude in Q3 2025: a six-month audit window covering the company's controls from Q2 through Q3.

The forcing function

In late 2024, two of Threadline's largest enterprise customers had renewed contracts with new language in their security questionnaires:

"Describe your organization's use of artificial intelligence and machine learning, including (a) AI/ML components in your product or service, (b) any third-party AI/ML services that process customer data, and (c) internal AI tool use by your team. Provide evidence of governance controls covering each."

A third enterprise prospect — a $480K ARR opportunity — had paused their security review in November 2024 with a similar question, asking for documentation Threadline did not have.

The CEO and the CTO met with Maya, the Head of Security & Compliance (a former auditor who had joined six months earlier), to scope the response. Maya laid out the situation:

  1. Their product had AI features. A recently launched "AI-assisted insight summarization" capability used OpenAI's API to process customer workflow data. The integration had shipped in October 2024 without an updated subprocessor disclosure to existing customers.
  2. Their team used internal AI tools. GitHub Copilot was authorized for engineering, but Maya had no inventory of what else was in use across marketing, sales, customer success, and product.
  3. SOC 2 Type II would cover both scopes. The auditor would look at controls covering AI subprocessors (vendor risk management under CC9.2 and confidentiality under the C1 criteria), AI tool use by the team that processed customer data (multiple controls across the common criteria and the C1 criteria), and AI-related changes to the product (CC8.1, change management).
  4. The audit window opened in 90 days. Any controls not in place by the end of Q1 wouldn't be observable for the full audit period.

The CEO's question to Maya was direct: "What does it take to be ready by the window?"

The decision

Maya evaluated three paths:

Path 1: Tighten the engineering scope only. Get the product-side AI subprocessor and change-management controls in place; treat internal AI tool use as a separate, later workstream.

Path 2: Hire a SOC 2 consultant to add AI scope to their existing audit-prep engagement. Their existing SOC 2 consulting firm offered AI add-ons but had limited specialist depth.

Path 3: Engage a specialist firm to run a structured Sprint covering both scopes. Get a unified deliverable package the existing SOC 2 consultant could integrate into the broader audit prep.

She picked Path 3 for two reasons:

  1. The TSC controls the auditor would test covered both scopes (product-side AI subprocessors and team-internal AI tools). Splitting them across two workstreams would create coordination overhead during a 90-day pre-audit window.
  2. The Sprint deliverable structure — inventory, risk classification, gap analysis, AUP, roadmap, readout — mapped almost one-to-one onto the evidence the auditor would request. Producing those documents during the Sprint and handing them to the SOC 2 consultant for integration was more efficient than asking that consultant to reinvent the AI scope from a generalist starting point.

A peer CTO referred her to Shadow AI Labs. The company engaged for the AI Risk Sprint — $5,500, two weeks.

Inside the Sprint

Deliverable 01 — AI Tool Discovery (Days 1–4)

Discovery combined browser telemetry across all 65 employees, an anonymous survey, a procurement audit, an SSO log review, and a codebase scan of the production application for AI/ML library imports and external-service calls (a sketch of the scan approach appears after the findings). The findings broke into two scopes:

Internal AI tool use across the team (16 tools):

  • 4 AI-powered customer success tools (meeting recorders, call summary generators, response suggestion tools) — three with customer data exposure
  • 3 marketing tools (content generators, SEO assistants) — one of which had been used to draft customer-facing collateral that mentioned specific customer names
  • 2 sales productivity tools (email draft assistants, account research) — both with prospect data exposure
  • 2 product analytics tools with AI features that had been silently activated by the vendor in 2024
  • GitHub Copilot (sanctioned), Cursor (sanctioned), one consumer ChatGPT account used by a senior engineer for design sketching, two Chrome extensions submitting browsed page content to AI APIs

Product-embedded AI dependencies (4 services):

  • OpenAI GPT-4 API for the "AI-assisted insight summarization" feature — processing customer workflow data
  • Pinecone vector database (associated with the same feature)
  • Anthropic Claude API serving ~12% of customer traffic behind an internal feature flag (an A/B test the product team had been running)
  • A small fine-tuned model running on a GPU instance for a separate feature, with training data drawn from a sample of customer workflows that had not been explicitly consented for that purpose

The product-side findings were the more material discovery for the SOC 2 audit. The Anthropic Claude usage in particular was a feature flag the engineering team had stood up two months earlier without going through the company's standard subprocessor review.
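For teams that want to reproduce the codebase-scan step, here is a minimal sketch of the approach, assuming a Python codebase. The SDK package names are real; the source root and the scan scope are illustrative, and a production scan would also cover other languages and internal wrappers.

    # Minimal sketch of an AI-dependency scan, assuming a Python codebase.
    # The SDK names are real packages; the source root "src" is a placeholder.
    import ast
    from pathlib import Path

    AI_PACKAGES = {"openai", "anthropic", "pinecone", "cohere", "transformers"}
    AI_HOST_FRAGMENTS = ("api.openai.com", "api.anthropic.com")

    def scan_file(path: Path) -> set[str]:
        """Return AI packages imported, and AI API hosts referenced, in one file."""
        hits: set[str] = set()
        source = path.read_text(errors="ignore")
        try:
            tree = ast.parse(source)
        except SyntaxError:
            return hits
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                hits |= {alias.name.split(".")[0] for alias in node.names} & AI_PACKAGES
            elif isinstance(node, ast.ImportFrom) and node.module:
                if node.module.split(".")[0] in AI_PACKAGES:
                    hits.add(node.module.split(".")[0])
        hits |= {host for host in AI_HOST_FRAGMENTS if host in source}
        return hits

    findings = {}
    for py_file in Path("src").rglob("*.py"):
        if hits := scan_file(py_file):
            findings[str(py_file)] = sorted(hits)

    for file_path, packages in sorted(findings.items()):
        print(f"{file_path}: {', '.join(packages)}")

A scan like this catches direct SDK imports and hard-coded API hosts; it misses AI calls routed through internal gateways or proxies, which is why the Sprint paired it with SSO logs, procurement records, and browser telemetry.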

Deliverable 02 — Risk Classification Matrix (Days 3–6)

Each tool and dependency was mapped against NIST AI RMF severity and two SaaS-specific criticality dimensions, customer-data-exposure risk and product-trust risk:

Severity | Count | Pattern
Critical | 3 | Product-embedded AI subprocessors without customer disclosure + 1 internal CS tool with customer data
High | 5 | Sanctioned-eligible tools with documented gaps
Medium | 7 | Productivity tools with limited customer-data exposure
Low | 5 | General-use tools sanctioned at vendor enterprise tier

The Critical-severity classification was where the SOC 2 audit risk concentrated.
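To illustrate how the two SaaS-specific dimensions can roll up with disclosure status into a severity rating, here is a hedged sketch; the field names, scales, and thresholds are hypothetical, not the Sprint's actual rubric.

    # Illustrative severity roll-up. The field names, scales, and thresholds
    # are hypothetical, not the Sprint's actual rubric.
    from dataclasses import dataclass

    @dataclass
    class AIToolRisk:
        name: str
        customer_data_exposure: int   # 0-3: none, indirect, direct, bulk
        product_trust_impact: int     # 0-3: none, internal, customer-visible, contractual
        disclosed_to_customers: bool

    def severity(tool: AIToolRisk) -> str:
        if tool.customer_data_exposure >= 2 and not tool.disclosed_to_customers:
            return "Critical"   # undisclosed processing of customer data dominates
        score = tool.customer_data_exposure + tool.product_trust_impact
        if score >= 4:
            return "High"
        if score >= 2:
            return "Medium"
        return "Low"

    claude_flag = AIToolRisk("Anthropic Claude A/B flag", 3, 3, disclosed_to_customers=False)
    print(severity(claude_flag))   # -> Critical

The point the sketch makes is the same one the matrix made: undisclosed processing of customer data dominates the rating regardless of the rest of the score.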

Deliverable 03 — SOC 2 Type II AI-Scope Gap Analysis (Days 5–9)

Side-by-side comparison of relevant TSC controls against Threadline's documented state, scoped specifically to AI:

Trust Services Criteria touchpoints identified:

  • CC1.1 (commitment to integrity and ethical values): AUP coverage
  • CC2.3 (communication with external parties): subprocessor disclosure
  • CC3.2 (risk identification and analysis): AI risk assessment
  • CC6.1 (logical and physical access controls): access controls for AI tool usage
  • CC8.1 (change management): AI feature deployment and rollback procedures
  • CC9.2 (vendor and business partner risk management): vendor management for AI subprocessors
  • C1.1 (identification and protection of confidential information): customer data handling in AI tool context
  • C1.2 (disposal of confidential information): AI service data retention controls

Of the 8 control areas, Threadline had documented evidence for 2 (logical access controls and a subset of change management). Six had no AI-specific evidence. The deliverable mapped each gap to a specific remediation item with an estimated effort and a target date ahead of the audit window.
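One way to keep a gap-to-remediation mapping auditable is to hold it as structured data rather than prose. A minimal sketch follows, using control IDs from the list above; the owners, efforts, and target dates are hypothetical placeholders.

    # Sketch of a machine-checkable gap register. Owners, efforts, and
    # target dates below are hypothetical placeholders.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ControlGap:
        control: str            # TSC control ID
        gap: str
        remediation: str
        owner: str
        effort_days: int
        target: date
        evidence_ready: bool = False

    register = [
        ControlGap("CC2.3", "No AI subprocessor disclosure to customers",
                   "Update subprocessor list and send customer notice",
                   "Head of Security & Compliance", 5, date(2025, 1, 17)),
        ControlGap("CC8.1", "No AI-specific change management gate",
                   "Add subprocessor approval gate to the release process",
                   "VP of Engineering", 8, date(2025, 1, 31)),
    ]

    open_gaps = [g for g in register if not g.evidence_ready]
    print(f"{len(open_gaps)} control gaps still lack audit evidence")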

Deliverable 04 — AUP, Subprocessor Framework & Change Management Provisions (Days 6–10)

The AUP was drafted with two distinct sections:

Section A — Internal AI tool use. Covered the standard scope: sanctioned tools, prohibited use, data classification, reporting, and training. Tuned for a SaaS company's role structure (engineering, product, sales, marketing, customer success, support).

Section B — Product AI development practices. Specific to engineering and product teams:

  • Subprocessor review requirement before any new AI service is integrated into production
  • Customer disclosure obligations when AI subprocessors are added (with example DPA/MSA language)
  • Change management procedure specifically for AI-related feature releases (TSC 8.1 mapping)
  • Customer data classification rules for AI inference (which categories of customer data may be sent to which AI services, with auditable logging; see the sketch after this list)
  • Training data sourcing standard (explicit consent or contractual basis for using customer data in model training)
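As an illustration of the data classification provision, here is a minimal sketch of an inference-time gate with auditable logging; the service keys, data categories, and log format are hypothetical examples, not language from the AUP itself.

    # Sketch of a data-classification gate in front of AI inference calls.
    # The service keys, data categories, and log format are hypothetical.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_inference_audit")

    # Which data categories each approved AI service may receive.
    ALLOWED_CATEGORIES = {
        "openai-gpt4": {"workflow_metadata", "aggregated_metrics"},
        "anthropic-claude": {"workflow_metadata"},
    }

    class DataClassificationError(Exception):
        pass

    def send_to_ai(service: str, category: str, payload: dict, call_fn):
        """Block, or log and forward, an AI inference call based on data class."""
        decision = "allowed" if category in ALLOWED_CATEGORIES.get(service, set()) else "blocked"
        audit_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "service": service,
            "category": category,
            "decision": decision,
        }))
        if decision == "blocked":
            raise DataClassificationError(f"{category} is not approved for {service}")
        return call_fn(payload)

The design choice that matters for the audit is that every allow and block decision leaves a structured log entry the auditor can sample.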

Maya's existing SOC 2 consultant reviewed the AUP draft and integrated it into the broader policy framework as a chapter rather than a separate document.

Deliverable 05 — 90-Day Pre-Audit Roadmap (Days 9–13)

Sequenced to land all controls before the audit window opened:

  • Week 1: AUP distributed firm-wide with required acknowledgment. Anthropic Claude feature flag paused pending subprocessor review and customer disclosure. AI tool inventory baseline locked.
  • Week 2: Subprocessor disclosure updates sent to all customers (the OpenAI dependency, the Pinecone dependency, and the Anthropic Claude dependency once the feature flag review concludes).
  • Month 1: Vendor management procedure for AI subprocessors operationalized. Training Module 1 delivered. Change management procedure updated to include AI-specific approval gates.
  • Month 2: Engineering training (Module 2) delivered on subprocessor review and change management for AI features. The fine-tuned model retrained on properly-consented training data; original model deprecated.
  • Month 3: First quarterly governance review held. Pre-audit evidence package assembled and shared with the SOC 2 consultant. Audit window opens at the start of Month 4.

Deliverable 06 — Executive Readout (Day 14)

A 60-minute readout with the CEO, the CTO, Maya as Head of Security & Compliance, the VP of Engineering, the company's outside SOC 2 consultant, and the head of customer success (who would own the customer disclosure communications). The 24-page PDF report — including the 20-item inventory across both scopes, line-by-line TSC mapping, and acceptance criteria for each remediation step — was delivered same-day. Four decisions were made during the meeting:

  1. Authorize the Implementation engagement to execute the 90-day roadmap.
  2. Add an AI subprocessor approval gate to the engineering change management process, with the Head of Security & Compliance as the approver.
  3. Update the customer-facing subprocessor list and the standard DPA template to reflect the three AI subprocessors.
  4. Defer the Anthropic Claude feature flag pending subprocessor review (the flag came down before the audit window opened and was reintroduced two months later with full disclosure).

The follow-on

Threadline engaged Shadow AI Labs for the AI Governance Implementation — $24,000 over eight weeks. The engagement covered:

  • Subprocessor disclosure communications to all customers (drafted and reviewed; legal redlines coordinated with outside counsel)
  • DPA template updated with AI subprocessor language
  • Change management procedure updates and engineering team training
  • Training infrastructure stood up in the existing LMS for ongoing acknowledgment cycles
  • Pre-audit evidence package assembled in the format the SOC 2 consultant requested

The Fractional retainer was scoped at $4,500/month for the post-audit phase — covering ongoing subprocessor review as new AI vendors are evaluated, the annual surveillance review, and on-call advisory for any new enterprise customer AI questionnaires that come in with novel asks.

The SOC 2 Type II audit

The audit window ran Q2 through Q3 of 2025. The auditor's AI-related fieldwork covered all eight TSC control areas the Sprint had identified. Threadline had documented evidence for each.

The audit report, issued in October 2025, was clean on the AI scope; its two minor findings were unrelated to AI. The auditor included a comment in the management letter that Threadline's AI subprocessor disclosure practices and change management gates "represent a mature posture for a Series B company in this product category."

The customer impact

  • The paused $480K ARR enterprise prospect — the one whose security review had stalled in November 2024 over AI questions — restarted in February 2025 and closed in April. Their security team accepted the subprocessor disclosure and DPA language as part of the contract.
  • Two additional enterprise prospects came in during the audit window with similar AI scope questions. Both reviewed the same documentation set and closed.
  • Three existing enterprise customers requested expanded disclosures as part of their own SOC 2 audit cycles; all three were handled with the same documentation package and minor customer-specific addenda.

The numbers

Category | Year 1 cost
AI Risk Sprint | $5,500
AI Governance Implementation | $24,000
Subprocessor disclosure legal review | $6,500
Engineering time on remediation (60 person-hours) | ~$18,000
Compliance lead time on the engagement | ~$22,000
Training time (65 × 60 min + engineering supplement) | ~$11,500
Fractional retainer (Year 1 partial, 8 months) | $36,000
Total Year 1 investment | ~$123,500

The counterfactual — entering the audit window without remediation — would have been a qualified SOC 2 Type II report (or an unqualified report with multiple AI-scope exceptions), depending on auditor latitude. For a mid-market SaaS company selling to enterprise, a qualified SOC 2 report is a sales blocker that takes 6–12 months to remediate in a re-audit cycle. The $480K ARR opportunity that closed in April 2025 was, on its own, nearly four times the full Year 1 investment.

Maya's reflection

"I joined Threadline knowing the Type II audit was coming, and I knew AI scope was going to be the hard part. What I didn't know was how many AI subprocessor relationships our engineering team had set up without going through subprocessor review — and not because anyone was being cavalier, but because the review process predated AI and didn't have specific gates for it. The Sprint surfaced what I needed to surface in a way I could hand directly to our SOC 2 consultant. The engagement paid for itself when our paused enterprise deal restarted. Everything after that was bonus."

What we'd tell another B2B SaaS company

1. SOC 2 auditors are explicitly looking at AI scope in 2025

The TSC criteria haven't changed, but how auditors interpret them in light of AI subprocessor relationships has changed materially in the last 12 months. Generic SOC 2 preparation that worked in 2022–23 may not satisfy a 2025 auditor without specific AI evidence.

2. The product-embedded AI scope usually surprises the compliance lead

Most compliance leads we work with at SaaS companies have a reasonable handle on internal AI tool use by the team. The product-side discovery — what subprocessors the engineering team has actually integrated, with or without going through standard procurement — is where the audit risk concentrates.

3. Subprocessor disclosure is a documentation problem before it's a customer relations problem

The customer-facing disclosure work (DPA updates, subprocessor list communications) is mechanical once the internal inventory is documented. The hard part is reaching consensus on what gets disclosed and how, particularly when an AI vendor has been part of the product for months before the disclosure conversation starts.

4. Change management for AI features needs its own gate

The standard engineering change management process — peer code review, CI, staged rollout — wasn't designed with AI subprocessor introduction in mind. A specific approval gate ahead of any new AI service integration prevents the audit-surprise pattern that Threadline ran into.
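One lightweight way to enforce that gate is in CI: fail the build whenever a dependency manifest introduces a known AI SDK that is not on the approved subprocessor list. Here is a minimal sketch, assuming a Python service; the file names and allowlist format are hypothetical.

    # CI sketch: fail the build if requirements.txt names a known AI SDK
    # that is not on the approved-subprocessor allowlist. Both file names
    # and the allowlist format are hypothetical.
    import sys
    from pathlib import Path

    KNOWN_AI_SDKS = {"openai", "anthropic", "pinecone", "cohere"}

    def package_names(path: str) -> set[str]:
        """Parse bare package names out of a pip-style requirements file."""
        return {
            line.split("==")[0].split(">=")[0].strip().lower()
            for line in Path(path).read_text().splitlines()
            if line.strip() and not line.startswith("#")
        }

    requirements = package_names("requirements.txt")
    approved = package_names("approved_ai_subprocessors.txt")

    unapproved = (requirements & KNOWN_AI_SDKS) - approved
    if unapproved:
        print(f"Unapproved AI subprocessor dependencies: {sorted(unapproved)}")
        print("Open a subprocessor review before merging.")
        sys.exit(1)
    print("AI subprocessor gate: OK")

A check like this does not replace the human approval step; it guarantees the approval conversation happens before merge rather than after an auditor finds the dependency.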


Heading into SOC 2 Type II with AI scope in the audit window?

If your company has AI features in production, internal AI tools across your team, or both — and your SOC 2 Type II audit window opens in the next 6 to 12 months — the Sprint produces the evidence package the auditor will look for, in a format your existing SOC 2 consultant can integrate.

Take our free AI Risk Assessment to see where your company sits relative to 2025 SOC 2 AI scope expectations — or book a Discovery call to talk through your audit timeline.


This case study is a composite based on real-world engagement patterns with B2B SaaS companies in SOC 2 Type II preparation. Company name, product details, and specific engineering decisions have been modified to protect confidentiality while preserving the educational value of the scenario.
