A pattern we've been hearing from compliance consultants and fractional-CISO firms in 2026:
"We were running a routine SOC 2 Type II prep for a 90-employee SaaS client. Halfway through the readiness assessment, the engineering team mentioned they're using ChatGPT, Claude, and Cursor with customer data. Now I have AI scope creep, and I don't have the framework expertise to handle it cleanly. What do I do?"
This post is for compliance consultants, vCISO firms, and audit-prep practitioners running into the same pattern. The honest answer: AI scope creep in trust-services audits is the new norm in 2026, and most consultants are unprepared. Here's a practitioner's view of how to handle it without losing the client or scoping yourself into work you can't deliver.
Why AI is showing up in trust-services audits
Three structural shifts collided in late 2025 and early 2026.
Auditors started asking. AICPA's TSP 100 (Trust Services Criteria) doesn't have an explicit AI section, but the Common Criteria — particularly CC6.1 (logical and physical access), CC6.7 (data transmission and disposal), and CC8.1 (change management) — apply to AI tool usage as much as any other system. Big-Four and mid-tier audit firms updated their work programs in Q4 2025 to add AI-specific evidence requests under existing criteria. If your client uses AI tools, the auditor is going to ask about controls.
Customers started asking. Enterprise customers updated their vendor due-diligence questionnaires to include AI governance questions. SOC 2 reports are referenced directly in DD responses, so auditees pressure auditors to address AI in the report — either explicitly or through Common Criteria coverage.
ISO/IEC 42001 published. As an AI-specific management system standard with structural similarities to ISO 27001, 42001 created an obvious cross-reference. Compliance consultants who built ISO 27001 practices found themselves asked about 42001 alignment, often without the framework background.
The result: AI is in scope for trust-services audits whether you planned for it or not.
What the new evidence requests look like
Examples from actual 2026 SOC 2 Type II work programs we've reviewed (sanitized):
"Provide an inventory of artificial intelligence tools and machine learning systems in use across in-scope production environments, including the data classifications processed and the access controls applied to each."
"For each AI-enabled SaaS service procured during the audit period, provide evidence of vendor risk assessment specifically addressing AI data processing, including any model training, output retention, and inference logging."
"Demonstrate that change management controls (CC8.1) cover updates to AI models, prompt engineering changes, and AI agent permission changes that affect production systems."
"Provide policy documentation governing employee use of generative AI tools, including data classification restrictions, training records, and any technical controls preventing the submission of restricted data to non-sanctioned AI services."
These aren't trick questions. They're reasonable evidence requests under the existing Common Criteria. But most SaaS companies under 200 employees have no documented answer because:
- They didn't have an AI tool inventory before this audit cycle.
- They haven't done vendor risk assessments on AI-enabled features added by existing vendors mid-contract.
- Their change management controls don't cover prompt engineering or agent permissions.
- They have no AI-specific policy.
The consultant's dilemma
If you're running the audit prep, you have three options:
Option 1: Tell the client to descope AI. Possible for SOC 2 (you can argue AI use is outside the system boundary), but increasingly impossible for HIPAA covered entities, financial services subject to GLBA, or any client where customer DD has already raised AI questions. Descoping also signals you don't take the risk seriously, which undermines your credibility with the auditor.
Option 2: Take it on yourself. Workable if you have AI framework background (NIST AI RMF, ISO 42001, OWASP LLM Top 10) and the time to build the inventory, run the vendor reviews, draft the AUP, and document the change management updates. Most consultants don't — and even if they do, billing for the AI work usually requires a scope amendment that risks the relationship.
Option 3: Refer to a specialist. Bring in someone whose entire practice is AI security and governance. They produce the evidence the auditor wants, you keep the audit relationship, the client gets a clean SOC 2.
Option 3 is the path most experienced compliance consultants take in adjacent specialties (penetration testing, technical architecture review, regulatory law). AI specialty work is going the same way.
What good AI specialty work looks like
If you're going to refer AI scope to a specialist — whether internal or external — here's what good work product looks like, so you can evaluate quality:
A documented AI Tool Inventory
Not a list of "ChatGPT, Claude, Copilot." A structured inventory with: tool name, category, business owner, data types observed in use, sanctioning status, vendor data handling posture (training reuse, retention, audit logging), risk classification, and recommended remediation. Mapped explicitly to the NIST AI RMF Map function.
For a 100-employee company, expect 14–22 tools. Not 3.
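As a concrete illustration, an inventory entry can be captured as structured data rather than a flat list. The field names and example records below are our assumptions, not a mandated schema; a real inventory would reflect the tools actually observed at the client.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of an AI tool inventory (hypothetical schema)."""
    name: str
    category: str              # e.g. "general LLM", "embedded SaaS AI"
    business_owner: str
    data_types: list[str]      # data classifications observed in use
    sanctioned: bool
    trains_on_customer_data: bool
    retention: str             # vendor retention posture, e.g. "30 days"
    risk: str                  # Critical / High / Medium / Low

# Illustrative entries only — real postures must come from vendor terms.
inventory = [
    AIToolRecord("ChatGPT", "general LLM", "Engineering",
                 ["source code", "customer data"],
                 sanctioned=False, trains_on_customer_data=True,
                 retention="unknown", risk="Critical"),
    AIToolRecord("Microsoft 365 Copilot", "embedded SaaS AI", "IT",
                 ["internal documents"],
                 sanctioned=True, trains_on_customer_data=False,
                 retention="tenant-bound", risk="Medium"),
]

# The first question an auditor asks: which unsanctioned tools
# touch sensitive data?
unsanctioned = [t.name for t in inventory
                if not t.sanctioned and "customer data" in t.data_types]
print(unsanctioned)  # ['ChatGPT']
```

Structuring the inventory this way is what makes the downstream work (risk classification, remediation roadmap, auditor evidence requests) queryable instead of a one-off document.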
A Risk Classification Matrix
Each tool classified Critical / High / Medium / Low using two axes: (a) data sensitivity of observed use, (b) vendor data handling posture. Aligned to the NIST AI RMF Measure function. Drives the remediation roadmap.
If you see "all tools rated High" or "no classification provided," the work is too shallow for audit defense.
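To make the two-axis idea concrete, here is a toy sketch of the classification logic. The numeric scales, thresholds, and the rule that regulated data plus a poor vendor posture is automatically Critical are illustrative assumptions, not prescribed by NIST AI RMF.

```python
def classify(data_sensitivity: int, vendor_posture: int) -> str:
    """Two-axis risk classification (illustrative thresholds).

    data_sensitivity: 0 (public) .. 3 (regulated / customer PII)
    vendor_posture:   0 (no training reuse, bounded retention, logged)
                      .. 3 (trains on inputs, unknown retention)
    """
    # Regulated data flowing to a vendor with a weak posture is never
    # merely "High" — it drives the remediation roadmap.
    if data_sensitivity == 3 and vendor_posture >= 2:
        return "Critical"
    score = data_sensitivity + vendor_posture
    if score >= 4:
        return "High"
    if score >= 2:
        return "Medium"
    return "Low"

print(classify(3, 3))  # Critical
print(classify(1, 1))  # Medium
print(classify(0, 1))  # Low
```

The point of an explicit rule like this is that it produces a spread of ratings; a matrix where everything lands in one bucket is a sign the axes weren't actually assessed.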
Vendor DPA Review for Embedded AI
Specifically: review of Salesforce Einstein, HubSpot AI, Zoom AI Companion, Microsoft 365 Copilot, Slack AI, and any other production SaaS that added AI features post-contract. Documentation of (a) what changed in vendor terms, (b) what data your client now exposes, (c) whether contract amendment is needed.
This is the most-skipped piece of AI compliance work and the highest-risk gap. Auditors are asking about it. Most consultants don't have time to do it well.
Acceptable Use Policy and Training Records
A documented AUP defining sanctioned tools, prohibited uses, registration process, and incident reporting. Plus training delivery (typically 30 minutes) with HRIS-tracked completion. Both are CC6 / CC7 / HIPAA Workforce Training evidence.
A generic AUP downloaded from a regulator template usually fails auditor review because it doesn't reflect actual tools in use.
Mapping to the Existing Audit Framework
The deliverable should explicitly cross-reference SOC 2 Common Criteria, HIPAA Security Rule sections, ISO 27001 Annex A controls, and (where relevant) ISO 42001 clauses. The auditor doesn't want to do that mapping themselves; the consultant who did the AI work should.
If the AI specialty deliverable is just a generic AI risk report with no cross-reference to your audit framework, it doesn't help you in the audit. Push back on the specialist or pick a different one.
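A minimal sketch of what that cross-referencing looks like as structured data. The finding ID, description, and specific clause selections are hypothetical; any real mapping must be verified against the current framework text by whoever produces the deliverable.

```python
# Hypothetical cross-reference entry for one finding. The SOC 2 criteria
# shown (CC6.1, CC6.7) are the ones discussed earlier in this post; the
# HIPAA and ISO 27001 references are illustrative placeholders to be
# confirmed against the frameworks themselves.
finding = {
    "id": "AI-003",
    "description": ("No technical control prevents submission of "
                    "restricted data to non-sanctioned AI services"),
    "mappings": {
        "SOC 2": ["CC6.1", "CC6.7"],
        "HIPAA Security Rule": ["164.312(a)(1)"],  # access control
        "ISO 27001 Annex A": ["A.5.10"],           # acceptable use
    },
}

# An auditor-facing index: framework -> finding IDs that touch it.
index: dict[str, list[str]] = {}
for framework in finding["mappings"]:
    index.setdefault(framework, []).append(finding["id"])
print(index["SOC 2"])  # ['AI-003']
```

Whether it lives in a spreadsheet or a report appendix, this is the artifact that lets the auditor drop AI findings into their existing work program without redoing the analysis.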
How to structure the engagement
Whether you take it on yourself or refer it out, structure matters.
Scope it as a discrete deliverable, not an open-ended advisory engagement. Two weeks, fixed scope, fixed price is the right shape for AI assessment work. Anything longer drifts into implementation, which is a different engagement.
Run it parallel to the audit prep, not after. AI work surfaces other gaps (vendor governance, policy gaps, training gaps) that often require remediation before the audit window closes. Discovery at week 3 of audit prep gives you 6–8 weeks to remediate. Discovery at week 8 gives you 1 week, which is not enough.
Use the AI specialist as an extension of your team for the engagement. Your client's relationship is with you. The specialist works under your oversight, produces deliverables in your audit framework, and (if you prefer) under your branding. White-label structures handle this cleanly.
What to look for in an AI specialist
Three things that matter, in order:
1. Framework expertise, not vendor familiarity. Anyone can list "ChatGPT, Claude, Gemini" — that's not specialty work. The specialist should be fluent in NIST AI RMF, ISO 42001, OWASP LLM Top 10, and the cross-mapping to SOC 2 / HIPAA / ISO 27001 controls.
2. Productized scope, not custom advisory. A specialist who quotes "let's talk about what you need" wastes your client's two-week window. A specialist who has a fixed-scope, fixed-price deliverable knows what they're doing.
3. Documentation that survives auditor review. Ask for a sample deliverable. If it's a 4-page PowerPoint with stock images, walk away. The deliverable should be 20+ pages, structured like a real assessment, with framework mappings, risk matrices, and remediation roadmaps.
The economics of referral
Two patterns make sense for compliance consultants:
Co-branded referral: AI specialist runs the engagement under joint branding (your firm + theirs). You earn a 20% referral fee on the engagement. Client sees both firms; relationship stays with you. Best when you want the AI specialist visible as a recognized expert.
White-label: AI specialist runs the engagement entirely under your firm's branding. Specialist invisible to client. You earn 30% margin on the engagement. Best when you want to position as a full-service compliance practice.
Either structure means the work gets done well, your audit defense holds, and you don't have to staff an AI specialty practice you can't sustain.
What this means for your practice
The pattern won't reverse. Trust-services audits will keep adding AI scope. Customer DD will keep adding AI questions. ISO 42001 will keep coming up in enterprise contracts. The compliance consultants who set up referral relationships with AI specialists in 2026 will defend audits cleanly. The ones who try to wing it with generic ChatGPT familiarity will lose audits and clients.
Set up the referral relationship now, before the next audit cycle hits.
Shadow AI Labs runs a partner program for compliance consultants, vCISO firms, and audit-prep practices. Co-branded or white-label, with productized 2-week deliverables built to map cleanly to SOC 2, HIPAA, ISO 27001, and ISO 42001 frameworks. Schedule a 20-min partner intro or email partners@shadowailabs.com.
Built by Peter Kwidzinski — AMD Fellow, founding contributor to Caliptra (open-source hardware root of trust now used across the cloud-and-silicon industry), 20+ years in platform security architecture.