SOC 2 Type II covers a review period, typically 12 months. Your auditor isn't checking whether you have a vulnerability management policy; they're checking whether your vulnerability management program actually operated as designed, consistently, throughout the entire period. That's a fundamentally different test, and it catches more companies than you'd expect.
This post covers CC7.1 (the primary vulnerability management control) in practical terms: what scan frequency satisfies auditor expectations, how to structure remediation SLA documentation, what Vanta and Drata automate (and what they don't), and the most common findings that derail Type II audits.
CC7.1: What the Control Actually Says
The AICPA Trust Services Criteria CC7.1 states: "To meet its objectives, the entity uses detection and monitoring procedures to identify (1) changes to configurations that result in the introduction of new vulnerabilities, and (2) susceptibilities to newly discovered vulnerabilities."
That's intentionally broad. Auditors interpret it through Points of Focus (supplemental guidance), which include:
- Implements Procedures to Detect Changes: You must have a documented process for detecting new vulnerabilities, which auditors interpret as a regular scanning cadence plus a mechanism for ingesting newly published CVEs.
- Implements Detection Policies: Written policies must exist that define what you scan, how often, and what happens when vulnerabilities are found.
- Monitors Infrastructure and Software: Auditors look for evidence that monitoring covered your actual infrastructure, not just a subset of it.
- Conducts Vulnerability Scans: Explicit evidence of scans running, not just a policy that says they should run.
"We had a policy that said monthly scans. During the audit period, we could only produce evidence of 8 scans in 12 months. That's a finding, the control didn't operate as designed." - A real exchange from a SOC 2 Type II audit debrief.
Required Scan Frequency: What Auditors Actually Accept
CC7.1 does not specify a minimum scan frequency. That's where practitioners often get confused. Auditors will test whether you scanned at the frequency your own policy defines, and whether that frequency is reasonable given your risk profile. Here's what's generally accepted by tier:
| Asset Tier | Examples | Minimum Acceptable | Recommended |
|---|---|---|---|
| Tier 1: Critical | Production DB, Auth servers, Payment systems | Monthly | Weekly or continuous |
| Tier 2: High | App servers, Internal tools with PII, VPNs | Quarterly | Monthly |
| Tier 3: Standard | Dev/test environments, Internal workstations | Quarterly | Quarterly |
| Tier 4: Low | Isolated lab systems, Offline assets | Semi-annual | Quarterly |
The practical implication: if your policy says monthly scans for production systems and you can produce 11 of 12 monthly scan reports during the audit period, most auditors will not issue a finding. One missed scan with documented context (a maintenance window, a system migration) is usually accepted. A pattern of gaps (8 of 12, 6 of 12) will result in a finding.
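A simple pre-audit sanity check is to diff your scan completion dates against the cadence your own policy promises. A minimal sketch under assumptions: the function name is illustrative, the grouping is by calendar month (matching a monthly policy), and your scanner's export format will differ.

```python
from datetime import date

def missing_scan_months(scan_dates, period_start, period_end):
    """Return (year, month) pairs in the audit period with no completed scan.

    scan_dates: iterable of datetime.date for completed scans.
    period_start / period_end: audit period bounds, inclusive.
    Assumes a monthly policy; adapt the grouping for weekly or quarterly.
    """
    covered = {(d.year, d.month) for d in scan_dates
               if period_start <= d <= period_end}
    gaps = []
    y, m = period_start.year, period_start.month
    # Walk each calendar month in the period and flag uncovered ones.
    while (y, m) <= (period_end.year, period_end.month):
        if (y, m) not in covered:
            gaps.append((y, m))
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return gaps
```

Run this against your scanner's completion log before fieldwork starts; each gap it returns needs either a scan report you haven't exported yet or documented context for the miss.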
Remediation SLA Documentation: The Evidence Auditors Want
The second major CC7.1 evidence requirement is demonstrating that you have a documented, enforced remediation SLA, and that findings are actually remediated within it. Here's what auditors will request:
- Written SLA policy: documented remediation timeframes by severity. You need a policy document that specifies, at minimum: Critical = X days, High = Y days, Medium = Z days. Common acceptable tiers: Critical 30 days, High 90 days, Medium 180 days.
- Evidence of tracking: your vulnerability management system or ITSM export showing open findings with discovered date, severity, and closed/remediated date.
- SLA compliance rate: auditors will calculate what percentage of findings during the review period were remediated within your stated SLA. A common acceptable threshold is 90%+ for Critical/High. Falling below 80% is a finding.
- Exception documentation: for findings that exceeded SLA, you need a documented risk acceptance or exception with business owner sign-off. Undocumented overruns are automatic findings.
- Penetration test evidence: most Type II audits require at least annual penetration testing evidence. The pen test report must be dated within the review period, and any critical/high findings must have remediation evidence.
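The SLA compliance rate above is simple arithmetic, but it's worth computing yourself before the auditor does. A rough sketch, assuming findings exported as dicts with severity and dates, and treating an open finding past its SLA as a breach (the way auditors typically count); the SLA day values are the example tiers from the policy discussion above, not a standard.

```python
from datetime import date

# Hypothetical SLA tiers in days; substitute the values from your own policy.
SLA_DAYS = {"critical": 30, "high": 90, "medium": 180}

def sla_compliance_rate(findings, severity, as_of):
    """Percent of findings of a given severity remediated within SLA.

    findings: dicts with 'severity', 'discovered' (date), and
    'remediated' (date, or None if still open). An open finding past
    its SLA counts as a breach; one still inside its SLA counts as
    compliant so far.
    """
    limit = SLA_DAYS[severity]
    in_scope = [f for f in findings if f["severity"] == severity]
    if not in_scope:
        return 100.0
    on_time = sum(
        1 for f in in_scope
        if ((f["remediated"] or as_of) - f["discovered"]).days <= limit
    )
    return round(100 * on_time / len(in_scope), 1)
```

If this returns a number below your stated threshold for Critical/High, every contributing finding needs either a remediation date you haven't recorded or a documented exception.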
What Vanta and Drata Check vs. What Human Auditors Check
Compliance automation platforms like Vanta and Drata have become nearly ubiquitous in SOC 2 Type II programs. They're excellent at some things and completely blind to others. Understanding the gap prevents surprise findings during your actual audit.
What Vanta/Drata Automate Well
- Verifying that a vulnerability scanner is connected and producing output (agent checks)
- Confirming that scans ran within your defined frequency (scan timestamp tracking)
- Pulling evidence artifacts from integrated tools (AWS SecurityHub, Tenable, Qualys)
- Tracking open findings and alerting on SLA breaches
- Generating formatted evidence packages for auditors
- Verifying that background checks, security training, and access review tasks were completed (people controls)
What Human Auditors Test That Automation Misses
- Scope coverage: Did your scanner actually cover all in-scope systems? Auditors may compare your scanner scope against your asset inventory from your CMDB or cloud provider. Shadow IT and recently deployed assets frequently create gaps.
- Remediation quality: Was the vulnerability actually fixed, or just marked closed? Auditors may ask for a re-scan report showing the finding no longer present.
- Risk acceptance process: Were exceptions properly authorized? Auditors will interview the person who signed off on a risk acceptance to verify the process was followed.
- Completeness of findings: Vanta tracks what your scanner reports. It doesn't know what your scanner missed. Auditors experienced with your scanner type know what classes of vulnerabilities your tool is poor at detecting.
- Third-party risk: CC7.1 extends to third-party software in your environment. Auditors may ask how you track CVEs in your SaaS dependencies, open source libraries, and vendor-managed infrastructure.
Common Findings and How to Prevent Them
Finding 1: Scan Coverage Gaps
Symptom: Scanner covers IP ranges defined two years ago; new cloud VPC not included.
Fix: Quarterly scope reconciliation, compare cloud asset inventory against scanner scope configuration. Automate asset discovery integration if your scanner supports it (Tenable.io and Qualys both have cloud connectors that auto-discover assets).
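One way to run that quarterly reconciliation is a straight set difference between the cloud inventory and the scanner's target list. A minimal sketch; real inventories need IP-range expansion, tag filtering, and a shared identifier across both systems, which this deliberately glosses over.

```python
def scope_gaps(cloud_assets, scanner_targets):
    """Return assets present in the cloud inventory but absent from scanner scope.

    Both inputs are iterables of identifiers (private IPs, hostnames, or
    instance IDs -- whatever key both systems share). Normalization here
    is just trim-and-lowercase; production use needs more.
    """
    norm = lambda s: s.strip().lower()
    return sorted({norm(a) for a in cloud_assets} -
                  {norm(t) for t in scanner_targets})
```

Anything this returns is exactly the gap an auditor will find by comparing your CMDB export against your scanner configuration.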
Finding 2: SLA Overrun Without Documentation
Symptom: 15 critical findings remediated late with no exception documentation.
Fix: Create a formal exception process with a simple template: CVE ID, reason for delay, business owner, risk acceptance date, target remediation date. Store in your ticketing system. Even a Jira ticket with these fields is sufficient.
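If you want the template enforced rather than just documented, the same fields map to a small record type. An illustrative sketch, not a prescribed schema; the field names simply mirror the template above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SlaException:
    """Minimal risk-acceptance record; field names are illustrative."""
    cve_id: str
    reason_for_delay: str
    business_owner: str          # the person accountable for accepting the risk
    risk_acceptance_date: date
    target_remediation_date: date

    def is_complete(self):
        # An exception record with any blank field will not satisfy an auditor.
        return all([self.cve_id, self.reason_for_delay, self.business_owner])
```

Whether this lives as a dataclass, a Jira issue type, or a spreadsheet row matters far less than every overrun having one, signed off before the SLA expired.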
Finding 3: Inconsistent Scan Schedule
Symptom: Policy says monthly, scanner shows 8 timestamps in 12 months.
Fix: Schedule scans as recurring jobs in your scanner, not manually triggered events. Set up alerting when scheduled scans don't complete. Keep scan completion logs separate from the findings report.
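The missed-scan alert can be as simple as comparing each job's last successful completion against its expected interval plus a short grace period. A sketch under those assumptions; the job names, grace period, and dict input are all hypothetical.

```python
from datetime import date, timedelta

def overdue_scans(last_completed, interval_days, today, grace_days=3):
    """Return scan job names whose last successful run is past due.

    last_completed: dict of job name -> date of most recent successful run.
    A small grace period absorbs maintenance windows; anything longer
    should have a documented exception instead.
    """
    cutoff = today - timedelta(days=interval_days + grace_days)
    return sorted(job for job, last in last_completed.items() if last < cutoff)
```

Wire the output into whatever alerting channel you already use; the point is that a missed scan surfaces within days, not at evidence-collection time a year later.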
Finding 4: No Evidence of Penetration Test Findings Remediated
Symptom: Annual pen test report shows 3 high findings. Audit shows no remediation tickets or re-test evidence.
Fix: Every pen test finding should generate a tracked ticket with remediation date and status. Commission a partial re-test after remediation for critical/high findings. The re-test report is your evidence artifact.