
SOC 2 Type II Vulnerability Management: What Your Auditor Will Actually Check

CC7.1 is vague by design. Your auditor has a lot of discretion. This is exactly what experienced SOC 2 auditors look for in your vulnerability management program, and the evidence gaps that trigger findings.

CVEasy AI Research Team · February 28, 2026 · 10 min read

Auditors test whether your controls actually operated during the review period, not whether you have a policy document that says they should.

SOC 2 Type II typically covers a 12-month review period. Your auditor isn't checking whether you have a vulnerability management policy; they're checking whether your vulnerability management program actually operated as designed, consistently, throughout the entire period. That's a fundamentally different test, and it catches more companies than you'd expect.

This post covers CC7.1 (the primary vulnerability management control) in practical terms: what scan frequency satisfies auditor expectations, how to structure remediation SLA documentation, what Vanta and Drata automate (and what they don't), and the most common findings that derail Type II audits.

Type I vs Type II distinction: SOC 2 Type I tests whether controls are suitably designed at a point in time. Type II tests whether they operated effectively over the full review period. This means missing even one quarterly scan during your review period can be a Type II finding, regardless of how good your policy documentation is.

CC7.1: What the Control Actually Says

The AICPA Trust Services Criteria CC7.1 states: "To meet its objectives, the entity uses detection and monitoring procedures to identify (1) changes to configurations that result in the introduction of new vulnerabilities, and (2) susceptibilities to newly discovered vulnerabilities."

That's intentionally broad. Auditors interpret it through Points of Focus (supplemental guidance), which include:

  1. Uses defined configuration standards
  2. Monitors infrastructure and software for noncompliance with those standards
  3. Implements change-detection mechanisms
  4. Detects unknown or unauthorized components
  5. Conducts vulnerability scans, periodically and after significant changes to the environment

"We had a policy that said monthly scans. During the audit period, we could only produce evidence of 8 scans in 12 months. That's a finding; the control didn't operate as designed." - A real exchange from a SOC 2 Type II audit debrief.

Required Scan Frequency: What Auditors Actually Accept

CC7.1 does not specify a minimum scan frequency. That's where practitioners often get confused. Auditors will test whether you scanned at the frequency your own policy defines, and whether that frequency is reasonable given your risk profile. Here's what's generally accepted by tier:

Scan Frequency by Asset Tier: SOC 2 Auditor Expectations

| Asset Tier | Examples | Minimum Acceptable | Recommended |
| --- | --- | --- | --- |
| Tier 1: Critical | Production DBs, auth servers, payment systems | Monthly | Weekly or continuous |
| Tier 2: High | App servers, internal tools with PII, VPNs | Quarterly | Monthly |
| Tier 3: Standard | Dev/test environments, internal workstations | Quarterly | Quarterly |
| Tier 4: Low | Isolated lab systems, offline assets | Semi-annual | Quarterly |

The practical implication: if your policy says monthly scans for production systems and you can produce 11 of 12 monthly scan reports during the audit period, most auditors will not issue a finding. One missed scan with documented context (maintenance window, system migration) is usually accepted. A pattern of gaps (8 of 12, 6 of 12) will result in findings.
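Before an auditor runs this check for you, you can run it yourself against your scanner's completion log. A minimal sketch (hypothetical dates and function names, assuming a monthly policy) that flags calendar months in the review period with no completed scan:

```python
from datetime import date

# Hypothetical scan-completion dates pulled from your scanner's logs.
scan_dates = [
    date(2025, 1, 14), date(2025, 2, 11), date(2025, 3, 12),
    date(2025, 5, 13), date(2025, 6, 10),  # note: April is missing
]

def missed_months(scans, start, end):
    """Return (year, month) pairs in the review period with no completed scan."""
    covered = {(d.year, d.month) for d in scans}
    gaps = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        if (y, m) not in covered:
            gaps.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return gaps

print(missed_months(scan_dates, date(2025, 1, 1), date(2025, 6, 30)))
# → [(2025, 4)]
```

Running this monthly, rather than at audit time, turns a potential finding into a documented, explainable one-off gap.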

Remediation SLA Documentation: The Evidence Auditors Want

The second major CC7.1 evidence requirement is demonstrating that you have a documented, enforced remediation SLA, and that findings are actually remediated within it. Here's what auditors will request:

  1. Written SLA policy: documented remediation timeframes by severity. You need a policy document that specifies, at minimum: Critical = X days, High = Y days, Medium = Z days. Common acceptable tiers: Critical 30 days, High 90 days, Medium 180 days.
  2. Evidence of tracking: your vulnerability management system or ITSM export showing open findings with discovered date, severity, and closed/remediated date.
  3. SLA compliance rate: auditors will calculate what percentage of findings during the review period were remediated within your stated SLA. A common acceptable threshold is 90%+ for Critical/High. Falling below 80% is a finding.
  4. Exception documentation: for findings that exceeded SLA, you need a documented risk acceptance or exception with business owner sign-off. Undocumented overruns are automatic findings.
  5. Penetration test evidence: most Type II audits require at least annual penetration testing evidence. The pen test report must be dated within the review period, and any critical/high findings must have remediation evidence.

Evidence format that auditors prefer: a timestamped export from your vulnerability scanner or ITSM showing CVE ID, severity, asset, date discovered, date remediated (or "open"), and an SLA compliance flag. A spreadsheet is acceptable; a PDF of scan results is not, because auditors want structured data they can analyze. If you can export a CSV with these fields, you're in good shape.
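The SLA compliance rate from items 2 and 3 above is simple to compute from exactly that export. A minimal sketch, assuming illustrative CSV fields and SLA tiers (not any particular scanner's schema):

```python
import csv, io
from datetime import date

SLA_DAYS = {"Critical": 30, "High": 90, "Medium": 180}  # example policy tiers

# Stand-in for the structured export described above.
EXPORT = """cve_id,severity,asset,discovered,remediated
CVE-2025-0001,Critical,db-prod-01,2025-01-05,2025-01-20
CVE-2025-0002,High,app-01,2025-02-01,2025-06-15
CVE-2025-0003,Critical,auth-01,2025-03-10,
"""

def sla_compliance(csv_text, as_of):
    """Compute the fraction of findings remediated within SLA as of a given date."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    met = 0
    for r in rows:
        limit = SLA_DAYS[r["severity"]]
        opened = date.fromisoformat(r["discovered"])
        closed = date.fromisoformat(r["remediated"]) if r["remediated"] else None
        age = ((closed or as_of) - opened).days  # open findings age until as_of
        within = closed is not None and age <= limit
        r["sla_met"] = "yes" if within else "no"
        met += within
    return met / len(rows), rows

rate, rows = sla_compliance(EXPORT, as_of=date(2025, 7, 1))
print(f"{rate:.0%}")  # 1 of 3 within SLA here: the High overran, one Critical is still open
```

The same pass that computes the rate also stamps the per-row `sla_met` flag auditors want in the export.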

What Vanta and Drata Check vs. What Human Auditors Check

Compliance automation platforms like Vanta and Drata have become nearly ubiquitous in SOC 2 Type II programs. They're excellent at some things and completely blind to others. Understanding the gap prevents surprise findings during your actual audit.

What Vanta/Drata Automate Well

Continuous, integration-driven evidence collection: confirming a scanner is connected and scans run on schedule, gathering policy documents and employee attestations, and flagging when an expected artifact (a scan report, an annual pen test) is missing or stale.

What Human Auditors Test That Automation Misses

Whether the scanner's scope actually matches your full asset inventory, whether your SLA compliance math holds up across the entire review period, and whether exceptions carry genuine business-owner sign-off rather than a checked box.

The most common Type II finding we see: Companies pass their Vanta checks and still receive findings because their scanner didn't cover all in-scope cloud assets. AWS Lambda functions, new ECS containers, and recently added subnetworks frequently fall outside scanner scope. Auditors compare your cloud asset inventory to your scan scope; the delta is your gap.

Common Findings and How to Prevent Them

Finding 1: Scan Coverage Gaps

Symptom: Scanner covers IP ranges defined two years ago; new cloud VPC not included.
Fix: Quarterly scope reconciliation: compare your cloud asset inventory against the scanner's scope configuration. Automate asset discovery integration if your scanner supports it (Tenable.io and Qualys both have cloud connectors that auto-discover assets).
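The reconciliation itself is just a set difference between two inventories. A minimal sketch with hypothetical asset identifiers (in practice these come from your cloud provider's API and your scanner's scope export):

```python
# Hypothetical inventories: one from the cloud provider's API, one from the scanner.
cloud_assets = {"i-0aa1", "i-0bb2", "lambda-api-handler", "ecs-task-billing"}
scanner_scope = {"i-0aa1", "i-0bb2"}

# Assets SOC 2 considers in scope but the scanner never touches — your gap.
unscanned = sorted(cloud_assets - scanner_scope)
print(unscanned)
# → ['ecs-task-billing', 'lambda-api-handler']
```

An empty delta each quarter, saved with a timestamp, is itself useful evidence that the reconciliation control operated.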

Finding 2: SLA Overrun Without Documentation

Symptom: 15 critical findings remediated late with no exception documentation.
Fix: Create a formal exception process with a simple template: CVE ID, reason for delay, business owner, risk acceptance date, target remediation date. Store in your ticketing system. Even a Jira ticket with these fields is sufficient.
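The exception template above can be sketched as a simple structured record (field names are illustrative, not a Jira schema):

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class SlaException:
    """One documented SLA exception, mirroring the template fields above."""
    cve_id: str
    reason: str
    business_owner: str
    risk_accepted_on: date
    target_remediation: date

exc = SlaException(
    cve_id="CVE-2025-0002",
    reason="Vendor patch unavailable until Q3; compensating WAF rule in place",
    business_owner="jane.doe@example.com",
    risk_accepted_on=date(2025, 6, 1),
    target_remediation=date(2025, 9, 30),
)
print(asdict(exc))  # flat dict, ready to paste into a ticket or ITSM record
```

Whatever the storage format, the point is that each field exists and is filled in before the SLA clock runs out, not after the auditor asks.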

Finding 3: Inconsistent Scan Schedule

Symptom: Policy says monthly, scanner shows 8 timestamps in 12 months.
Fix: Schedule scans as recurring jobs in your scanner, not manually triggered events. Set up alerting when scheduled scans don't complete. Keep scan completion logs separate from the findings report.

Finding 4: No Evidence of Penetration Test Findings Remediated

Symptom: Annual pen test report shows 3 high findings. Audit shows no remediation tickets or re-test evidence.
Fix: Every pen test finding should generate a tracked ticket with remediation date and status. Commission a partial re-test after remediation for critical/high findings. The re-test report is your evidence artifact.

Build your evidence package now, not at audit time: CVEasy AI maintains a timestamped, auditor-ready log of every CVE in your environment, when it was discovered, its current status, your SLA tier, and remediation date. At audit time, you export a filtered CSV and hand it to your auditor. That single artifact typically satisfies the bulk of CC7.1 evidence requirements. See how it works →

Ready to take control of your vulnerabilities?

CVEasy AI runs locally on your hardware. Seven layers of risk intelligence. AI remediation in seconds.

Get Started Free Learn About CVEasy AI
