Kubernetes Security: From Cluster to Container Vulnerability Management

Kubernetes clusters are becoming the dominant deployment target for modern applications, and the attack surface they expose is fundamentally different from traditional infrastructure. Here is a practitioner's guide to securing them.

CVEasy AI Research Team · March 15, 2026 · 11 min read

Your Kubernetes cluster is not a single server. It is a distributed system with its own API server, its own identity layer, its own networking model, and dozens of moving parts that traditional vulnerability scanners were never designed to assess. A Nessus scan that covers your VM fleet tells you nothing about whether your etcd is encrypted, whether your RBAC policies grant excessive permissions, or whether the container images running in production contain known CVEs.

Kubernetes security requires a different mental model. The attack surface spans the cluster control plane, the node operating system, the container runtime, the images themselves, and the network policies that govern traffic between pods. Each layer has its own class of vulnerabilities, and each requires its own scanning and hardening strategy.

The scale problem: A single Kubernetes cluster can run thousands of pods, each pulling from a different container image, each with its own dependency tree. Traditional asset-based scanning models break down completely at this scale. You need automation that understands Kubernetes-native abstractions.

The Kubernetes Attack Surface: Five Layers

Understanding where vulnerabilities live in a Kubernetes environment requires thinking in layers. Each layer has distinct vulnerability classes and different remediation approaches.

Layer 1: Container Images

Container images are the most prolific source of CVEs in any Kubernetes environment. A typical production cluster runs hundreds of unique images, each containing an operating system layer, language runtimes, application dependencies, and the application code itself. Every one of those layers can introduce known vulnerabilities.

The numbers are sobering. Research from Sysdig's 2025 Container Security Report found that 87% of container images in production contain at least one high or critical severity CVE. The average image contains 127 known vulnerabilities. Most of these are inherited from base images that development teams never audit.

Layer 2: Cluster Configuration (RBAC and API Server)

RBAC misconfigurations are the most common Kubernetes security finding in penetration tests, and they rarely show up in traditional vulnerability scans. The Kubernetes API server grants fine-grained permissions through Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings. When these are overly permissive, an attacker who compromises a single pod can escalate to cluster-admin.

Common RBAC misconfigurations include:

# Dangerous: ClusterRole with wildcard access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: too-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

# Better: Scoped Role with minimum necessary permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]

Layer 3: Network Policies

By default, Kubernetes allows all pod-to-pod communication within a cluster. This means a compromised pod in the staging namespace can reach the database pods in the production namespace without any restriction. Network policies are the Kubernetes-native mechanism for implementing micro-segmentation.

The reality in most organizations: fewer than 30% of production Kubernetes clusters have any network policies applied. Of those that do, many have policies that are too broad to be meaningful.

Effective network policy strategy follows three principles:

  1. Default deny all ingress and egress. Start with a deny-all policy in every namespace, then explicitly allow the traffic flows your applications require.
  2. Label-based selection. Use pod labels to define allowed communication paths. This creates a dynamic firewall that automatically applies to new pods matching the selector.
  3. Namespace isolation. Prevent cross-namespace traffic except where explicitly required. Most applications should only communicate within their own namespace and to shared infrastructure services.
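The principles above can be sketched as a pair of policies; the `production` namespace and the `app` labels are illustrative:

```yaml
# Default deny: blocks all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}          # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
---
# Explicit allow: frontend pods may reach api pods on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Because the allow rule selects pods by label, any new replica carrying `app: frontend` is automatically covered without touching the policy.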

Layer 4: Secrets and etcd Security

Kubernetes secrets are base64-encoded by default, not encrypted. Anyone with read access to the etcd datastore or to the Kubernetes API can read every secret in the cluster. This includes database credentials, API keys, TLS certificates, and service account tokens.

Critical hardening measures for secrets include:

  1. Encrypt secrets at rest. Configure the API server with an EncryptionConfiguration so that secrets are encrypted before they reach etcd; base64 encoding alone offers no protection.
  2. Restrict RBAC access to the secrets resource. Granting get and list on secrets is effectively read access to every credential in the namespace; scope it tightly.
  3. Use an external secrets manager. Tools like HashiCorp Vault or your cloud provider's secrets manager keep long-lived credentials out of etcd entirely.
  4. Avoid secrets in environment variables. Environment variables leak through logs, crash dumps, and child processes; prefer mounted volumes with short-lived, projected tokens.
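Encrypting secrets at rest is the foundational step. A minimal sketch of the EncryptionConfiguration file passed to the kube-apiserver via `--encryption-provider-config` (the key value is a placeholder you must generate yourself):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  # aesgcm encrypts new and updated secrets before they are written to etcd
  - aesgcm:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>  # e.g. head -c 32 /dev/urandom | base64
  # identity is a fallback so data written before encryption was enabled stays readable
  - identity: {}
```

After enabling this, existing secrets remain plaintext in etcd until rewritten; a bulk `kubectl get secrets -A -o json | kubectl replace -f -` forces re-encryption.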

Layer 5: Admission Controllers

Admission controllers are the gatekeepers of your Kubernetes cluster. They intercept API requests after authentication and authorization but before the object is persisted to etcd. This is where you enforce security policies at deployment time, rather than discovering violations after workloads are running.

Essential admission controller configurations include:

  1. Pod Security Admission. Enforce the restricted Pod Security Standard so that privileged containers, host namespaces, and root users are rejected before they ever run.
  2. Policy engines. OPA Gatekeeper or Kyverno let you express custom rules, such as requiring resource limits or forbidding the :latest image tag.
  3. Image verification. Require signed images (for example via Sigstore Cosign) and reject anything your build pipeline did not sign.
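As a concrete example, Pod Security Admission (built into Kubernetes 1.25+) can enforce the restricted profile with nothing more than namespace labels:

```yaml
# Violations of the restricted profile are rejected at admission (enforce);
# warn and audit report the same violations without blocking, useful for rollout
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

A common rollout pattern is to start with warn and audit only, fix the reported workloads, then flip enforce on.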

Building a Kubernetes Vulnerability Scanning Pipeline

Effective Kubernetes security scanning operates at multiple points in the software lifecycle. Waiting until runtime to discover vulnerabilities is too late. By then, the vulnerable image has been deployed, secrets have been exposed, and misconfigurations are already exploitable.

Stage 1: Build-Time Image Scanning

Integrate image scanning into your CI/CD pipeline so that vulnerable images never reach your container registry. Tools like Trivy, Grype, and Snyk Container can scan images during the build phase and fail the pipeline if critical CVEs are detected.

# GitHub Actions example: Trivy scan on every PR
- name: Scan container image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'myapp:${{ github.sha }}'
    format: 'sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'  # Fail the build on findings

Build-time scanning catches the low-hanging fruit, but it has limitations. EPSS and KEV data change daily. An image that was clean at build time may contain actively exploited vulnerabilities a week later. This is why runtime scanning is equally important.

Stage 2: Registry Scanning

Scan every image in your container registry on a scheduled basis, not just at push time. Most container registries (Harbor, ECR, GCR, ACR) offer built-in vulnerability scanning, but the quality varies significantly. Consider running your own scanner against the registry API for consistent coverage across multi-cloud environments.
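Extending the CI example above, a scheduled workflow can re-scan images already in the registry so that newly published CVEs surface without a new push; the registry path here is hypothetical:

```yaml
# Nightly re-scan of an image that is already in production.
# A build that was clean last week may fail today as new CVEs are published.
name: nightly-registry-scan
on:
  schedule:
    - cron: '0 3 * * *'   # 03:00 UTC daily
jobs:
  rescan:
    runs-on: ubuntu-latest
    steps:
      - name: Scan production image in the registry
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'registry.example.com/myapp:prod'  # hypothetical image reference
          format: 'sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'   # Fail (and alert) on new findings
```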

Stage 3: Runtime Scanning and Configuration Auditing

Runtime scanning identifies what is actually running in your cluster right now. This is where tools like kube-bench (CIS Kubernetes Benchmark), kubeaudit, and Falco provide visibility that image scanning alone cannot.
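As a starting point, kube-bench can run in-cluster as a one-off Job. This is a simplified sketch of the upstream job manifest (the real one mounts several additional host paths):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true          # kube-bench inspects host processes to find component configs
      restartPolicy: Never
      containers:
      - name: kube-bench
        image: docker.io/aquasec/kube-bench:latest
        command: ["kube-bench"]
        volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true     # benchmark checks read config files, never modify them
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
```

Results print to the pod log (`kubectl logs job/kube-bench`), which makes them easy to ship to the same place as your other scan findings.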

Prioritizing Kubernetes Vulnerabilities

The volume of CVEs in a Kubernetes environment makes prioritization essential. A cluster running 200 unique container images might surface 25,000 individual CVE findings. You cannot patch all of them, and you do not need to.

Effective prioritization in Kubernetes environments requires additional context beyond what CVSS provides:

  1. Is the vulnerable package reachable? Many CVEs in container images affect libraries that are installed but never loaded at runtime. Reachability analysis (available in tools like Snyk and Endor Labs) dramatically reduces false positives.
  2. Is the container exposed to the network? A vulnerable image running in an isolated pod with no ingress is lower risk than the same image behind a LoadBalancer service.
  3. What EPSS and KEV data say. Cross-reference every CVE finding with EPSS probability scores and the CISA KEV catalog. A CVE with EPSS 0.94 in a network-exposed pod is a genuine emergency. The same CVE with EPSS 0.001 in an isolated batch job can wait.
  4. Does the pod have elevated privileges? A vulnerability in a pod running as root with hostPID: true can lead to full node compromise. The same vulnerability in a restricted pod has a much smaller blast radius.

CVEasy AI handles this natively. Import your Trivy or Grype scan results and CVEasy AI automatically enriches every finding with EPSS scores, KEV status, and your asset criticality context. The TRIS™ score computes a prioritized remediation queue that accounts for reachability, exposure, and real-world exploitation data. Get early access →

Common Kubernetes CVEs: Patterns and Lessons

Studying historical Kubernetes vulnerabilities reveals recurring patterns that inform defensive strategy:

  1. Control-plane trust boundaries. CVE-2018-1002105 allowed users with access to an aggregated API to escalate privileges through the API server's proxied connections, a reminder that the API server itself is a high-value target.
  2. Runtime escapes. CVE-2019-5736 in runc let a malicious container overwrite the host runc binary and escape to the node, which is why privileged pods and host mounts deserve extra scrutiny.
  3. Client tooling. The kubectl cp flaw CVE-2019-11246 showed that operator workstations are in scope too: a malicious container could write arbitrary files on the machine running kubectl.

The common lesson: each of these crossed a trust boundary that least privilege, restricted pod security, or network segmentation could have contained.

Hardening Checklist for Production Clusters

A minimum-viable hardening baseline for production Kubernetes clusters:

  1. Enable Pod Security Standards at the restricted level for all production namespaces
  2. Implement default-deny network policies in every namespace
  3. Encrypt etcd at rest with AES-GCM
  4. Scan all images in CI/CD and block deployments with critical CVEs
  5. Run kube-bench weekly against the CIS Kubernetes Benchmark
  6. Audit RBAC quarterly and remove wildcard permissions
  7. Use distroless or minimal base images to reduce CVE surface area
  8. Enable audit logging on the API server and ship logs to your SIEM
  9. Require image signing and verify signatures at admission
  10. Rotate service account tokens and use short-lived credentials via projected volumes

The Bottom Line

Kubernetes security is not a single tool or a single scan. It is a layered strategy that spans build-time image scanning, cluster configuration auditing, runtime monitoring, and intelligent prioritization of findings. The organizations that get this right treat Kubernetes as its own security domain with its own policies, its own scanning cadence, and its own remediation workflows.

The organizations that get breached are the ones still running kubectl apply with images tagged :latest, no network policies, and cluster-admin bound to the default service account. The gap between these two postures is not budget. It is process.

Ready to take control of your vulnerabilities?

CVEasy AI runs locally on your hardware. Seven layers of risk intelligence. AI remediation in seconds.
