
Runbook: Pod Security Violation (Kyverno)

Alert

Severity

Warning -- Policy violations in Enforce mode block resource creation. Violations in Audit mode allow the resource but generate a compliance report. Both require investigation to determine whether the violation is a legitimate security concern or a misconfiguration.

Impact

Investigation Steps

1. Check for recent policy violations:

   ```shell
   kubectl get policyreport -A
   kubectl get clusterpolicyreport
   ```

2. Get details on violations in a specific namespace:

   ```shell
   kubectl get policyreport -n <namespace> -o yaml
   ```

3. List all ClusterPolicies and their enforcement mode:

   ```shell
   kubectl get clusterpolicies -o custom-columns='NAME:.metadata.name,ACTION:.spec.validationFailureAction,READY:.status.conditions[0].status'
   ```

4. Check the Kyverno admission controller logs for the specific denial:

   ```shell
   kubectl logs -n kyverno deployment/kyverno-admission-controller --tail=200 | grep -i "denied\|violation\|blocked"
   ```

5. Check the events in the namespace where the violation occurred:

   ```shell
   kubectl get events -n <namespace> --sort-by='.lastTimestamp' | grep -i "kyverno\|policy"
   ```

6. If the violation was on a specific pod or deployment, check what triggered it:

   ```shell
   kubectl describe deployment <name> -n <namespace>
   kubectl get replicaset -n <namespace> -o yaml | grep -A 20 "securityContext"
   ```

7. Inspect the specific policy that was violated:

   ```shell
   kubectl get clusterpolicy <policy-name> -o yaml
   ```

8. Check the Kyverno background controller for violations on existing resources:

   ```shell
   kubectl logs -n kyverno deployment/kyverno-background-controller --tail=100
   ```
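When many namespaces report violations, a small filter speeds up triage. The sketch below surfaces only PolicyReports with failures; the column layout assumed here (`NAMESPACE NAME PASS FAIL WARN ERROR SKIP AGE`) and the sample report names are illustrative and can differ across Kyverno versions, so verify the columns on your cluster first.

```shell
#!/bin/sh
# Sketch: show only PolicyReports that contain failing results.
# In a cluster you would pipe `kubectl get policyreport -A --no-headers`
# into awk; here we use inline sample data (hypothetical names/counts).
sample='team-alpha  cpol-disallow-latest-tag  12  2  0  0  0  4d
team-beta   cpol-require-labels       30  0  0  0  0  9d'
# Column 4 is the FAIL count in the assumed layout.
printf '%s\n' "$sample" | awk '$4 > 0 { print $1, $2, "fail=" $4 }'
```

The same one-liner works unchanged against live output: replace the `printf` with the `kubectl get policyreport -A --no-headers` pipe.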

Resolution

Violation: disallow-privileged-containers

The pod spec requests privileged mode. Fix the deployment:

```yaml
# Correct security context
spec:
  containers:
    - name: app
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        capabilities:
          drop:
            - ALL
```

Violation: require-run-as-nonroot

The container is running as root. Fix by setting the security context:

```yaml
spec:
  securityContext:
    runAsNonRoot: true
  containers:
    - name: app
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
```

Violation: restrict-image-registries

The image is not from the approved Harbor registry. Fix by pulling from Harbor:

```yaml
spec:
  containers:
    - name: app
      image: harbor.sre.internal/<project>/<image>:<tag>
```

Violation: disallow-latest-tag

The image uses the :latest tag or has no tag. Fix by pinning an explicit version:

```yaml
spec:
  containers:
    - name: app
      image: harbor.sre.internal/team-alpha/my-app:v1.2.3
```
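This class of violation can be caught before the admission controller ever sees it. The sketch below is a minimal local pre-flight check that flags `image:` lines using `:latest` or no tag at all; the file paths and image names are illustrative, and a real gate should parse the manifests properly (e.g. with `kyverno apply`) rather than pattern-match.

```shell
#!/bin/sh
# Sketch: flag images that are untagged or pinned to :latest.
# Demo manifests below are illustrative only.
cat > /tmp/bad-deploy.yaml <<'EOF'
spec:
  containers:
    - name: app
      image: harbor.sre.internal/team-alpha/my-app:latest
EOF
cat > /tmp/good-deploy.yaml <<'EOF'
spec:
  containers:
    - name: app
      image: harbor.sre.internal/team-alpha/my-app:v1.2.3
EOF
check_tags() {
  awk '/^[[:space:]]*image:/ {
    img = $2
    n = split(img, parts, "/")   # the tag lives in the last path segment
    tail = parts[n]
    if (tail !~ /:/ || tail ~ /:latest$/)
      print FILENAME ": " img
  }' "$@"
}
check_tags /tmp/bad-deploy.yaml /tmp/good-deploy.yaml
```

Running this in CI against rendered manifests keeps `:latest` out of Git before Kyverno has to block it.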

Violation: require-resource-limits

The pod is missing CPU or memory limits. Fix by adding resource constraints:

```yaml
spec:
  containers:
    - name: app
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 512Mi
```

Violation: require-labels

Required labels are missing. Ensure all resources have standard labels:

```yaml
metadata:
  labels:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: v1.2.3
    app.kubernetes.io/managed-by: Helm
    sre.io/team: team-alpha
```
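Missing labels are also easy to catch locally before pushing. The sketch below checks a manifest for every required label key listed above; the demo manifest (which deliberately omits three labels) is illustrative, and a fixed-string grep is only a quick check, not a substitute for `kyverno apply` against the real policy.

```shell
#!/bin/sh
# Sketch: pre-flight check that a manifest carries every required label.
# Demo manifest is illustrative and intentionally incomplete.
cat > /tmp/labels-demo.yaml <<'EOF'
metadata:
  labels:
    app.kubernetes.io/name: my-app
    sre.io/team: team-alpha
EOF
check_labels() {
  for key in app.kubernetes.io/name app.kubernetes.io/instance \
             app.kubernetes.io/version app.kubernetes.io/managed-by sre.io/team; do
    # Fixed-string match (-F) avoids '.' being treated as a regex wildcard.
    grep -qF "$key:" "$1" || echo "missing label: $key"
  done
}
check_labels /tmp/labels-demo.yaml
```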

Legitimate exception needed (platform components)

If a platform component genuinely needs an exception (e.g., NeuVector enforcer requires privileged access):

1. Create a PolicyException:

   ```yaml
   apiVersion: kyverno.io/v2beta1
   kind: PolicyException
   metadata:
     name: <component>-exception
     namespace: <namespace>
   spec:
     exceptions:
       - policyName: <policy-name>
         ruleNames:
           - <rule-name>
     match:
       any:
         - resources:
             kinds:
               - Pod
             namespaces:
               - <namespace>
   ```

2. Document the exception and its justification in the component's README
3. Add the exception to Git and let Flux reconcile it
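Note that PolicyException support is an opt-in Kyverno feature: if the exception appears to be silently ignored, confirm it is enabled on the Kyverno deployment. On the official Helm chart this is typically a values toggle along the lines of the sketch below, but the exact key names vary across chart versions, so verify against your deployed chart before applying.

```yaml
# Illustrative Helm values only - confirm key names for your chart version.
features:
  policyExceptions:
    enabled: true
    # Restrict which namespace may hold PolicyException resources so teams
    # cannot exempt themselves; "*" would allow exceptions anywhere.
    namespace: kyverno
```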

Prevention

Escalation