By Ali Elborey

Policy-as-Code That Actually Blocks Risk: Rego guardrails for Terraform + Kubernetes, enforced in CI and at admission time

Tags: policy-as-code, rego, opa, terraform, kubernetes, security, devsecops, gatekeeper, conftest, admission-control

Last month, we shipped a Terraform change that created an S3 bucket with public read access. Our security scanner found it. We got a report. But the bucket was already live. Data was already exposed.

We had scans. We still shipped a bad config.

The problem wasn’t the scanner. The problem was enforcement. Findings were reports, not gates. They didn’t block deployments. They didn’t stop bad code from merging. They just told us what was wrong after it was already wrong.

This is why teams are moving from “scan and report” to “define the rule and enforce it everywhere.” Policy-as-code—often via OPA/Rego—is the practical way to keep cloud-native deployments compliant without relying on manual reviews or hoping someone reads a report.

What Policy-as-Code Means

Policy-as-code means your security rules are versioned like code and run automatically. You write policies in Rego (Open Policy Agent’s language), store them in git, and enforce them at the right gates: in CI before code merges, and in your cluster when resources are created.

Rules are versioned like code and run automatically. They don’t depend on someone remembering to check a dashboard. They don’t depend on manual reviews. They run every time, in the same way, with the same logic.
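As a minimal illustration before the real policies below (package and rule names here are arbitrary), a policy is just a Rego file that lives in your repo like any other code:

```rego
package main

# Deny any input document that asks for a privileged container
deny[msg] {
    input.spec.containers[_].securityContext.privileged == true
    msg := "privileged containers are not allowed"
}
```

Tools like conftest and Gatekeeper evaluate rules like this against JSON or YAML inputs and turn each matching deny into a failed check.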

This article shows you how to write three high-value policies and enforce them in two places: in CI (pre-merge) and in the cluster (admission control). We’ll keep it scoped. We’ll focus on policies that actually block risk.

The Enforcement Model: CI + Admission

You need enforcement in two places because they catch different problems.

CI Stops Bad Changes Early

CI runs before code merges. It catches problems when developers are still working on the change. It fails PR checks with clear messages. It prevents bad code from entering your repository.

This is your first line of defense. It’s fast feedback. Developers see the failure immediately. They fix it before the code is merged. Bad configs never make it into your main branch.

Admission Prevents Manual Drift

Admission control runs in your cluster when resources are created. It catches problems that CI might miss: manual deployments, out-of-band changes, configs applied directly via kubectl, or resources created by operators or controllers.

CI can’t catch everything. Someone might deploy directly to production. An operator might create resources. A controller might reconcile state. Admission control is your safety net. It enforces policies even when CI didn’t run.

Why Both Matter

This aligns with OWASP CI/CD security focus areas: preventing insecure configurations from entering the pipeline (CI) and preventing them from reaching runtime (admission). You need both layers. CI is fast feedback. Admission is runtime protection.

Together, they create defense in depth. CI catches most problems early. Admission catches what CI misses. Both use the same policies. Both enforce the same rules. Consistency matters.

Three Policies That Matter

Let’s write three policies that actually block risk. These are real policies you can use today.

Policy 1: Terraform IAM Guardrail (Least Privilege)

This policy blocks overly broad IAM permissions in Terraform. It denies any IAM policy that grants * actions or * resources without explicit justification.

package terraform.iam

import future.keywords.in

# Exceptions for specific use cases, documented in a reviewed allow-list.
# conftest only evaluates deny/warn/violation rules, so a separate allow
# rule would not suppress a deny; the check lives inside each deny rule.
is_exception(resource) {
    resource.address in data.exceptions.iam_wildcards
}

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_role_policy"
    not is_exception(resource)
    policy := json.unmarshal(resource.change.after.policy)
    statement := policy.Statement[_]

    # Block wildcard actions
    statement.Action == "*"
    msg := sprintf("IAM policy '%s' uses wildcard action '*'. Use specific actions instead.", [resource.address])
}

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_role_policy"
    not is_exception(resource)
    policy := json.unmarshal(resource.change.after.policy)
    statement := policy.Statement[_]

    # Block wildcard resources
    statement.Resource == "*"
    msg := sprintf("IAM policy '%s' uses wildcard resource '*'. Use specific resource ARNs instead.", [resource.address])
}

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_role_policy"
    not is_exception(resource)
    policy := json.unmarshal(resource.change.after.policy)
    statement := policy.Statement[_]

    # Block overly broad service permissions
    statement.Action == "s3:*"
    msg := sprintf("IAM policy '%s' grants s3:* permissions. Use specific S3 actions (e.g., s3:GetObject, s3:PutObject).", [resource.address])
}

This policy checks Terraform plans before they’re applied. It blocks IAM policies that are too permissive. It forces developers to be specific about permissions.

Policy 2: Kubernetes Pod Security Baseline

This policy enforces pod security standards. It requires containers to run as non-root and use a read-only root filesystem.

package kubernetes.podsecurity

# Exemption for system namespaces. conftest does not use allow rules to
# suppress denies, so each deny rule checks the exemption explicitly.
exempt_namespace {
    input.metadata.namespace == "kube-system"
}

exempt_namespace {
    input.metadata.namespace == "kube-public"
}

deny[msg] {
    input.kind == "Pod"
    not exempt_namespace
    container := input.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("Container '%s' must run as non-root user. Set securityContext.runAsNonRoot: true", [container.name])
}

deny[msg] {
    input.kind == "Pod"
    not exempt_namespace
    container := input.spec.containers[_]
    not container.securityContext.readOnlyRootFilesystem
    msg := sprintf("Container '%s' must use read-only root filesystem. Set securityContext.readOnlyRootFilesystem: true", [container.name])
}

deny[msg] {
    input.kind == "Pod"
    not exempt_namespace
    container := input.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("Container '%s' cannot run in privileged mode. Remove securityContext.privileged or set to false", [container.name])
}

deny[msg] {
    input.kind == "Pod"
    not exempt_namespace
    container := input.spec.containers[_]
    not container.securityContext.allowPrivilegeEscalation == false
    msg := sprintf("Container '%s' must disable privilege escalation. Set securityContext.allowPrivilegeEscalation: false", [container.name])
}

This policy enforces the Kubernetes pod security baseline. It prevents containers from running with excessive privileges. It reduces the attack surface.
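For reference, here is a pod spec that passes all four rules above (image and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
  namespace: default
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.2.3
    securityContext:
      runAsNonRoot: true                # satisfies the non-root rule
      readOnlyRootFilesystem: true      # satisfies the read-only rule
      allowPrivilegeEscalation: false   # satisfies the escalation rule
      # privileged is left unset, which also passes
```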

Policy 3: Approved Registries and Signature Requirements

This policy ensures containers only come from approved registries and are signed.

package kubernetes.images

import future.keywords.in

approved_registries := {
    "gcr.io",
    "docker.io/your-org",
    "quay.io/your-org",
    "registry.example.com",
}

deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not approved_image(container.image)
    msg := sprintf("Container '%s' uses unapproved image '%s'. Only images from approved registries are allowed.", [container.name, container.image])
}

deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not has_digest(container.image)
    msg := sprintf("Container '%s' must use image digest instead of tag. Use format: image@sha256:...", [container.name])
}

# An image is approved when it starts with an approved registry prefix
# followed by "/". A prefix match also handles entries like
# "docker.io/your-org" that scope a registry to a single organization.
approved_image(image) {
    some registry in approved_registries
    startswith(image, sprintf("%s/", [registry]))
}

has_digest(image) {
    contains(image, "@sha256:")
}

This policy restricts where container images can come from. It requires image digests instead of tags. It prevents supply chain attacks.
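The matching logic is easy to reason about outside Rego. Here is the same pair of checks sketched in Python (registry list and helper names are illustrative, mirroring the policy above):

```python
# Sketch of the image checks: an image is approved when it starts with an
# approved registry (or registry/org) prefix, and it must be digest-pinned.
APPROVED_REGISTRIES = {
    "gcr.io",
    "docker.io/your-org",
    "quay.io/your-org",
    "registry.example.com",
}

def is_approved(image: str) -> bool:
    # Prefix match with a trailing "/" so "gcr.io.evil.com/x" never matches
    return any(image.startswith(reg + "/") for reg in APPROVED_REGISTRIES)

def has_digest(image: str) -> bool:
    # Digest-pinned references contain "@sha256:<hex digest>"
    return "@sha256:" in image
```

Note the trailing slash in the prefix check: without it, a lookalike registry such as gcr.io.evil.com would slip through.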

Enforce in CI

CI enforcement runs before code merges. It uses conftest (or similar tools) to evaluate policies against your Terraform plans and Kubernetes manifests.

Setup conftest

First, install conftest:

# macOS
brew install conftest

# Linux
wget https://github.com/open-policy-agent/conftest/releases/download/v0.45.0/conftest_0.45.0_Linux_x86_64.tar.gz
tar xzf conftest_0.45.0_Linux_x86_64.tar.gz
sudo mv conftest /usr/local/bin

Store your policies in a policies/ directory:

policies/
├── terraform/
│   └── iam.rego
└── kubernetes/
    ├── podsecurity.rego
    └── images.rego

Run conftest in CI

Add this to your GitHub Actions workflow:

name: Policy Check

on:
  pull_request:
    paths:
      - 'terraform/**'
      - 'k8s/**'

jobs:
  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Install conftest
        run: |
          wget https://github.com/open-policy-agent/conftest/releases/download/v0.45.0/conftest_0.45.0_Linux_x86_64.tar.gz
          tar xzf conftest_0.45.0_Linux_x86_64.tar.gz
          sudo mv conftest /usr/local/bin
      
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      # The on.pull_request.paths filter above already scopes this workflow
      # to Terraform and Kubernetes changes, so the steps run unconditionally.
      - name: Check Terraform policies
        run: |
          cd terraform
          terraform init
          # Note: terraform plan may need provider credentials configured
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json
          conftest test tfplan.json -p ../policies/terraform/

      - name: Check Kubernetes policies
        run: |
          conftest test k8s/ -p policies/kubernetes/

This workflow runs on every PR that touches terraform/ or k8s/ (the paths filter at the top). It checks Terraform plans and Kubernetes manifests. It fails the PR if any policy is violated.

Clear Failure Messages

When a policy fails, conftest shows clear messages:

$ conftest test tfplan.json -p policies/terraform/

FAIL - tfplan.json - terraform.iam - IAM policy 'aws_iam_role_policy.example' uses wildcard action '*'. Use specific actions instead.
FAIL - tfplan.json - terraform.iam - IAM policy 'aws_iam_role_policy.example' uses wildcard resource '*'. Use specific resource ARNs instead.

2 tests, 0 passed, 2 failures

The developer sees exactly what’s wrong. They know which resource failed and why. They can fix it before merging.
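A fix for the failing plan above might look like this (role and bucket names are illustrative): scope both the actions and the resource ARN so neither deny rule fires.

```hcl
# Passes the IAM guardrail: specific actions, specific resource ARN
resource "aws_iam_role_policy" "example" {
  name = "app-s3-access"
  role = aws_iam_role.app.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      Resource = "arn:aws:s3:::my-app-bucket/*"
    }]
  })
}
```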

Enforce at Admission

Admission control runs in your cluster. It uses OPA Gatekeeper to evaluate policies when resources are created.

Install Gatekeeper

Install Gatekeeper in your cluster:

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.14/deploy/gatekeeper.yaml

Wait for Gatekeeper to be ready:

kubectl wait --for=condition=Ready pod -l control-plane=controller-manager -n gatekeeper-system --timeout=90s

Create ConstraintTemplate

A ConstraintTemplate defines the policy logic. Here’s one for pod security:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredpodsecurity
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredPodSecurity
      validation:
        openAPIV3Schema:
          type: object
          properties:
            exemptNamespaces:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredpodsecurity

        # Pods in exempt namespaces (from the constraint's parameters) are
        # skipped. Gatekeeper exposes parameters via input.parameters.
        exempt_namespace {
          ns := input.parameters.exemptNamespaces[_]
          input.review.object.metadata.namespace == ns
        }

        violation[{"msg": msg}] {
          not exempt_namespace
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("Container '%s' must run as non-root user", [container.name])
        }

        violation[{"msg": msg}] {
          not exempt_namespace
          container := input.review.object.spec.containers[_]
          not container.securityContext.readOnlyRootFilesystem
          msg := sprintf("Container '%s' must use read-only root filesystem", [container.name])
        }

        violation[{"msg": msg}] {
          not exempt_namespace
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Container '%s' cannot run in privileged mode", [container.name])
        }

This ConstraintTemplate defines the Rego policy that Gatekeeper will enforce. It checks pod security settings.

Create Constraint

A Constraint applies the template to specific resources:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredPodSecurity
metadata:
  name: pod-must-be-secure
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    exemptNamespaces:
      - "kube-system"
      - "kube-public"

Apply both:

kubectl apply -f constrainttemplate.yaml
kubectl apply -f constraint.yaml

Test the Enforcement

Try to create a pod that violates the policy:

apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    securityContext:
      runAsUser: 0  # Runs as root
      privileged: true

Gatekeeper will reject it:

$ kubectl apply -f bad-pod.yaml

Error from server (Forbidden): error when creating "bad-pod.yaml": 
admission webhook "validation.gatekeeper.sh" denied the request: 
[pod-must-be-secure] Container 'app' must run as non-root user
[pod-must-be-secure] Container 'app' cannot run in privileged mode

The pod is blocked. It never reaches the cluster. The policy is enforced.

Make It Sustainable

Policy enforcement only works if it’s sustainable. Here’s how to keep it that way.

Exceptions via Reviewed Allow-Lists

Sometimes you need exceptions. Document them in an allow-list that’s reviewed:

# policies/exceptions.yaml
exceptions:
  iam_wildcards:
    - "aws_iam_role_policy.system_admin"  # Documented: System admin role requires broad permissions
    - "aws_iam_role_policy.legacy_app"     # Documented: Legacy app, migration planned Q2 2026
  
  privileged_containers:
    - namespace: "monitoring"
      name: "node-exporter"  # Documented: Requires host access for metrics

Reference the allow-list from the policy itself, and pass the file to conftest with --data policies/exceptions.yaml so it is available under data.exceptions. A standalone allow rule will not suppress a deny (conftest only evaluates deny, warn, and violation rules), so check the exception inside each deny rule via a helper:

is_exception(resource) {
    resource.address in data.exceptions.iam_wildcards
}

# Each deny rule then includes:
#     not is_exception(resource)

Exceptions require documentation. They require review. They’re temporary. They have migration plans.

Policy Test Cases

Test your policies. Write test cases that verify they work:

# policies/terraform/iam_test.rego
package terraform.iam

test_deny_wildcard_action {
    # "input" is reserved in Rego and cannot be assigned, so build the
    # test document under another name and substitute it with "with".
    test_input := {
        "resource_changes": [{
            "type": "aws_iam_role_policy",
            "address": "aws_iam_role_policy.example",
            "change": {
                "after": {
                    "policy": json.marshal({
                        "Statement": [{
                            "Action": "*",
                            "Resource": "*"
                        }]
                    })
                }
            }
        }]
    }
    expected := "IAM policy 'aws_iam_role_policy.example' uses wildcard action '*'. Use specific actions instead."
    deny[expected] with input as test_input
}

Run the tests with conftest verify (conftest test evaluates inputs against policies; conftest verify runs the Rego unit tests):

conftest verify --policy policies/terraform/

Tests ensure policies work as expected. They catch regressions. They document behavior.
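It is also worth asserting the happy path: a compliant plan should produce no denials. A sketch using the same (hypothetical) input shape:

```rego
test_no_deny_for_specific_actions {
    test_input := {
        "resource_changes": [{
            "type": "aws_iam_role_policy",
            "address": "aws_iam_role_policy.example",
            "change": {"after": {"policy": json.marshal({
                "Statement": [{
                    "Action": ["s3:GetObject"],
                    "Resource": "arn:aws:s3:::my-bucket/*"
                }]
            })}}
        }]
    }
    count(deny) == 0 with input as test_input
}
```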

Versioning and Rollout Strategy

Version your policies. Use semantic versioning. Tag releases:

git tag -a policies-v1.0.0 -m "Initial policy set"
git push origin policies-v1.0.0

Roll out policies gradually. Start with warnings. Move to enforcement:

# Phase 1: warn only. conftest reports warn results without failing
# the check (unless you run it with --fail-on-warn).
warn[msg] {
    # policy logic
}

# Phase 2: enforce. Same rule body, renamed to deny; violations now
# fail the check.
deny[msg] {
    # same policy logic
}
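At admission time, Gatekeeper supports the same phased rollout through the constraint's enforcementAction field: dryrun records violations in the constraint's status without blocking requests.

```yaml
# Phase 1 at admission: record violations without blocking
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredPodSecurity
metadata:
  name: pod-must-be-secure
spec:
  enforcementAction: dryrun  # switch to "deny" (the default) to enforce
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```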

Monitor policy violations. Track metrics. Adjust policies based on data.

What You’ve Learned

You now know how to:

  • Write Rego policies for Terraform and Kubernetes
  • Enforce policies in CI using conftest
  • Enforce policies at admission using Gatekeeper
  • Handle exceptions sustainably
  • Test and version policies

Policy-as-code isn’t just scanning. It’s enforcement. It’s gates that block bad configs. It’s consistency across environments. It’s defense in depth.

Start with one policy. Enforce it in CI. Then add admission. Then add more policies. Build it incrementally. Make it sustainable.

The goal isn’t perfect policies on day one. The goal is policies that actually block risk, enforced automatically, everywhere they need to be.
