Policy-as-Code Meets GitOps — Hardening Multi-Tenant Kubernetes Clusters with Kyverno and OPA
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│    Git Repo     │    │     Argo CD      │    │   Kubernetes    │
│                 │    │                  │    │     Cluster     │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │ ┌─────────────┐ │
│ │  Policies   │ │───▶│ │  Policy App  │ │───▶│ │   Kyverno   │ │
│ │ (Kyverno/   │ │    │ │              │ │    │ │  Policies   │ │
│ │   OPA)      │ │    │ └──────────────┘ │    │ └─────────────┘ │
│ └─────────────┘ │    │                  │    │                 │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │ ┌─────────────┐ │
│ │     App     │ │───▶│ │  App Deploy  │ │───▶│ │     OPA     │ │
│ │  Manifests  │ │    │ │              │ │    │ │ Gatekeeper  │ │
│ └─────────────┘ │    │ └──────────────┘ │    │ └─────────────┘ │
└─────────────────┘    └──────────────────┘    │                 │
         │                       │             │ ┌─────────────┐ │
         │                       │             │ │   Policy    │ │
         │                       │             │ │ Enforcement │ │
         │                       │             │ │ & Auditing  │ │
         │                       │             │ └─────────────┘ │
         │                       │             └─────────────────┘
         │                       │                      │
         ▼                       ▼                      ▼
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│    Developer    │    │      GitOps      │    │    Security     │
│    Workflow     │    │    Automation    │    │   Compliance    │
│                 │    │                  │    │                 │
│ 1. Push Code    │    │ 1. Detect        │    │ 1. Validate     │
│ 2. Create PR    │    │    Changes       │    │    Policies     │
│ 3. Policy Check │    │ 2. Apply         │    │ 2. Enforce      │
│ 4. Deploy       │    │    Policies      │    │    Rules        │
│                 │    │ 3. Deploy Apps   │    │ 3. Audit        │
└─────────────────┘    └──────────────────┘    └─────────────────┘
Kubernetes clusters are getting bigger. More teams. More applications. More complexity. And with that growth comes a real problem: how do you keep everything secure when you can’t watch every deployment?
Most companies start with RBAC. It’s a good first step. But RBAC alone won’t stop someone from deploying a container with root privileges or mounting the host filesystem. You need something stronger.
That’s where Policy-as-Code comes in. Instead of hoping your teams follow security guidelines, you write policies that enforce them automatically. And when you combine this with GitOps, you get something powerful: security policies that live in Git and get enforced the moment someone tries to deploy something.
The Problem with Traditional Approaches
RBAC controls who can perform which API operations. It says nothing about what's inside the objects they submit. A developer with deployment permissions can still create a pod that runs as root or mounts sensitive host directories.
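To make that concrete, here is a minimal, hypothetical Role: it grants a team full rights over Deployments, but nothing in it constrains what those Deployments may contain.
# Hypothetical Role: controls who may manage Deployments,
# not what the resulting pods are allowed to do.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-deployer
  namespace: team-frontend
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "update", "delete"]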
Here are the common issues I see:
- Privileged containers: Pods running with privileged: true get almost unlimited access to the host
- Host path mounts: Containers mounting /etc, /var, or other sensitive directories
- Unapproved images: Deploying containers from random registries without scanning
- Resource abuse: Teams consuming all cluster resources without limits
- Network exposure: Services accidentally exposed to the entire cluster
These aren’t theoretical problems. They happen in real clusters every day.
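For instance, nothing in RBAC alone blocks a manifest like this hypothetical one, which combines a privileged container with a sensitive hostPath mount:
# Hypothetical pod spec that RBAC alone will happily admit.
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell
  namespace: team-frontend
spec:
  containers:
    - name: shell
      image: alpine:latest
      securityContext:
        privileged: true          # full access to the host kernel
      volumeMounts:
        - name: host-etc
          mountPath: /host/etc
  volumes:
    - name: host-etc
      hostPath:
        path: /etc                # sensitive host directory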
Policy-as-Code: The Better Way
Policy-as-Code means writing your security rules as code. Not documentation. Not checklists. Actual code that runs automatically.
Two main tools dominate this space: Kyverno and OPA Gatekeeper. Both work, but they approach the problem differently.
Kyverno: Kubernetes-Native Policies
Kyverno policies are written in YAML. They feel like Kubernetes resources because they are. If you know Kubernetes, you can write Kyverno policies.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: check-image-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Only images from the approved company registry are allowed"
        pattern:
          spec:
            containers:
              - name: "*"
                image: "registry.company.com/*:*"
This policy ensures all containers come from your company’s registry. Simple. Clear. Effective.
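If you want actual signature verification rather than just a registry allow-list, Kyverno also supports verifyImages rules that check Cosign signatures. A minimal sketch, assuming your images are signed with a key pair you control (the registry path and public key below are placeholders):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.company.com/*"   # placeholder registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key>
                      -----END PUBLIC KEY-----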
OPA Gatekeeper: Rego-Powered Policies
OPA uses Rego, a purpose-built policy language. It’s more powerful but has a steeper learning curve.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the Constraint's `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }
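A ConstraintTemplate only defines the rule. To enforce it, you apply a Constraint that selects resources and supplies parameters; a minimal sketch (the label names are examples):
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team", "cost-center"]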
Both tools integrate with GitOps. Argo CD can deploy your policies from Git repositories. Flux can too. The policies become part of your infrastructure-as-code.
Building a Secure GitOps Pipeline
Here’s how you set this up in practice.
Step 1: Policy Repository Structure
policies/
├── kyverno/
│   ├── security/
│   │   ├── require-image-signature.yaml
│   │   ├── block-privileged-pods.yaml
│   │   └── enforce-resource-limits.yaml
│   └── governance/
│       ├── require-labels.yaml
│       └── namespace-quotas.yaml
├── opa/
│   ├── templates/
│   │   ├── required-labels.yaml
│   │   └── namespace-isolation.yaml
│   └── constraints/
│       ├── team-labels.yaml
│       └── network-policies.yaml
└── argocd/
    ├── policy-app.yaml
    └── tenant-apps.yaml
Step 2: Argo CD Application for Policies
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: security-policies
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/company/policies
    targetRevision: HEAD
    path: kyverno
  destination:
    server: https://kubernetes.default.svc
    namespace: kyverno
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
This deploys your Kyverno policies automatically. When you update policies in Git, Argo CD applies them to your cluster.
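Tenant workloads follow the same pattern. A sketch of a per-team Application (the project name and repo URL are placeholders), pinned to that team's namespace so it cannot deploy into another tenant's space:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-frontend-apps
  namespace: argocd
spec:
  project: team-frontend          # assumes an AppProject that restricts destinations
  source:
    repoURL: https://github.com/company/frontend-manifests   # placeholder repo
    targetRevision: HEAD
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-frontend
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Pair it with an Argo CD AppProject that allows only that destination namespace, and the GitOps layer itself becomes part of your tenant isolation.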
Step 3: Multi-Tenant Namespace Structure
apiVersion: v1
kind: Namespace
metadata:
  name: team-frontend
  labels:
    team: frontend
    environment: production
    cost-center: engineering
  annotations:
    kyverno.io/network-policy: "default-deny"
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-team-labels
spec:
  rules:
    - name: add-team-labels
      match:
        any:
          - resources:
              kinds:
                - Pod
                - Service
                - Deployment
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              team: "{{request.namespace}}"
              managed-by: kyverno
This automatically adds team labels to all resources in each namespace. No manual work required.
Real-World Policy Examples
Let’s look at some policies that actually solve problems.
Preventing Unapproved Images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-unapproved-images
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: check-image-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Only images from approved registries allowed"
        pattern:
          spec:
            containers:
              - name: "*"
                image: "registry.company.com/*:*"
    - name: block-latest-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Image tags must be specific versions, not 'latest'"
        pattern:
          spec:
            containers:
              - name: "*"
                image: "!*:latest"
This policy does two things: ensures all images come from your company registry and blocks the use of :latest tags.
Enforcing Resource Quotas Per Tenant
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-resource-quotas
spec:
  rules:
    - name: add-resource-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              # (name): "*" matches every container; +() adds values only if missing
              - (name): "*"
                resources:
                  limits:
                    +(cpu): "500m"
                    +(memory): "512Mi"
                  requests:
                    +(cpu): "100m"
                    +(memory): "128Mi"
This automatically adds resource limits to any pod that doesn’t have them. No more resource-hungry containers bringing down the cluster.
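Per-pod defaults still don't cap a team's total consumption. Pair the policy with a namespace-level ResourceQuota; a minimal sketch (the numbers are placeholders to tune per tenant):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-frontend-quota
  namespace: team-frontend
spec:
  hard:
    requests.cpu: "10"       # total CPU requested across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"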
Network Isolation Between Tenants
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-deny-network-policy
spec:
  rules:
    - name: create-default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
This creates a default-deny NetworkPolicy for every new namespace. Teams must explicitly allow the traffic they need.
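Here is roughly what an explicit allow looks like, sketched for intra-namespace traffic plus DNS egress (selectors and ports will differ in your environment):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-and-dns
  namespace: team-frontend
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}         # any pod in this namespace
  egress:
    - to:
        - podSelector: {}         # any pod in this namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53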
Continuous Compliance and Auditing
Policies are great, but you need to know they’re working. Here’s how to audit your cluster.
Kyverno Policy Verification
# Check if policies are working
kubectl kyverno apply policies/ --resource=deployment.yaml
# Test specific policies
kubectl kyverno test policies/test-cases/
# Generate policy reports
kubectl get policyreports -A
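Policy reports are ordinary Kubernetes resources you can query and alert on. A failing entry looks roughly like this (names and counts are illustrative):
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-team-frontend     # illustrative report name
  namespace: team-frontend
summary:
  pass: 12
  fail: 1
  warn: 0
  error: 0
  skip: 0
results:
  - policy: block-unapproved-images
    rule: check-image-registry
    result: fail
    message: "Only images from approved registries allowed"
    resources:
      - apiVersion: v1
        kind: Pod
        name: debug-shell
        namespace: team-frontend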
OPA Policy Testing
# Test policies locally
opa test policies/opa/
# Evaluate policies against cluster state
kubectl get pods -o json | opa eval -d policies/opa/ -i /dev/stdin "data.pods.violations"
CI/CD Integration
name: Policy Validation
on:
  pull_request:
    paths:
      - 'manifests/**'
      - 'policies/**'
jobs:
  validate-policies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Kyverno CLI
        run: |
          # Download a pinned release of the Kyverno CLI (adjust the version as needed)
          KYVERNO_VERSION=v1.12.0
          curl -sLO "https://github.com/kyverno/kyverno/releases/download/${KYVERNO_VERSION}/kyverno-cli_${KYVERNO_VERSION}_linux_x86_64.tar.gz"
          tar -xzf "kyverno-cli_${KYVERNO_VERSION}_linux_x86_64.tar.gz" kyverno
          chmod +x kyverno
      - name: Validate policies
        run: |
          ./kyverno apply policies/ --resource=manifests/
      - name: Test policies
        run: |
          ./kyverno test policies/test-cases/
This runs policy validation on every pull request. No policy violations make it to production.
The Future of Kubernetes Governance
Policy-as-Code isn’t just about security. It’s about making Kubernetes manageable at scale. When you have hundreds of teams deploying thousands of applications, you need automation.
Here’s what I see coming:
Better Policy Languages: Tools like CUE and Jsonnet are making policy writing easier. Less YAML, more logic.
Policy Composition: Instead of writing one giant policy, you’ll compose smaller, reusable policies. Like functions in code.
Runtime Policy Updates: Policies that can change based on cluster state. Dynamic security rules that adapt to threats.
Policy Marketplace: Shared policy libraries. Why write your own image scanning policy when you can use one that’s been battle-tested?
AI-Assisted Policy Writing: Tools that suggest policies based on your cluster’s behavior. “I noticed you always block privileged containers. Want me to write a policy for that?”
Getting Started
Don’t try to implement everything at once. Start small:
- Pick one tool: Kyverno or OPA. Learn it well before adding complexity.
- Start with obvious policies: Block privileged pods. Require resource limits. These solve real problems immediately.
- Use audit mode first: Let policies warn instead of block. See what they catch before enforcing them (see the snippet after this list).
- Integrate with GitOps: Deploy policies from Git. Make them part of your infrastructure.
- Measure everything: Track policy violations. See what’s actually happening in your cluster.
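Audit mode is a one-line change on the policy itself. A sketch of the privileged-pod block from the repo layout above, deployed in audit mode first:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-privileged-pods
spec:
  validationFailureAction: Audit   # report only; switch to Enforce once you trust the results
  background: true
  rules:
    - name: deny-privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed"
        pattern:
          spec:
            containers:
              # if securityContext.privileged is set, it must be false
              - =(securityContext):
                  =(privileged): "false"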
The goal isn’t perfect security. It’s better security with less work. Policy-as-Code gives you that. Your teams can deploy faster because they know the policies will catch mistakes. Your security team can sleep better because they know the policies are always running.
And when something goes wrong, you have a clear audit trail. The policy was in Git. The violation was logged. The fix is already deployed.
That’s the power of Policy-as-Code meets GitOps. Security that scales with your cluster.