By Appropri8 Team

Sidecarless Service Mesh: The Next Evolution of Cloud-Native Networking

kubernetes · service-mesh · ebpf · cloud-native · networking · cilium · istio

Service meshes have come a long way since Linkerd first introduced the concept. We’ve seen Istio dominate the landscape, and now we’re witnessing the next big shift: sidecarless service meshes.

The traditional sidecar model works, but it comes with costs. Every pod needs a proxy container, which means more memory, more complexity, and more latency. The sidecarless approach changes this by moving the data plane into the kernel using eBPF.

This isn’t just a minor improvement. It’s a fundamental shift in how we think about service mesh architecture.

The Evolution of Service Meshes

Let’s start with how we got here. Linkerd was the first to popularize the service mesh with its sidecar proxy model. It worked well for basic use cases but had limitations around performance and resource usage.

Istio came next and became the de facto standard. It brought powerful features like traffic management, security policies, and observability. But it still used the sidecar model, which meant every application pod needed an Envoy proxy container.

The sidecar approach has some real problems:

  • Resource overhead: Each sidecar consumes memory and CPU
  • Latency: Every request traverses two extra user-space proxies, adding milliseconds
  • Complexity: Managing sidecar lifecycle alongside applications
  • Startup time: Pods take longer to become ready

Enter the sidecarless model. Instead of running proxies as containers, we move the networking logic into the kernel using eBPF. This gives us the same functionality with better performance.

Traditional vs Sidecarless Architecture

Traditional Sidecar Model

In the traditional model, every pod gets a sidecar proxy container. When your app wants to talk to another service, the request goes:

  1. App → Sidecar Proxy
  2. Sidecar Proxy → Network
  3. Network → Target Sidecar Proxy
  4. Target Sidecar Proxy → Target App

Each hop adds latency. Each sidecar consumes resources. And you need to manage the lifecycle of these proxies alongside your applications.

The sidecar handles all the service mesh features:

  • Traffic routing and load balancing
  • Mutual TLS encryption
  • Observability and metrics
  • Security policies

Sidecarless Model

The sidecarless approach eliminates the proxy containers entirely. Instead, eBPF programs in the kernel handle the networking logic.

Your app talks directly to the network, but the kernel intercepts the traffic and applies service mesh policies. This happens at the kernel level, so it’s faster and uses fewer resources.

The data plane is now separated from the control plane. The control plane (Istio’s istiod or Cilium’s agent and operator) manages policies and configuration. The data plane runs as eBPF programs in the kernel.

Key Innovations Enabling Sidecarless Design

eBPF and XDP

eBPF (extended Berkeley Packet Filter) is the key technology here. It allows us to run sandboxed programs in the kernel without changing kernel source code or loading kernel modules.

XDP (eXpress Data Path) is an eBPF-based framework for high-performance packet processing. It attaches programs at the network driver level, so they run before the kernel’s networking stack processes the packet.

Together, these technologies let us:

  • Intercept network traffic at the kernel level
  • Apply policies without user-space overhead
  • Maintain security and observability features
  • Achieve near-native network performance
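
You can see these kernel programs directly on a node. A sketch using bpftool, assuming it is installed and you have root access on the node:

# List all eBPF programs currently loaded in the kernel
bpftool prog show

# Show XDP and tc programs attached to network interfaces
bpftool net show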

Cilium’s Approach

Cilium was one of the first projects to implement a sidecarless service mesh. It uses eBPF to:

  • Handle load balancing and service discovery
  • Implement network policies
  • Provide observability through kernel-level tracing
  • Enable transparent encryption

Cilium’s eBPF programs run in the kernel and handle traffic before it reaches user space. This eliminates the need for sidecar proxies while maintaining all service mesh functionality.
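
For example, the in-kernel load-balancing state is visible straight from the agent’s eBPF maps. A sketch (the embedded agent CLI is named cilium-dbg in newer releases):

# Dump the eBPF service/load-balancing maps from a Cilium agent
kubectl exec -n kube-system ds/cilium -- cilium bpf lb list

# List services as the agent sees them
kubectl exec -n kube-system ds/cilium -- cilium service list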

Istio Ambient Mesh

Istio’s Ambient Mesh is their sidecarless implementation. It moves L4 processing into ztunnel, a lightweight per-node proxy that runs as a DaemonSet, and reserves L7 processing for optional waypoint proxies that run as regular pods, not sidecars. The istio-cni node agent redirects pod traffic into ztunnel transparently.

The ambient mode separates concerns:

  • ztunnel, the per-node proxy, handles L4 networking (routing, load balancing, mTLS)
  • Waypoint proxies handle L7 features (HTTP routing, telemetry)
  • Applications run without sidecars
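
Waypoints are declared as Kubernetes Gateway API resources. A minimal sketch of a waypoint for the default namespace, assuming the Gateway API CRDs are installed (recent istioctl versions can generate an equivalent resource via istioctl waypoint apply):

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE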

Zero-Trust and mTLS

One concern with sidecarless meshes is security. How do we maintain zero-trust networking and mutual TLS without sidecar proxies?

The answer is moving enforcement below the application:

  • Cilium provides transparent encryption in the kernel (WireGuard or IPsec), with SPIFFE-based mutual authentication available on top
  • eBPF enforces network policies before packets reach applications
  • Istio Ambient Mesh terminates mTLS in the per-node ztunnel proxy, with waypoint proxies adding L7 security features

The security guarantees are the same as with sidecar proxies; what changes is where they are enforced.
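
As a concrete example, Cilium’s transparent encryption can be switched on through Helm values on an existing install; a sketch assuming a Helm-managed deployment:

# Enable node-to-node transparent encryption with WireGuard
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set encryption.enabled=true \
  --set encryption.type=wireguard

# Verify from the agent
kubectl exec -n kube-system ds/cilium -- cilium status | grep -i encryption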

Step-by-Step Implementation Guide

Let’s walk through setting up a sidecarless service mesh. I’ll show you both Cilium Service Mesh and Istio Ambient Mesh.

Installing Cilium Service Mesh

First, let’s install Cilium with service mesh features enabled:

# Add the Cilium Helm repository
helm repo add cilium https://helm.cilium.io/
helm repo update

# Install Cilium with service mesh features
# (these Helm values exist in Cilium 1.15; adjust for your version)
helm install cilium cilium/cilium \
  --version 1.15.0 \
  --namespace kube-system \
  --set cluster.name=cluster1 \
  --set cluster.id=1 \
  --set kubeProxyReplacement=true \
  --set ingressController.enabled=true \
  --set gatewayAPI.enabled=true \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true

This installs Cilium with its eBPF data path acting as the mesh: kube-proxy replacement for in-kernel load balancing, Ingress and Gateway API support for L7 routing, and Hubble for observability. Note that gatewayAPI.enabled requires the Gateway API CRDs to be installed first.
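
Once the rollout finishes, the Cilium CLI (a separate binary from the agent) can confirm everything is healthy; the connectivity test is optional and deploys temporary test workloads:

# Wait until Cilium reports ready
cilium status --wait

# Optional end-to-end connectivity test
cilium connectivity test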

Installing Istio Ambient Mesh

For Istio Ambient Mesh, the installation is different:

# Download and install Istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.20.0
export PATH=$PWD/bin:$PATH

# Install Istio with the ambient profile
istioctl install --set profile=ambient --skip-confirmation

# Enable ambient mode for a namespace
kubectl label namespace default istio.io/dataplane-mode=ambient
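
After installation, the ambient data plane components should be running: istiod, the ztunnel DaemonSet, and the istio-cni node agent. A quick check:

# Ambient components run in istio-system
kubectl get pods -n istio-system
kubectl get daemonset ztunnel -n istio-system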

Enabling Sidecarless Traffic Interception

With Cilium, traffic interception happens automatically through eBPF. You don’t need to modify your applications.

For Istio Ambient Mesh, you need to enable it per namespace:

# Enable ambient mode
kubectl label namespace default istio.io/dataplane-mode=ambient

# Verify the namespace is in ambient mode
kubectl get namespace -L istio.io/dataplane-mode

Network Policies

Here’s a NetworkPolicy that works with both sidecarless models:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 8080
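
With Cilium, you can go beyond what standard NetworkPolicy expresses and enforce L7 rules in the datapath. A sketch using the CiliumNetworkPolicy CRD (the labels and path are illustrative):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend-http
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        # Only allow GET requests under /api/
        - method: GET
          path: /api/.*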

Verification Commands

Check that your sidecarless mesh is working:

# For Cilium - check eBPF programs
kubectl exec -n kube-system ds/cilium -- cilium status

# For Istio Ambient - check istiod and the ztunnel node proxies
kubectl get pods -n istio-system

# Check that traffic is being intercepted
kubectl exec -n default deployment/frontend -- curl -v http://backend:8080

# View control plane metrics (istiod exposes Prometheus metrics on port 15014)
kubectl port-forward -n istio-system svc/istiod 15014:15014
# Then visit http://localhost:15014/metrics
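
On the Cilium side, Hubble gives flow-level visibility into what the eBPF datapath is doing. A sketch assuming the Cilium and Hubble CLIs are installed and Hubble Relay is enabled:

# Forward the Hubble Relay port to localhost
cilium hubble port-forward &

# Watch live flows in the default namespace
hubble observe --namespace default --follow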

Benchmarking & Performance

The performance improvements with sidecarless meshes are significant. The figures below are representative ballpark numbers; your results will vary with workload, hardware, and kernel version:

Latency Comparison

  • Traditional sidecar: +2-3ms per request
  • Sidecarless (eBPF): +0.1-0.5ms per request

The eBPF approach reduces latency by 80-90% compared to sidecar proxies.

Memory Usage

  • Traditional sidecar: +50-100MB per pod
  • Sidecarless: +5-10MB per node

Memory usage drops dramatically because you’re not running proxy containers in every pod.

CPU Overhead

  • Traditional sidecar: +10-20% CPU per pod
  • Sidecarless: +2-5% CPU per node

CPU overhead is much lower because eBPF programs are more efficient than user-space proxies.

Sample Prometheus Metrics

Here’s what you’ll see in your metrics:

# Traditional sidecar metrics (Envoy, Prometheus format)
envoy_http_downstream_rq_total{cluster="backend"} 1000
envoy_http_downstream_rq_time_bucket{cluster="backend",le="0.005"} 950

# Sidecarless metrics (Cilium datapath)
cilium_forward_count_total{direction="egress"} 1000
cilium_forward_count_total{direction="ingress"} 1000

The sidecarless metrics show traffic being handled at the kernel level without the overhead of user-space proxies.
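
To turn these counters into the latency and throughput comparisons above, a couple of PromQL sketches (metric names as exposed by Envoy and Cilium):

# p95 request latency through sidecar proxies, per upstream cluster
histogram_quantile(0.95,
  sum(rate(envoy_http_downstream_rq_time_bucket[5m])) by (le, cluster))

# Packets per second forwarded by Cilium's eBPF datapath
sum(rate(cilium_forward_count_total[5m])) by (direction)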

Best Practices & Migration Strategy

Gradual Migration

Don’t try to migrate everything at once. Start with a single namespace:

  1. Test in development first
  2. Migrate one namespace at a time
  3. Monitor performance and functionality
  4. Gradually expand to production workloads
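
With Istio Ambient, this maps directly onto namespace labels, which also gives you a one-line rollback. A sketch (the staging namespace is illustrative):

# Step 2: move one namespace into ambient mode
kubectl label namespace staging istio.io/dataplane-mode=ambient

# Step 3: watch ztunnel while traffic flows through it
kubectl logs -n istio-system ds/ztunnel --tail=50

# Rollback: remove the label to take the namespace back out of the mesh
kubectl label namespace staging istio.io/dataplane-mode-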

When to Use Sidecarless

Sidecarless meshes work well for:

  • High-performance applications
  • Resource-constrained environments
  • Simple networking requirements
  • Applications that don’t need complex L7 features

When to Stick with Sidecars

Keep sidecars for:

  • Applications requiring complex HTTP routing
  • Legacy applications that can’t be modified
  • Environments where kernel-level changes aren’t allowed
  • Applications needing advanced L7 observability

Migration Checklist

Before migrating:

  • Verify your applications work with the new networking model
  • Test security policies and mTLS
  • Validate observability and monitoring
  • Plan rollback procedures
  • Update documentation and runbooks

Conclusion

Sidecarless service meshes represent the next evolution in cloud-native networking. By moving the data plane into the kernel using eBPF, we get better performance, lower resource usage, and simpler operations.

The technology is maturing quickly. Cilium Service Mesh and Istio Ambient Mesh both provide solid implementations with different trade-offs; check each project’s current feature maturity before committing production workloads.

eBPF is changing how we think about networking in Kubernetes. It’s not just about service meshes - it’s about making networking faster, more secure, and easier to manage.

The future of cloud-native networking is kernel-level. Sidecarless service meshes are just the beginning of what’s possible with eBPF and modern kernel technologies.

If you’re running a service mesh today, it’s worth exploring the sidecarless options. The performance benefits are real, and the operational complexity is lower. The question isn’t whether to adopt sidecarless meshes, but when.
