
Setting up Varnish Clustering with Helm

Introduction

Varnish Cluster is a solution for increasing cache hit rate in a Varnish Enterprise deployment and reducing load on the origin service. It’s dynamic, scalable, and can be enabled with just a few lines of VCL.

This guide covers how to enable a dynamic cluster using the official Varnish Enterprise Helm chart.

Before we begin

Varnish Enterprise must be installed and deployed with the official Varnish Enterprise Helm chart, as described in the Varnish Enterprise Installation Guide.

Configuration

Create a cluster token

Generate a token (for example, a 32-byte hex string) and store it as a Kubernetes secret so that every pod in the cluster shares the same token.

kubectl create secret generic varnish-cluster-secret --from-literal=token=$(openssl rand -hex 32)
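
To confirm that the secret exists and holds the expected value, the token can be read back and decoded. This is just a sanity check; the secret name matches the one created above:

kubectl get secret varnish-cluster-secret -o jsonpath='{.data.token}' | base64 -d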

Prepare the overrides

The Varnish Enterprise Helm chart ships with sensible defaults, so an override file only needs to contain what you want to change. In the override file (cluster-values.yaml in this case) we will:

  • Inject cluster token into VCL.
  • Wire in custom VCL for clustering.
  • Add a headless service for CoreDNS-based peer discovery.
  • Optionally, expose the Varnish service (e.g. via a LoadBalancer or an Ingress).

server:

  # Inject the cluster token into VCL as an env var
  extraEnvs:
    CLUSTER_TOKEN:
      valueFrom:
        secretKeyRef:
          name: varnish-cluster-secret
          key: token

  # Custom VCL to enable cluster
  vclConfig: |
    vcl 4.1;
    import activedns;
    include "cluster.vcl";

    backend origin {
      .host = "your-origin.example.com";
      .port = "80";
    }

    sub vcl_init {
      # Turn on trace headers for debug (optional)
      cluster_opts.set("trace", "true");
      # For dynamic cluster, create a DNS group with the domain name for headless service
      new cluster_group = activedns.dns_group(
        "{{ .Release.Name }}-peers.{{ .Release.Namespace }}.svc.cluster.local"
      );
      # Subscribe the cluster director to the DNS group
      cluster.subscribe(cluster_group.get_tag());
      # Set the cluster token
      cluster_opts.set("token", "${CLUSTER_TOKEN}");
    }

    sub vcl_backend_fetch {
      set bereq.backend = origin;
    }

  # Optional: expose via LoadBalancer for quick testing
  # Omit or replace with your own exposure method if you already have an Ingress, NodePort, etc.
  service:
    type: LoadBalancer

# Headless service so pods can discover each other
extraManifests:
  - name: peers-headless-svc
    checksum: true
    data: |
      apiVersion: v1
      kind: Service
      metadata:
        name: {{ .Release.Name }}-peers
        namespace: {{ .Release.Namespace }}
      spec:
        clusterIP: None
        selector:
          app.kubernetes.io/name: varnish-enterprise
          app.kubernetes.io/instance: {{ .Release.Name }}
        ports:
          - name: http
            port: 6081
            targetPort: 6081
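
Before deploying, it can be useful to render the chart locally and confirm that the headless service and the CLUSTER_TOKEN environment variable come out as expected. The release name varnish-enterprise below matches the one used in the next step; adjust it if yours differs:

helm template varnish-enterprise varnish/varnish-enterprise -f cluster-values.yaml > rendered.yaml
grep -n 'varnish-enterprise-peers' rendered.yaml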

Deploy or Upgrade

Apply the changes defined in the override file on top of the chart defaults:

helm upgrade varnish-enterprise varnish/varnish-enterprise -f cluster-values.yaml
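
Once the upgrade is applied, make sure all Varnish pods are up before moving on to validation. The label selector below assumes the chart's default labels; adjust it to match your release if needed:

kubectl get pods -l app.kubernetes.io/instance=varnish-enterprise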

Validation

Check cluster metrics

The following command displays metrics such as cluster_stats.error_token and cluster_stats.self_identified:

kubectl exec -it <any-pod-name> -- varnishstat -1 -f 'KVSTORE.cluster_stats.*'

Refer to the cluster observability section for more details about the metrics.
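
Because every peer keeps its own counters, it can also be useful to compare the cluster statistics across all pods at once. A small shell loop along these lines works, again assuming the chart's default labels:

for pod in $(kubectl get pods -l app.kubernetes.io/instance=varnish-enterprise -o name); do
  echo "== ${pod} =="
  kubectl exec "${pod}" -- varnishstat -1 -f 'KVSTORE.cluster_stats.*'
done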

List backends

Run the following command to verify you have one backend entry per peer (including self).

kubectl exec -it <any-pod-name> -- varnishadm backend.list '*.*'
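
If the list looks incomplete, check that the headless service resolves to one IP per pod, since this is what the activedns DNS group subscribes to. A throwaway busybox pod can be used for the lookup; replace the release name and namespace below with your own:

kubectl run dns-check --rm -it --image=busybox --restart=Never -- \
  nslookup varnish-enterprise-peers.default.svc.cluster.local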

Check varnishlog

Cluster logs are prefixed with Cluster: and are logged with the VCL_Log tag. They can be observed with the following varnishlog command:

$ kubectl exec -it <any-pod-name> -- varnishlog -g raw -q 'VCL_Log ~ "^Cluster:"' -i VCL_Log
       146 VCL_Log        b Cluster: Not self-routing, this is a primary node
       ...
        18 VCL_Log        b Cluster: Self-routing to primary node
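
Cluster log entries only appear once traffic flows through Varnish. For a quick test, port-forward the Varnish service and send a request with curl while watching the log in another terminal. The service name and port below are assumptions based on the defaults in this guide; check kubectl get svc for the actual values:

kubectl port-forward svc/varnish-enterprise 8080:80 &
curl -s -o /dev/null -D - http://localhost:8080/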
