For cache invalidation requests to be effective, they need to reach every node in a cluster, otherwise some nodes may still keep the object in their cache. In Kubernetes, where clusters are dynamic, we rely on the discovery and broadcaster services to track the pods and relay invalidation requests to all of them.
This page explains the base configuration to get started quickly, but your VCL should also contain actual invalidation logic.
Varnish Enterprise should be installed and deployed with the official Varnish Enterprise Helm chart, as described in the Varnish Enterprise Installation Guide.
The following `values.yaml` does a few things:

- adds a `discovery` container, listening to Kubernetes updates of the cluster and updating `/etc/nodes/nodes.conf`
- adds a `broadcaster` container, relaying invalidation requests to all the varnish pods listed in `/etc/nodes/nodes.conf`
- mounts the `varnish-vsm` volume in both of them so they can read and write `/etc/nodes/nodes.conf`
- creates the `peers-headless-svc` service so that the broadcaster containers can be contacted on port 8088, and the varnish ones on their HTTP listening port
- adds RBAC rules allowing `discovery` to read the Kubernetes API and list the pods.

```yaml
global:
  imagePullSecrets:
    - name: varnish-pull-secret
server:
  vclConfig: |
    # replace this with your VCL and invalidation logic
    vcl 4.1;

    backend default none;

    # this just clears the cache completely, without any access-control
    sub vcl_recv {
        if (req.method == "BAN") {
            ban("obj.status != 0");
            return(synth(200));
        }
    }
  extraContainers: |
    - name: {{ .Release.Name }}-broadcaster
      image: quay.io/varnish-software/varnish-broadcaster:1.6.0
      env:
        - name: VARNISH_BROADCASTER_EXTRA
          value: "-confwatch 1s -cfg /etc/nodes/nodes.conf"
      volumeMounts:
        - name: {{ .Release.Name }}-varnish-vsm
          mountPath: /etc/nodes
    - name: {{ .Release.Name }}-discovery
      image: quay.io/varnish-software/varnish-discovery:1.5.0
      env:
        - name: VARNISH_DISCOVERY_FLAGS
          value: |
            k8s
            -server https://$(KUBERNETES_SERVICE_HOST)/
            -group {{ .Release.Name }}-peers
            -port {{ .Values.server.http.port }}
            -nodefile /etc/nodes/nodes.conf
      volumeMounts:
        - name: {{ .Release.Name }}-varnish-vsm
          mountPath: /etc/nodes
  extraManifests:
    # Headless service so pods can discover each other
    - name: peers-headless-svc
      checksum: true
      data: |
        apiVersion: v1
        kind: Service
        metadata:
          name: {{ .Release.Name }}-peers
          namespace: {{ .Release.Namespace }}
        spec:
          clusterIP: None
          selector:
            app.kubernetes.io/name: varnish-enterprise
            app.kubernetes.io/instance: {{ .Release.Name }}
          ports:
            - name: http-varnish
              port: {{ .Values.server.http.port }}
              targetPort: {{ .Values.server.http.port }}
            - name: http-broadcaster
              port: 8088
              targetPort: 8088
    - name: clusterrolebinding
      data: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: {{ include "varnish-enterprise.fullname" . }}-clusterrolebinding
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: {{ include "varnish-enterprise.fullname" . }}-clusterrole
        subjects:
          - kind: ServiceAccount
            name: {{ include "varnish-enterprise.serviceAccountName" . }}
            namespace: {{ .Release.Namespace }}
    - name: clusterrole
      data: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          name: {{ include "varnish-enterprise.fullname" . }}-clusterrole
        rules:
          - apiGroups: [""]
            resources: ["endpoints"]
            verbs: ["get", "list", "watch"]
```
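With this saved as `values.yaml`, the deployment itself is a regular Helm operation. The exact chart reference comes from the Installation Guide; the repo alias `varnish` and the `varnish` namespace below are assumptions to adapt:

```sh
# assumption: the chart repository was added under the alias "varnish",
# as described in the Varnish Enterprise Installation Guide
$ helm upgrade --install varnish-enterprise varnish/varnish-enterprise \
    --namespace varnish --create-namespace \
    --values values.yaml
```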
You can test it by sending an HTTP request to one of the broadcaster containers, and it will reply with a report of the responses from all the pods:
```
$ curl varnish-enterprise-peers:8088 -X BAN
{
  "method": "BAN",
  "uri": "/",
  "ts": 1768007111,
  "nodes": {
    "varnish-enterprise-7fd48b749b-86tbr": 200,
    "varnish-enterprise-7fd48b749b-bj29v": 200,
    "varnish-enterprise-7fd48b749b-sznpm": 200
  },
  "rate": 100,
  "done": true
}
```
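In scripts, it is enough to check that `done` is true and that every node answered 200. A sketch of such a check using `jq` against the same endpoint:

```sh
$ curl -s -X BAN varnish-enterprise-peers:8088 \
    | jq -e '.done and ([.nodes[]] | all(. == 200))'
true
```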
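The sample `vclConfig` above wipes the whole cache on any BAN request, from any client. In production you will typically restrict invalidation to an ACL and only ban the URLs matching the request. A minimal sketch of such logic; the ACL range and the `x-url` helper header are assumptions to adapt to your setup:

```vcl
vcl 4.1;

backend default none;

# assumption: adjust this range to where your invalidation traffic comes from
acl invalidators {
    "10.0.0.0"/8;
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (client.ip !~ invalidators) {
            return(synth(403, "Forbidden"));
        }
        # ban every cached object whose recorded URL matches the request URL
        ban("obj.http.x-url ~ " + req.url);
        return(synth(200, "Ban added"));
    }
}

sub vcl_backend_response {
    # record the URL on the object so the ban lurker can match it later
    set beresp.http.x-url = bereq.url;
}

sub vcl_deliver {
    # don't expose the helper header to clients
    unset resp.http.x-url;
}
```

With this in place, `curl -X BAN varnish-enterprise-peers:8088/products/42` would invalidate only the objects whose URL matches `/products/42`, on every pod at once.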