Sometimes it’s necessary to manage VCL outside of the Varnish deployment lifecycle. For example, when your VCL is managed by separate teams or when Varnish Enterprise is used for multi-tenancy. In such cases, Varnish Enterprise Helm Chart can be configured to use an external ConfigMap as its source for VCL.
Do note that changes made to an external ConfigMap are not automatically propagated to running Pods by Kubernetes (files mounted via `subPath`, as in the configuration below, are never updated in place). It is necessary to run a `kubectl rollout` after the ConfigMap is updated for the changes to take effect:
```shell
$ kubectl rollout restart statefulset/varnish-enterprise  # for StatefulSet
$ kubectl rollout restart deployment/varnish-enterprise   # for Deployment
$ kubectl rollout restart daemonset/varnish-enterprise    # for DaemonSet
```
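To confirm that a restart has finished, you can watch the rollout until all Pods have been recreated. This is a sketch; the workload name `varnish-enterprise` and the `varnish` namespace follow the examples in this section:

```shell
# Block until every Pod has been recreated with the updated ConfigMap
kubectl rollout status statefulset/varnish-enterprise -n varnish
```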
To use an external ConfigMap, first create a ConfigMap with the desired VCL in the same namespace as Varnish Enterprise. For example, save the following content to `configmap.yaml` and apply it with `kubectl apply -f configmap.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-vcl
  namespace: varnish
data:
  default.vcl: |
    vcl 4.1;

    backend default {
      .host = "www.example.com";
      .port = "80";
    }

    sub vcl_backend_fetch {
      set bereq.http.Host = "www.example.com";
    }
```
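After applying, you can verify that the ConfigMap was created and contains the expected key. A minimal sketch, using the `varnish-vcl` name and `varnish` namespace from the example above:

```shell
kubectl apply -f configmap.yaml
# Print the default.vcl key to confirm the VCL content landed in the ConfigMap
# (the dot in the key name must be escaped in the jsonpath expression)
kubectl get configmap varnish-vcl -n varnish -o jsonpath='{.data.default\.vcl}'
```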
Once the ConfigMap is added, configure `server.vclConfig`, `server.extraVolumes`, and `server.extraVolumeMounts`:
```yaml
server:
  vclConfig: ""  # This value must be set to an empty string
  extraVolumes:
    - name: varnish-vcl
      configMap:
        name: varnish-vcl
  extraVolumeMounts:
    - name: varnish-vcl
      mountPath: /etc/varnish/default.vcl
      subPath: default.vcl  # Set this to the key name of the file in the ConfigMap
```
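These values can then be rolled out with a Helm upgrade. This is a sketch under assumptions: the release name `varnish-enterprise`, the chart reference, and the values file name `values.yaml` are illustrative and should match your actual installation:

```shell
# Apply the updated values to the existing release
# (replace the release name and chart reference with those of your install)
helm upgrade varnish-enterprise varnish-enterprise/varnish-enterprise \
  -n varnish -f values.yaml
```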
The `server.extraVolumeMounts[].mountPath` must match the value of `server.vclConfigPath`.
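To verify that the externally managed VCL is the one actually mounted, you can inspect the file inside a running Pod. The Pod name below is illustrative (a StatefulSet Pod from the examples above):

```shell
# The mounted file should match the default.vcl key of the ConfigMap
kubectl exec -n varnish varnish-enterprise-0 -- cat /etc/varnish/default.vcl
```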