
External VCL ConfigMap

Sometimes it’s necessary to manage VCL outside of the Varnish deployment lifecycle, for example when your VCL is maintained by separate teams, or when Varnish Enterprise is used for multi-tenancy. This page shows how to use external ConfigMaps with both a single VCL file and multiple VCL files.

ConfigMap for single VCL file

With one VCL file, you can configure the Varnish Helm Chart to use an external ConfigMap via server.extraVolumes and server.extraVolumeMounts.

Step 1: Create a ConfigMap

The ConfigMap must be in the same namespace as the Varnish Enterprise deployment.

apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-vcl
  namespace: varnish
data:
  default.vcl: |
    vcl 4.1;

    backend default {
      .host = "www.example.com";
      .port = "80";
    }

    sub vcl_backend_fetch {
      set bereq.http.Host = "www.example.com";
    }
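
Assuming the manifest above is saved as varnish-vcl.yaml (the filename is only an example), it can be applied with kubectl:

$ kubectl apply -f varnish-vcl.yaml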

Step 2: Configure Varnish Enterprise

server.vclConfig must be empty and the VCL must be mounted as default.vcl.

server:
  vclConfig: "" # Leave this value empty

  extraVolumes:
    - name: varnish-vcl
      configMap:
        name: varnish-vcl

  extraVolumeMounts:
    - name: varnish-vcl
      mountPath: /etc/varnish/default.vcl
      subPath: default.vcl
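
With these values in place, deploy or update the release. As a minimal sketch, assuming the values are saved as values.yaml and the release is named varnish-enterprise (substitute the chart reference used in your environment):

$ helm upgrade --install varnish-enterprise varnish/varnish-enterprise --namespace varnish -f values.yaml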

ConfigMap for multiple VCL files

To use multiple VCL files, a cmdfile can be used to load VCL files into labels, which the main VCL then dispatches to based on the matching host. This approach requires at least three configuration files: a cmdfile, a main VCL, and a tenant VCL.

Varnish Helm Chart v1.2.0+

Varnish Helm Chart v1.2.0 and later can inline the cmdfile and specify additional VCL files directly in the values file. This vastly simplifies using external ConfigMaps for multi-tenancy with multiple VCL files.

In this case, both the cmdfile and the main VCL file are tied to the Varnish deployment lifecycle, while the tenant VCL can be managed externally.

Step 1: Create a tenant ConfigMap

For each tenant, create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-vcl-tenant1
  namespace: varnish
data:
  tenant1.vcl: |
    vcl 4.1;

    backend default {
      .host = "www.example.com";
      .port = "80";
    }

    sub vcl_backend_fetch {
      set bereq.http.Host = "www.example.com";
    }
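
Alternatively, if each tenant’s VCL already lives in its own file, the ConfigMap can be generated straight from that file. For a hypothetical second tenant with its VCL in tenant2.vcl, that would look like:

$ kubectl create configmap varnish-vcl-tenant2 --namespace varnish --from-file=tenant2.vcl

Each additional tenant also needs matching extraVolumes/extraVolumeMounts entries and vcl.load/vcl.label lines in the cmdfile, as shown in the next step.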

Step 2: Configure Varnish Enterprise

The cmdfile and the main VCL can be configured with server.cmdfileConfig and server.vclConfigs, respectively. The cmdfile is loaded automatically when it is defined, and changes made to main.vcl via Helm values will automatically trigger a rollout of Varnish.

server:
  vclConfig: |
    vcl 4.1;

    backend default {
      .host = "127.0.0.1";
      .port = "8090";
    }

  vclConfigs:
    main.vcl: |
      vcl 4.1;

      import std;

      backend default {
        .host = "127.0.0.1";
        .port = "8090";
      }

      sub vcl_recv {
        set req.http.host = std.tolower(req.http.host);

        if (req.http.host ~ "(^|\.)example\.com(\:[0-9]+)?$") {
          return (vcl(label_tenant1));
        }

        return (synth(404, "Not Found"));
      }
  
  cmdfileConfig: |
    vcl.load vcl_tenant1 /etc/varnish/tenant1.vcl
    vcl.label label_tenant1 vcl_tenant1
    vcl.load vcl_main /etc/varnish/main.vcl
    vcl.use vcl_main

  extraVolumes:
    - name: varnish-vcl-tenant1
      configMap:
        name: varnish-vcl-tenant1

  extraVolumeMounts:
    - name: varnish-vcl-tenant1
      mountPath: /etc/varnish/tenant1.vcl
      subPath: tenant1.vcl
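
Once the Pods are running, you can verify that the cmdfile was executed as expected. As a sketch, assuming a Pod named varnish-enterprise-0 (adjust to your actual Pod name):

$ kubectl exec --namespace varnish varnish-enterprise-0 -- varnishadm vcl.list

The output should list vcl_tenant1 with its label_tenant1 label, and vcl_main as the active VCL.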

Varnish Helm Chart before v1.2.0

Step 1: Create a main ConfigMap

As Varnish Helm Chart v1.1.0 loads default.vcl before the cmdfile, the main VCL must not be named default.vcl, unlike in the single-VCL scenario. In this example, we’ll use main.vcl throughout:

apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-vcl
  namespace: varnish
data:
  cmds.cli: |
    vcl.load vcl_tenant1 /etc/varnish/tenant1.vcl
    vcl.label label_tenant1 vcl_tenant1
    vcl.load vcl_main /etc/varnish/main.vcl
    vcl.use vcl_main
  main.vcl: |
    vcl 4.1;

    import std;

    backend default {
      .host = "127.0.0.1";
      .port = "8090";
    }

    sub vcl_recv {
      set req.http.host = std.tolower(req.http.host);

      if (req.http.host ~ "(^|\.)example\.com(\:[0-9]+)?$") {
        return (vcl(label_tenant1));
      }

      return (synth(404, "Not Found"));
    }

Step 2: Create a tenant ConfigMap

For each tenant, create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish-vcl-tenant1
  namespace: varnish
data:
  tenant1.vcl: |
    vcl 4.1;

    backend default {
      .host = "www.example.com";
      .port = "80";
    }

    sub vcl_backend_fetch {
      set bereq.http.Host = "www.example.com";
    }

Step 3: Configure Varnish Enterprise

Unlike the single-VCL scenario, in the multi-tenant case vclConfig must be present, and the ConfigMaps must be mounted at the paths referenced in the cmdfile. In this example, we’re mounting the cmdfile at /etc/varnish/cmds.cli.

server:
  vclConfig: |
    vcl 4.1;

    backend default {
      .host = "127.0.0.1";
      .port = "8090";
    }

  extraVolumes:
    - name: varnish-vcl
      configMap:
        name: varnish-vcl
    - name: varnish-vcl-tenant1
      configMap:
        name: varnish-vcl-tenant1

  extraVolumeMounts:
    - name: varnish-vcl
      mountPath: /etc/varnish/main.vcl
      subPath: main.vcl
    - name: varnish-vcl
      mountPath: /etc/varnish/cmds.cli
      subPath: cmds.cli
    - name: varnish-vcl-tenant1
      mountPath: /etc/varnish/tenant1.vcl
      subPath: tenant1.vcl

  extraArgs:
    - "-I /etc/varnish/cmds.cli"

Post Update

Kubernetes doesn’t automatically restart Pods when a ConfigMap changes, so run kubectl rollout restart after updating a ConfigMap for the changes to take effect.

$ kubectl rollout restart statefulset/varnish-enterprise  # for StatefulSet
$ kubectl rollout restart deployment/varnish-enterprise   # for Deployment
$ kubectl rollout restart daemonset/varnish-enterprise    # for DaemonSet
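
To wait for the restart to finish before testing, kubectl rollout status can be used against the same workload, for example:

$ kubectl rollout status statefulset/varnish-enterprise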