Varnish Helm Chart

Setting up Varnish Enterprise for Varnish Controller Router

Introduction

As Varnish Controller routes traffic using either an HTTP redirect or a DNS-based redirect, each individual Varnish Enterprise instance must be directly reachable from the public internet. This sort of setup can be tricky under Kubernetes, as Kubernetes usually does not expose each Pod individually to the public network, but rather exposes a group of Pods behind a Service.

There are multiple ways to assign a public IP address directly to a Pod, depending on the CNI plugin and cloud provider in use.

One requirement of such a setup is that it must expose the publicly accessible address through Pod metadata (e.g. status.podIP). How to set up a Public IP Pod is beyond the scope of this article, as these setups are highly specific to how each Kubernetes cluster is provisioned.
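For reference, a publicly routable Pod IP can be surfaced to a container through the Kubernetes downward API. The snippet below is a generic Kubernetes sketch, not specific to this Helm Chart; the Pod, container, and image names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: varnish-example  # illustrative name
spec:
  containers:
    - name: varnish
      image: example/varnish-enterprise  # illustrative image
      env:
        # Inject the Pod IP from Pod metadata into the container environment
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP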

If none of the Public IP Pod options are available, there are two alternatives: run Varnish Enterprise in a hostNetwork setup, or use a single-replica deployment.

Using hostNetwork

Using hostNetwork allows Varnish Enterprise to be exposed directly to the internet through the IP address of the node. This lets us skip the Kubernetes service layer and expose each individual Pod directly. The main caveats to this approach are:

  • Only one Varnish Enterprise instance can run per node
  • The Varnish Enterprise port must be available on the node it runs on

The default server.affinity value of the Varnish Enterprise Helm Chart already enforces the one-instance-per-node requirement. The cluster administrator only needs to make sure the Varnish Enterprise port (default: 6081) is available on the cluster nodes.
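For reference, this guarantee typically comes from a required podAntiAffinity rule of the following shape. This is an illustrative sketch only, and the label selector is an assumption; consult the chart's values.yaml for the actual default:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    # Never schedule two Varnish Enterprise Pods onto the same node
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: varnish-enterprise  # assumed label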

To configure Varnish Enterprise Helm Chart with host networking, configure server.hostNetwork:

server:
  hostNetwork: true

As Varnish Enterprise Helm Chart needs to access other services in Kubernetes (such as varnish-controller-agent), hostNetwork is pre-configured with dnsPolicy: ClusterFirstWithHostNet to make sure service DNS is always resolvable from within the Pod. In this setup, server.baseUrl is automatically configured.
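In terms of the rendered Pod spec, this corresponds to the following standard Kubernetes fields:

# Pod spec fields set when server.hostNetwork is enabled
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet  # keeps cluster-internal service DNS resolvable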

Once deployed, Varnish Enterprise can be accessed through the IP address of the node and the configured port. To configure the port, set server.http.port, server.http.hostPort, server.tls.port, and server.tls.hostPort accordingly:

server:
  hostNetwork: true

  http:
    port: 80
    hostPort: 80

  # If TLS is also used:
  tls:
    port: 443
    hostPort: 443

In some cluster configurations, you may also need to add the NET_BIND_SERVICE capability to bind to ports lower than 1024. This can be done by configuring server.securityContext.capabilities.add:

server:
  # ...

  securityContext:
    capabilities:
      add:
        - NET_BIND_SERVICE
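Once deployed, a quick check from outside the cluster confirms that the node serves traffic directly. The node IP and Host header below are placeholders to replace with values from your environment:

# Replace <NODE_IP> with the address of a node running Varnish Enterprise
curl -sv -H "Host: www.example.com" http://<NODE_IP>:80/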

Using a single replica

Another approach to preparing Varnish Enterprise for Varnish Controller Router is to deploy Varnish Enterprise as a single replica with multiple Helm releases. This allows each instance of Varnish Enterprise to be exposed to the internet through Kubernetes services and does not require a port to be available on the node. The main caveats to this approach are:

  • Inability to use auto-scaling
  • Varnish Enterprise may be bottlenecked by Kubernetes services

To configure Varnish Enterprise Helm Chart with a single replica behind a LoadBalancer service, configure server.replicas and server.service.type:

server:
  replicas: 1

  service:
    type: LoadBalancer

This setup also requires a means to refer to an individual Varnish Enterprise instance through a publicly stable name, regardless of whether HTTP-based routing or DNS-based routing is used. This name needs to be configured as a Base URL through server.baseUrl: due to the decoupling of Services and Pods in Kubernetes, Varnish Enterprise cannot discover its publicly accessible IP address on its own.

Under a single-replica deployment, this can be achieved either with a pre-allocated IP address through a LoadBalancer service, or by utilizing an external DNS provider to assign each instance a unique DNS name.

Using LoadBalancer with a pre-allocated IP address

In the following example, we will use a LoadBalancer service with a pre-allocated IP address to expose an individual Varnish Enterprise instance. Each instance will have its own values-N.yaml override, with a common values.yaml.

Under a common values.yaml, configure server.replicas and server.service:

server:
  replicas: 1

  service:
    type: LoadBalancer

Under each individual values-N.yaml, configure instance-specific values:

server:
  baseUrl: "http://10.0.121.1"  # replace with the value of service.loadBalancerIP

  service:
    loadBalancerIP: 10.0.121.1  # replace with a pre-allocated public IP address

Once values.yaml and values-N.yaml are configured, deploy each individual instance as separate releases:

helm install -f values.yaml -f values-1.yaml varnish-enterprise-1 varnish/varnish-enterprise
helm install -f values.yaml -f values-2.yaml varnish-enterprise-2 varnish/varnish-enterprise
# ...
helm install -f values.yaml -f values-N.yaml varnish-enterprise-N varnish/varnish-enterprise
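After deployment, verify that each Service received its pre-allocated external IP; the EXTERNAL-IP column should match the loadBalancerIP in each values-N.yaml (service names depend on the release names):

# The EXTERNAL-IP column should match the loadBalancerIP from each values-N.yaml
kubectl get svc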

Using ExternalDNS

If a pre-allocated IP address for LoadBalancer is not available, it is possible to use ExternalDNS to provide a publicly stable name through DNS and an external DNS provider. Unlike the DNS support in Varnish Controller Router, ExternalDNS publishes the external IP address of a Service to an external DNS provider, which Varnish Controller Router can then use to resolve the address of a Varnish Enterprise instance.

Please refer to the ExternalDNS Helm Chart for instructions on how to install ExternalDNS. Make sure ExternalDNS is configured to provide DNS for Service resources:

# in values.yaml for external-dns
sources:
  - ingress
  - service

Similar to the “Using LoadBalancer with a pre-allocated IP address” section, this setup also uses a LoadBalancer service, but instead of a pre-allocated IP address, it relies on an external domain configured through ExternalDNS. Each instance will have its own values-N.yaml override, with a common values.yaml.

Under a common values.yaml, configure server.replicas and server.service:

server:
  replicas: 1

  service:
    type: LoadBalancer

Under each individual values-N.yaml, configure instance-specific values with a service annotation for ExternalDNS to set up a DNS record for it:

server:
  baseUrl: "http://varnish-N.example.com"

  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "varnish-N.example.com"

Once values.yaml and values-N.yaml are configured, deploy each individual instance as separate releases:

helm install -f values.yaml -f values-1.yaml varnish-enterprise-1 varnish/varnish-enterprise
helm install -f values.yaml -f values-2.yaml varnish-enterprise-2 varnish/varnish-enterprise
# ...
helm install -f values.yaml -f values-N.yaml varnish-enterprise-N varnish/varnish-enterprise
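Once ExternalDNS has synchronized the records with the DNS provider (propagation can take a few minutes), verify that each hostname resolves to the external IP of its LoadBalancer service:

# Each hostname should resolve to the EXTERNAL-IP of its corresponding service
dig +short varnish-1.example.com
dig +short varnish-2.example.com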