Deploy Varnish and Varnish High Availability in a multilocation environment Tutorial


Introduction

The purpose of this tutorial is to describe how to set up a multilocation Varnish Cache Plus (VCP) and Varnish High Availability (VHA) environment. It points to each component's installation documentation, describes how VHA works in this setup, and covers general considerations for such an environment.

When Varnish receives a new request from a client and caches the object, VHA checks whether the object exists in the cache on the other Varnish hosts. If it does not, VHA populates the other caches. In a multilocation environment, VHA is clever enough to replicate to only one instance in each of the other locations; each of those locations then takes care of replication locally.
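The fan-out rule described above can be sketched in shell. This is only an illustration of the selection logic, not VHA's actual implementation; the cluster and node names are hypothetical examples:

```shell
# Sketch of the fan-out rule: replicate to every local peer, but to
# only ONE node in each remote cluster (that node then fans out locally).
ORIGIN_CLUSTER="Europe"
targets=$(printf '%s\n' \
  "Europe eu-2" \
  "Asia as-1" \
  "North_America usa-1" \
  "North_America usa-2" | \
awk -v here="$ORIGIN_CLUSTER" '{
  if ($1 == here) print $2          # local peer: always replicate
  else if (!seen[$1]++) print $2    # remote cluster: first node only
}')
echo "$targets"
```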

This tutorial is written for CentOS 7, but packages and prebuilt Varnish images are also available for Ubuntu.

Prerequisites

In order to complete this guide, you will need the following:

  • A fully functional environment consisting of two or more Varnish instances in two or more locations.
  • A Varnish Plus license, trial license or prebuilt Varnish images from one of the cloud providers providing our software.
  • Optional: If you want to terminate HTTPS in front of Varnish, you can use Hitch. Hitch is documented here: Hitch and Letsencrypt tutorial

Prebuilt Varnish images are available from the following providers’ marketplaces (both Ubuntu and Red Hat):

Further reading:

Step 1 - Install VCP and VHA

Note: if you chose the cloud option, skip this section about software repositories and installation.

Get access

First you need access to the repositories. For this, contact our sales department; you will get access to a free trial: https://info.varnish-software.com/varnish-plus-trial. Information about how to set up the repository configuration will be provided to you.

When you have gained access to the repository and configured it, continue with the steps below.

Refresh the package metadata and install VCP and VHA

sudo yum clean all
# install Varnish Cache Plus
sudo yum install varnish-plus
# install Varnish High Availability:
sudo yum install varnish-plus-ha
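After installation, you can sanity-check that the main binaries landed on the PATH. This is only a sketch; it reports one line per expected program regardless of whether it is found:

```shell
# Sanity check (sketch): report which of the expected VCP/VHA binaries
# are available on this host.
report=$(for bin in varnishd vha-agent vha-generate-vcl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: installed"
  else
    echo "$bin: not found"
  fi
done)
echo "$report"
```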

Further reading:

Step 2 - Configure VCP

Generate vha.vcl

The program vha-generate-vcl is used to generate the VHA-specific VCL, which will be saved as vha.vcl:

vha-generate-vcl --token TOKEN > /etc/varnish/vha.vcl

where TOKEN is the secret token you will also use for DAEMON_OPTS when configuring VHA in Step 3.

Finally, you need to include vha.vcl in your own VCL (below “vcl 4.0”):

include "vha.vcl";
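For reference, a minimal default.vcl might then look like this (the backend address is a placeholder):

```vcl
vcl 4.0;

# Pull in the generated VHA configuration.
include "vha.vcl";

backend default {
	.host = "192.0.2.10"; # placeholder backend address
	.port = "8080";
}
```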

Enable Backend SSL

This is built into VCP and is enabled in the VCL like this:

backend default {
	.host = "backend.example.com";
	.port = "https"; # Defaults to "https" when SSL is enabled
	.ssl = 1; # Turn on SSL support
	.ssl_sni = 1; # Use SNI extension
	.ssl_verify_peer = 1; # Verify the peer's certificate chain
}

Further reading:

Step 3 - Configure VHA

The first requirement is to describe the Varnish nodes that will need to be replicated. This is done in /etc/varnish/nodes.conf, with every line specifying the name and address of the node.

Clusters are declared as sections; the nodes inside a section belong to that cluster. Note that if you declare at least one cluster, you MUST place all nodes inside a section.

An example file with cluster could be:

# international set-up
[Europe]
eu-1 = http://192.168.0.1:8080
eu-2 = http://192.168.0.2:8080

[Asia]
as-1 = https://192.168.1.1:443

[North_America]
usa-1 = 192.168.2.1
usa-2 = 192.168.2.2
usa-3 = 192.168.2.3
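Before deploying nodes.conf to every host, it can be useful to summarize it as a quick sanity check. The following shell sketch (awk-based, with a copy of the example file above inlined) prints each cluster and its node count:

```shell
# Sketch: count nodes per cluster in a nodes.conf-style file.
# The file below mirrors part of the example above.
cat > /tmp/nodes.conf <<'EOF'
[Europe]
eu-1 = http://192.168.0.1:8080
eu-2 = http://192.168.0.2:8080
[Asia]
as-1 = https://192.168.1.1:443
EOF
summary=$(awk -F'[][]' '/^\[/ {c=$2; next} /=/ {n[c]++} END {for (c in n) print c, n[c]}' /tmp/nodes.conf | sort)
echo "$summary"
```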

The parameters file holds two variables of importance that must be changed: ENABLE and DAEMON_OPTS. This file is located at:

/etc/varnish/vha-agent.params

ENABLE has to be set to 1 to allow vha-agent to be started.

Make the following changes to DAEMON_OPTS:

NAME needs to be replaced with the hostname of the current node (the same name as in nodes.conf). TOKEN has to be replaced with a secret string that is identical on all nodes.
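The exact flags inside DAEMON_OPTS are documented in the VHA manual; the sketch below only shows one way to produce the values you would substitute for NAME and TOKEN. The node name "eu-1" is a hypothetical example:

```shell
# Sketch: produce values to substitute for NAME and TOKEN.
# NODE_NAME must match this host's entry in nodes.conf ("eu-1" is just
# an example). The token must be identical on every node, so generate
# it once and copy it to all hosts.
NODE_NAME="eu-1"
SECRET_TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "NAME=$NODE_NAME TOKEN=$SECRET_TOKEN"
```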

Further reading:

Step 4 - Restart the services

Now that the configuration is complete, restart VCP and VHA.

# Restart Varnish
service varnish restart
# Restart VHA:
service vha-agent restart

Step 5 - Check that replication works

You can check that requests are actually transiting through a Varnish instance using varnishlog, filtering with the -q argument:

varnishlog -q "ReqHeader ~ x-vha"

The different headers are:

  • x-vha-origin: contains the address, name and cluster of the VHA agent that initiated the replication request. The address is used with vmod_goto to set the backend to that originating node.
  • x-vha-done: lists all the clusters that have been/will be processed, to avoid replicating twice to the same cluster. For cross-cluster replication, the current cluster is suffixed with "*", a sign that local replication is allowed for this cluster.
  • x-vha-fetch: is only used by the VCL part, and denotes that the request is in its Varnish-to-Varnish phase (as opposed to VHA-to-Varnish). This allows Varnish to disable ESI processing and ensures that ESI fragments are replicated individually.
  • x-vha-token: added by VHA if the -T option is specified. It contains a secret token that will be verified by the receiving Varnish to make sure the request can be trusted.
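To illustrate the x-vha-done semantics, here is a small shell sketch. The comma-separated value format is an assumption for illustration only; the "*" suffix marks the current cluster, as described above:

```shell
# Sketch: interpret an x-vha-done-style value. The comma-separated
# format is assumed for illustration; "*" marks the current cluster.
DONE_HEADER="Europe,Asia*"
interpretation=$(echo "$DONE_HEADER" | tr ',' '\n' | while read -r cluster; do
  case "$cluster" in
    *\*) echo "${cluster%\*}: current cluster, local replication allowed" ;;
    *)   echo "$cluster: already processed" ;;
  esac
done)
echo "$interpretation"
```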

Another way to monitor VHA is to have a look at /var/lib/vha-agent/vha-status.

Further reading:

Further considerations and reading

Autodiscovery:

  • We now have a solution for autodiscovering VHA nodes. Read more about Varnish Discovery here: Varnish Discovery

Monitoring:

  • The Varnish Administration Console is a great tool for monitoring your Varnish instances. VAC Documentation
  • Varnish Custom Statistics gives you granular and detailed cache and performance metrics on the contents of your cache and its associated traffic. VCS Documentation