Once Varnish is installed, you will need to configure the varnishd
program using a set of options and runtime parameters. When Varnish
is installed via packages, in the cloud, or on Docker, conservative
default values are set for you.
Chances are that these defaults aren’t to your liking and will need to be tweaked. An overview of all options and parameters can be found at http://varnish-cache.org/docs/6.0/reference/varnishd.html.
In this subsection, we’ll talk about common parameters and how you can modify their values.
When Varnish is installed via packages, or in the cloud, the
systemd service manager will be used to run varnishd and to provide
configuration options.
The configuration for the Varnish service can be found in
/lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
What you can see is that Varnish is listening on port 6081 for
incoming connections, that the /etc/varnish/default.vcl VCL file is
loaded, and that 256 MB of memory is allocated for object storage.
The unit files in /lib/systemd/system/ are not meant to be edited
directly. Instead, systemd allows you to override them by creating
corresponding files in /etc/systemd/system/.
If you want to modify some of these settings, you can run the following command:
sudo systemctl edit varnish
Here’s an example where we set the object storage allocation to 512 MB instead of the default 256 MB:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,512m
Please note that you need to explicitly clear
ExecStart before setting it again, as it is an additive setting.
This will create /etc/systemd/system/varnish.service.d/override.conf,
which will not interfere with package upgrades.
To view the unit file including the override:
$ sudo systemctl cat varnish
# /etc/systemd/system/varnish.service
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,512m
Restart Varnish to make changes take effect:
sudo systemctl restart varnish
The varnishd configuration for our official Docker container doesn’t
use systemd. It is Docker that runs the varnishd process in the
foreground of the container.
The Dockerfile uses an entrypoint file to define how Varnish should run. This is what it looks like:
varnishd \
-F \
-f /etc/varnish/default.vcl \
-a http=:80 \
-a proxy=:8443,PROXY \
-p feature=+http2 \
-s malloc,$VARNISH_SIZE \
"$@"
- The `varnishd` program is started in the foreground thanks to the `-F`
option.
- `varnishd` will listen for incoming connections on port `80` for
regular *HTTP*, and this listening port is named `http`.
- `varnishd` will listen for incoming connections on port `8443` for
*HTTP* using the *PROXY protocol*, and this listening port is named
`proxy`.
- `HTTP/2` is supported thanks to the `-p feature=+http2` parameter.
- `varnishd` will allocate a fixed amount of memory to object storage.
The size is defined by the `$VARNISH_SIZE` environment variable, which
defaults to `100M`.
- Any additional runtime parameter that is added in the `docker run`
command will be attached to `varnishd`, thanks to `"$@"`.
The minimal command required to run this Docker container is the following:
docker run -p 80:80 varnish
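Assuming the container started successfully, a quick way to verify that Varnish is answering is to send it a request from the host:
curl -I http://localhost/
Even if no backend is reachable yet, the response headers (such as Via) will show that Varnish handled the request.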
Let’s say you want to override the default.vcl file, name the
container varnish, set the cache size to 1G, make the default_ttl
an hour, and reduce the ban_lurker_age to ten seconds. This is the
command you’ll use for that:
docker run --name varnish -p 80:80 \
-v /path/to/default.vcl:/etc/varnish/default.vcl:ro \
-e VARNISH_SIZE=1G \
varnish \
-p default_ttl=3600 \
-p ban_lurker_age=10
Certain aspects of the configuration are handled by Docker:
- The `-p` parameter allows you to forward the exposed ports of the
container to ports on your host system.
- The `-v` parameter allows you to perform a bind mount and make a
local VCL file available in the container.
- The `-e` parameter allows you to set an environment variable. In
this case the `VARNISH_SIZE` variable is set to `1G`.
Thanks to `"$@"` in the entrypoint file, all additional positional
arguments will be attached to the varnishd process. This means that
you can add any supported Varnish runtime parameter to
docker run, and it will be passed on to varnishd.
In this case we added -p default_ttl=3600 and -p ban_lurker_age=10,
which will translate into varnishd runtime parameters. This provides
enormous flexibility and doesn’t require the creation of custom images.
We’ve featured the -a option a number of times, but there is still a
lot to be said about the listening address option in Varnish. If
-a is omitted, varnishd will listen on port 80 on all interfaces.
Here’s the syntax for -a:
-a <[name=][address][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]>
- The `name=` field allows you to name your listening addresses.
- The `address` part allows you to define an IPv4 address, an IPv6
address, or the path to a Unix domain socket (UDS).
- The `:port` part allows you to set the port on which this address is
supposed to listen.
- The `PROTO` field defines the protocol that is used; by default this
is HTTP, but it can also be set to PROXY to support the PROXY
protocol.
- The `user`, `group`, and `mode` fields are used to control and define
permissions on the socket file.
Multiple -a listening addresses can be used. If you don't name them,
Varnish will use names like a0, a1, a2, etc.
Let’s throw in an example configuration that uses (nearly) all of the syntax:
varnishd -a uds=/var/run/varnish.sock,PROXY,user=varnish,group=varnish,mode=660 \
-a http=:80 \
-a proxy=localhost:8443,PROXY
Let’s break this one down:
There is a listening address named uds that listens for incoming
requests over a Unix domain socket. The socket file is
/var/run/varnish.sock and is accessible to the varnish user and
varnish group. Because of mode=660, the varnish user has read and
write access, as do all users in the varnish group. All other users
have no access to the socket file. The protocol that is used for
communication over this UDS is the PROXY protocol.
There is also a listening address named http, which accepts regular
HTTP connections for all interfaces on port 80.
And finally, there’s a listening address named proxy that only accepts
connections on the localhost loopback interface over port 8443. And
again, the PROXY protocol is used.
This setup is often used when Hitch is installed on the server to terminate TLS. Regular HTTP connections directly arrive on Varnish. But Hitch takes care of HTTPS requests and forwards the decrypted data to Varnish using HTTP over the PROXY protocol.
Hitch can either choose to connect to Varnish using a Unix domain socket (UDS), or via
localhost over TCP/IP on port 8443.
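To sketch what that looks like on the Hitch side, here's a minimal hitch.conf that forwards decrypted traffic to the proxy listener from the example above. The file location and certificate path are assumptions; adjust them to your setup:
frontend = "[*]:443"
backend = "[127.0.0.1]:8443"
write-proxy-v2 = on
pem-file = "/etc/hitch/certs/example.pem"
Pointing backend at the Unix domain socket instead would avoid the TCP round trip on the loopback interface.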
The -s option defines how varnishd will store its objects. If the
option is omitted, the malloc storage backend will be used, which
stores objects in memory. The default storage size is 100M.
A pretty straightforward example is one where we assign 1G of memory to Varnish for object storage:
varnishd -s malloc,1G
You can also name your storage backends, which makes it easier to
identify them in VCL or in varnishstat. Here’s how you do that:
varnishd -s memory=malloc,1G
If you don’t name your storage backends, Varnish will use names like
s0, s1, s2, etc.
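Named storage backends can then be addressed in VCL through beresp.storage. Here's a hedged sketch that stores large objects in the backend we named memory earlier; the 1 MB threshold is purely illustrative:
vcl 4.1;
import std;

sub vcl_backend_response {
    # Send objects larger than 1 MB to the stevedore named "memory"
    if (std.integer(beresp.http.Content-Length, 0) > 1048576) {
        set beresp.storage = storage.memory;
    }
}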
If an object is larger than the memory size, you’ll see the following
errors appear in the varnishlog output:
ExpKill LRU_Fail
FetchError Could not get storage
Varnish notices that there is not enough space available to store the object, so it starts to remove the least recently used (LRU) objects. This action fails because there is not enough space in the cache to free up.
Additionally, the full content cannot be fetched from the backend and
stored in cache, hence the Could not get storage error.
In the end, the object will only be partially served.
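To watch for these errors specifically, you can filter the log down to the relevant tags; something along these lines should work:
varnishlog -g raw -i ExpKill -i FetchError
The -g raw grouping is used here because ExpKill records are emitted by the expiry thread and don't belong to an individual request.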
It might sound surprising, but there’s also a secondary storage backend in use. It’s called transient storage and holds short-lived objects.
Varnish considers an object short-lived when its
TTL + grace + keep is less than the shortlived runtime parameter. By
default this is ten seconds.
Transient storage is also used for temporary objects. An example is uncacheable content that is held there until it is consumed by the client. This is to avoid letting a slow client occupy a backend for too long.
By default, transient storage uses an unlimited malloc backend.
This is something that should be kept in mind when sizing your Varnish
server.
However, transient storage can be limited by adding a storage
backend that is named Transient.
Here’s an example:
varnishd -s Transient=malloc,500M
In this example, we’re limiting transient storage to 500M.
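You can monitor transient storage usage with varnishstat. Assuming the default malloc-based transient storage, the relevant counters live under SMA.Transient:
varnishstat -1 -f SMA.Transient.g_bytes -f SMA.Transient.c_fail
A rising c_fail counter indicates allocation failures, which is a sign that the limit is too tight.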
Limiting transient storage can negatively affect the delivery of short-lived objects. If an object is bigger than the transient storage size, it will only be partially delivered when streaming is enabled, as it doesn't fully fit. When streaming is disabled, this will lead to an HTTP 503 error.
There is also file storage available in Varnish. This type of object storage will store objects in memory backed by a file.
This is initiated as specified below:
varnishd -s file,/path/to/storage,100G
In this case, objects are stored in /path/to/storage. This file is
100G in size.
Although disk storage is used for this kind of object storage, the
file stevedore is not persistent. A restart will empty the entire
cache.
The performance of this stevedore also varies quite a lot, as you
depend on the write speed of your disk. As your varnishd process runs,
you will incur an increasing amount of fragmentation on disk, which will
further reduce the performance of the cache.
Our advice is to not use the
file stevedore at a large scale, and to use the MSE stevedore instead.
The Massive Storage Engine (MSE) is a Varnish Enterprise stevedore that combines memory and disk storage to offer fast and persistent storage.
We will talk about MSE in detail in one of the next sections. Let’s limit this discussion to configuration.
Here’s how you set up MSE:
varnishd -s mse,/var/lib/mse/mse.conf
As you can see, the mse stevedore can refer to a configuration file
that holds more details about the MSE configuration.
Here’s what such a configuration file can look like:
env: {
id = "mse";
memcache_size = "5G";
books = ( {
id = "book";
directory = "/var/lib/mse/book";
database_size = "2G";
stores = ( {
id = "store";
filename = "/var/lib/mse/store.dat";
size = "100G";
} );
} );
};
This configuration will allocate 5G of memory for object storage.
There is also 100G of persistent storage available, which is located
in /var/lib/mse/store.dat.
All metadata for persisted objects is stored in /var/lib/mse/book,
which is 2G in size.
Using vmod_mse, you can let VCL decide where objects should be
persisted.
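To give you a taste, here's a sketch based on the vmod_mse interface; treat it as illustrative rather than canonical. It keeps short-lived objects in memory only instead of persisting them:
vcl 4.1;
import mse;

sub vcl_backend_response {
    if (beresp.ttl < 120s) {
        # "none" keeps the object in memory only; it won't hit the books or stores
        mse.set_stores("none");
    }
}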
MSE is highly optimized and doesn’t suffer from the same delays and
fragmentation as the file stevedore.
If you set memcache_size = "auto" in your MSE configuration, the
memory governor will be activated. This will dynamically size your
cache, based on the memory that other parts of Varnish need.
The memory governor will also be activated when you haven’t specified a configuration file for MSE:
varnishd -s mse
The memory governor will not limit the size of the cache, but the
total size of the varnishd process. The total size is determined by
the memory_target runtime parameter, which is set to 80% by default.
The memory_target can also be set to an absolute value. This is very
convenient as it allows you to bound the memory of Varnish as a whole
and not worry about unexpected overhead.
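For example, assuming a Varnish Enterprise setup, the following caps the entire varnishd process at 12 GB and lets the memory governor size the cache within that budget:
varnishd -s mse -p memory_target=12G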
MSE is one of the most powerful features of Varnish Enterprise, but as mentioned, we’ll do an MSE deep-dive later in this chapter.
Although VCL is the most powerful feature of Varnish, you can still decide to stick with the built-in VCL.
In that case, Varnish doesn’t know what the backend host and port are.
The -b option allows you to set this, but it is mutually exclusive
with the -f option that is used to set the location of the VCL file.
Here’s how you use -b:
varnishd -b localhost:8080
This example lets Varnish connect to a backend that is hosted on the
same machine, but on port 8080.
You can also use a UDS to connect, as illustrated below:
varnishd -b /path/to/backend.sock
The Varnish CLI, which is accessible via varnishadm, or via a socket
connection in your application code, has a set of parameters that can be
configured.
The -T option is the primary varnishd option to open up access to
the CLI. This option defines the listening address for CLI
requests.
Authentication to the varnishd CLI port is protected with a
challenge and a secret key. The location of the secret key is defined by
the -S parameter.
Here’s an all-in-one example containing both options:
varnishd -T :6082 -S /etc/varnish/secret
Anyone can create a socket connection to the Varnish server on port
6082. The secret that is in /etc/varnish/secret will be required to
satisfy the challenge that the varnishd CLI imposes.
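With that configuration in place, a varnishadm session pointed at the same port and secret file would look like this:
varnishadm -T localhost:6082 -S /etc/varnish/secret param.show default_ttl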
However, in most cases, you don’t actually need to specify -S or -T
because if they are not given, varnishd will just generate them
randomly. But, in that case, how can varnishadm know about these
parameters? Very simply, if varnishadm isn’t given a -S/-T
combination, it’ll look at the varnishd workdir to figure those values
out.
The workdir is the value of the -n parameter, which defaults to
/var/lib/varnish/$HOSTNAME. This is why, in the default case,
varnishadm needs no extra parameters to access the Varnish CLI. The
-S and -T parameters are mainly there to configure external access.
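For example, if varnishd was started with a custom workdir (the path below is just an illustration), you'd pass the same value to varnishadm:
varnishadm -n /var/lib/varnish/custom ping
A PONG reply confirms that the CLI connection works.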
Besides the basic varnishd startup options, there is a large
collection of runtime parameters that can be tuned.
The full list can be found at http://varnish-cache.org/docs/6.0/reference/varnishd.html#list-of-parameters.
It is impossible to list them all, but a very common example is enabling HTTP/2. Here’s how you do that:
varnishd -p feature=+http2
These are feature flags, but we can also assign values. For example, we
can redefine what are considered short-lived objects, by setting the
shortlived runtime parameter:
varnishd -p shortlived=15
This means that objects whose TTL (plus grace and keep) is less than 15 seconds are considered short-lived and will end up in transient storage.
By adding runtime parameters to the systemd
override.conf file for Varnish, you can persist the new values of these parameters. It is also possible to set them via varnishadm param.set at runtime, but those changes aren't persisted and will be lost upon the next restart.
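To make the difference concrete, this sets default_ttl immediately, but only until the next restart:
varnishadm param.set default_ttl 3600
To persist it, add the -p flag to the ExecStart line in your systemd override, as shown earlier:
ExecStart=
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m -p default_ttl=3600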