The Varnish CLI

The Varnish CLI is a command-line interface offered by varnishd to perform a range of management tasks. It has its own protocol, which is accessible via TCP/IP.

Varnish also ships with a varnishadm program that facilitates CLI access.

Tasks that can be performed via the CLI are:

  • Listing backends
  • Administratively setting backend health
  • Banning objects from the cache
  • Displaying the ban list
  • Showing and clearing panics
  • Showing, setting, and resetting runtime parameters
  • Displaying process ID information
  • Performing liveness checks against varnishd
  • Returning status information
  • Starting and stopping the Varnish child process
  • Managing VCL configurations

This is what the Varnish CLI looks like when called using the varnishadm program:

$ varnishadm
200
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,5.4.39-linuxkit,x86_64,-junix,-sdefault,-sdefault,-hcritbit
varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e

Type 'help' for command list.
Type 'quit' to close CLI session.

> CLI commands can be run inside the `varnishadm` shell, but they can
> also be appended as arguments to the `varnishadm` program.
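
For example, a liveness check or a status request can be issued as a varnishadm one-liner, without entering the interactive shell. Here’s a sketch of what that could look like; the timestamp in the ping response is illustrative:

$ varnishadm ping
PONG 1609851000 1.0

$ varnishadm status
Child in state running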

Backend commands

The backend.list command lists all available backends, and also provides health information, as you can see in the example below:

varnish> backend.list
200
Backend name                   Admin      Probe                Last updated
boot.default                   probe      Healthy             7/8 Tue, 05 Jan 2021 12:34:08 GMT
boot.static-eu                 probe      Healthy (no probe)   Tue, 05 Jan 2021 12:34:08 GMT
boot.static-us                 probe      Healthy             7/8 Tue, 05 Jan 2021 12:34:08 GMT

If you add a -p option to the command, you’ll start seeing more detailed information on the health probe:

varnish> backend.list -p
200
Backend name                   Admin      Probe                Last updated
boot.default                   probe      Healthy             8/8
  Current states  good:  8 threshold:  3 window:  8
  Average response time of good probes: 0.004571
  Oldest ================================================== Newest
  -----------------------------------------------------44444444444 Good IPv4
  -----------------------------------------------------XXXXXXXXXXX Good Xmit
  -----------------------------------------------------RRRRRRRRRRR Good Recv
  ---------------------------------------------------HHHHHHHHHHHHH Happy
 Tue, 05 Jan 2021 12:34:08 GMT
boot.static-eu                 probe      Healthy (no probe)   Tue, 05 Jan 2021 12:34:08 GMT
boot.static-us                 probe      Healthy             8/8
  Current states  good:  8 threshold:  3 window:  8
  Average response time of good probes: 0.004552
  Oldest ================================================== Newest
  -----------------------------------------------------44444444444 Good IPv4
  -----------------------------------------------------XXXXXXXXXXX Good Xmit
  -----------------------------------------------------RRRRRRRRRRR Good Recv
  ---------------------------------------------------HHHHHHHHHHHHH Happy
 Tue, 05 Jan 2021 12:34:08 GMT

If you have a large number of backends, listing detailed information for all of them can become unmanageable. You can narrow down the scope by supplying a backend pattern to the backend.list command.

The following example only lists backends that start with static. Evidently, the boot.static-eu and the boot.static-us backends will appear:

varnish> backend.list -p static*
200
Backend name                   Admin      Probe                Last updated
boot.static-eu                 probe      Healthy (no probe)   Tue, 05 Jan 2021 12:34:08 GMT
boot.static-us                 probe      Healthy             8/8
  Current states  good:  8 threshold:  3 window:  8
  Average response time of good probes: 0.004164
  Oldest ================================================== Newest
  -----------------------44444444444444444444444444444444444444444 Good IPv4
  -----------------------XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
  -----------------------RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
  ---------------------HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy
 Tue, 05 Jan 2021 12:34:08 GMT

We can use the backend.set_health command to override the health of one or more backends, based on a backend pattern.

For example, when downtime is expected for a group of backends, it makes sense to explicitly set them to unhealthy beforehand. If the backends are governed by a director, they will automatically be taken out of the director’s rotation, which is a more graceful approach to a planned outage.

Here’s an example where we set both static-eu and static-us to an unhealthy state:

varnish> backend.set_health static-* sick
200

When we list the backends, we can see that the Admin field no longer contains the probe value, but the sick value:

varnish> backend.list
200
Backend name                   Admin      Probe                Last updated
boot.default                   probe      Healthy             8/8 Tue, 05 Jan 2021 12:34:08 GMT
boot.static-eu                 sick       Healthy (no probe)   Tue, 05 Jan 2021 12:42:36 GMT
boot.static-us                 sick       Healthy             8/8 Tue, 05 Jan 2021 12:42:36 GMT

Let’s go ahead and set the health of the two backends to auto. This will undo our previous backend.set_health command, setting their health back to the value listed under the Probe field.

varnish> backend.set_health static-* auto
200

You can also force a backend to be considered healthy, as illustrated in the example below:

varnish> backend.set_health static-* healthy
200

Banning

We already talked about banning via the CLI in chapter 6. Please refer to that part of the book for more details.

As a quick reminder, here’s an example of a ban issued via the CLI:

varnish> ban "obj.http.Content-Type ~ ^image/"
200

And here’s a ban list example:

varnish> ban.list
200
Present bans:
1609850980.159475     0 -  obj.http.Content-Type ~ ^image/

Parameter management

The CLI has various commands to set the value of a parameter, list its value, and reset it to the default value.

Displaying parameters

The param.show command lists the values of the configurable runtime parameters inside Varnish.

When you run this command without additional options or arguments, you get a list of all parameters with their values.

Here’s an extract because the full list is a bit too long:

varnish> param.show
200
accept_filter                 -
acceptor_sleep_decay          0.9 (default)
acceptor_sleep_incr           0.000 [seconds] (default)
acceptor_sleep_max            0.050 [seconds] (default)
auto_restart                  on [bool] (default)
backend_idle_timeout          60.000 [seconds] (default)
backend_local_error_holddown  10.000 [seconds] (default)
backend_remote_error_holddown 0.250 [seconds] (default)
ban_cutoff                    0 [bans] (default)
ban_dups                      on [bool] (default)
ban_lurker_age                60.000 [seconds] (default)
ban_lurker_batch              1000 (default)
ban_lurker_holdoff            0.010 [seconds] (default)
ban_lurker_sleep              0.010 [seconds] (default)
between_bytes_timeout         60.000 [seconds] (default)
...

You can also get this output with a lot more context and meaning. Just add the -l option, as you can see in the extract below:

varnish> param.show -l
200
acceptor_sleep_decay
		Value is: 0.9 (default)
		Minimum is: 0
		Maximum is: 1

		If we run out of resources, such as file descriptors or worker
		threads, the acceptor will sleep between accepts.
		This parameter (multiplicatively) reduce the sleep duration for
		each successful accept. (ie: 0.9 = reduce by 10%)

		NB: We do not know yet if it is a good idea to change this
		parameter, or if the default value is even sensible. Caution is
		advised, and feedback is most welcome.

acceptor_sleep_incr
		Value is: 0.000 [seconds] (default)
		Minimum is: 0.000
		Maximum is: 1.000

		If we run out of resources, such as file descriptors or worker
		threads, the acceptor will sleep between accepts.
		This parameter control how much longer we sleep, each time we
		fail to accept a new connection.

		NB: We do not know yet if it is a good idea to change this
		parameter, or if the default value is even sensible. Caution is
		advised, and feedback is most welcome.
...

It is also possible to only list the parameters where the value was changed. To achieve this, just use the param.show changed command.

Here’s some example output:

varnish> param.show changed
200
feature                       +http2
shortlived                    5.000 [seconds]
thread_pool_max               7500 [threads]

In this case, we added the http2 feature flag, modified the timing for short-lived objects to five seconds, and set the maximum number of threads in a thread pool to 7500 threads.

You can also get the value of an individual parameter, as shown in the example below:

varnish> param.show shortlived
200
shortlived
		Value is: 5.000 [seconds]
		Default is: 10.000
		Minimum is: 0.000

		Objects created with (ttl+grace+keep) shorter than this are
		always put in transient storage.

It is even possible to list the output in JSON format by adding a -j option. Here’s an example where we display information about the default_ttl parameter in JSON format:

varnish> param.show -j default_ttl
200
[ 2, ["param.show", "-j", "default_ttl"], 1609857571.607,
  {
	"name": "default_ttl",
	"implemented": true,
	"value": 120.000,
	"units": "seconds",
	"default": "120.000",
	"minimum": "0.000",
	"description": "The TTL assigned to objects if neither the backend nor the VCL code assigns one.",
	"flags": [
	  "obj_sticky"
]
}
]

Setting parameter values

The param.set command assigns a new value to a parameter, which is quite convenient because it doesn’t require restarting the varnishd process.

The downside of setting parameters via the CLI is that the changes are not persisted. As soon as varnishd is restarted, the values that were assigned via -p options are used, and all other values are reset to their defaults.

The param.set command is great for temporary changes, or for changes where a varnishd restart is not desirable. If you want a parameter change to be persisted, just add the appropriate -p option to your varnishd startup script.
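
A sketch of what that could look like on the varnishd command line, assuming you want the default_ttl change to survive restarts:

varnishd -a :80 -f /etc/varnish/default.vcl -p default_ttl=60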

Here’s an example of a parameter change where we set the default_ttl parameter to one minute:

varnish> param.set default_ttl 60
200

But if you need to undo the change and want to reset the parameter to its default value, just run param.reset:

varnish> param.reset default_ttl
200
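
Parameters that hold a set of flags, such as the feature parameter we saw earlier in the param.show changed output, are modified by prefixing individual flag names with + or -. Here’s a sketch:

varnish> param.set feature +http2
200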

VCL management

Another important feature of the Varnish CLI is the VCL management capability. This is especially useful from a VCL deployment point of view.

You can load multiple VCL configurations, set an active one, and even assign labels so that inactive VCL configurations can be conditionally executed from your main VCL file.

VCL inspection

Commands like vcl.list and vcl.show can be used to list the available VCL configurations and to show the corresponding VCL code.

When you start Varnish, this is probably the output you’ll get:

varnish> vcl.list
200
active      auto/warm          0 boot

We have a single active VCL configuration, which is called boot. If we want to see the VCL code for this configuration, we run vcl.show boot, as illustrated below:

varnish> vcl.show boot
200
vcl 4.1;

backend default {
	.host="localhost";
	.port="8080";
}

Loading VCL

If you want multiple VCL configurations to be loaded, you can add one or more configurations by running the vcl.load command.

As you can see, the command requires a configuration name and a path to the VCL file:

varnish> vcl.load server1 /etc/varnish/server1.vcl
200
VCL compiled.

The vcl.load command will compile the code and bail out if an error was encountered. This is also an interesting way to check whether your VCL is syntactically correct.
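
A failed load could look something like this. This is a sketch that assumes a hypothetical /etc/varnish/broken.vcl that is missing its VCL version declaration:

varnish> vcl.load broken /etc/varnish/broken.vcl
106
Message from VCC-compiler:
VCL version declaration missing
Update your VCL to Version 4 syntax, and add
	vcl 4.1;
on the first line of the VCL files.
('/etc/varnish/broken.vcl' Line 1 Pos 1)
...

Running VCC-compiler failed, exited with 2
VCL compilation failed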

If you don’t want to depend on a VCL file, you can directly inject a quoted VCL string via vcl.inline. The quoting sometimes gets a bit tricky, but here’s a very simple example:

varnish> vcl.inline default << EOF
varnish> vcl 4.1;
varnish>
varnish> backend be {
varnish>     .host="localhost";
varnish>     .port="8080";
varnish> }
varnish> EOF
200
VCL compiled.

If we want the previously loaded VCL configuration to be active, just run the following command:

varnish> vcl.use server1
200
VCL 'server1' now active

Don’t forget that inactive VCL configurations still consume resources. If you no longer need older VCL configurations, it is advisable to remove them using the vcl.discard command, as the next example shows:

varnish> vcl.discard boot
200

VCL labels

VCL labels have two purposes.

The first is that they behave like symbolic links to actual VCL configurations and can be used to switch from one VCL configuration to another.

Here’s an example where we assign the my_label label to the my_configuration VCL configuration:

varnish> vcl.label my_label my_configuration
200

At this point my_label will be listed as such and can be used with other VCL commands:

varnish> vcl.list
200
active      auto/warm          0 my_configuration
available  label/warm          0 my_label -> my_configuration
varnish> vcl.use my_label
200
VCL 'my_label' now active
varnish> vcl.list
200
available   auto/warm          0 my_configuration
active     label/warm          0 my_label -> my_configuration

Multiple labels can point to the same VCL configuration, but a label cannot point to another label. This is useful for maintaining abstract VCL configurations: you could have one label called production and another called maintenance to easily switch from one to the other during an outage, without needing to know in detail which exact VCL configuration should be used for either scenario. You can update and roll back the underlying VCLs independently, separating VCL management from VCL selection.
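
A sketch of that scenario, assuming the underlying configurations were already loaded as prod_1 and maint_1:

varnish> vcl.label production prod_1
200
varnish> vcl.label maintenance maint_1
200
varnish> vcl.use maintenance
200
VCL 'maintenance' now active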

The second purpose of VCL labels is probably the most useful: the active VCL is allowed to switch to a different VCL in the vcl_recv subroutine. This allows you to maintain multiple concurrent VCL configurations independently, which greatly helps with virtual hosting when multiple applications need very different cache policies.

Imagine a situation where multiple VCL configurations are loaded, one for each web application that Varnish is caching:

varnish> vcl.load www_1 www.vcl
200
VCL compiled.
varnish> vcl.load api_1 api.vcl
200
VCL compiled.

As you can see, on top of the default configuration, we also have the www_1 and api_1 configurations.

We can label these configurations, as illustrated below:

varnish> vcl.label www www_1
200
varnish> vcl.label api api_1
200
varnish> vcl.label www_example_com www_1
200
varnish> vcl.label api_example_com api_1
200
  • The www_1 config has labels www and www_example_com
  • The api_1 config has labels api and api_example_com

From within our main VCL file, we’ll switch to the various labeled VCL configurations based on the Host header of the request.

Here’s the main VCL file that loads the labels:

vcl 4.1;
import std;

backend default none;

sub vcl_recv {
	if (req.http.Host == "www.example.com") {
		return(vcl(www));
	} elseif (req.http.Host == "api.example.com") {
		return(vcl(api));
	} else {
		return(synth(404));
	}
}
  • If a request is received containing the Host: www.example.com request header, the www label is used
  • If a request is received containing the Host: api.example.com request header, the api label is used

Each labeled VCL configuration has its own logic, and its own backends. This allows for multi-tenancy to some extent.

The vcl.list command then shows the labels, and how they are used:

varnish> vcl.list
200
available   auto/warm          0 www_1 (2 labels)
available   auto/warm          0 api_1 (2 labels)
available  label/warm          0 www -> www_1 (1 return(vcl))
available  label/warm          0 api -> api_1 (1 return(vcl))
active      auto/warm          0 default
available  label/warm          0 www_example_com -> www_1
available  label/warm          0 api_example_com -> api_1

The configurations themselves have a reference counter that keeps track of how many labels point to them. The labels point to the configuration they are associated with, and if any of these labels are used within a return(vcl()) statement, this is also mentioned. In this example, the regular web application lives alongside an HTTP API; they have different cache policies and can be updated independently:

varnish> vcl.load www_2 www.vcl
200
VCL compiled.
varnish> vcl.label www www_2
200
varnish> vcl.label www_example_com www_2
200
varnish> vcl.list
200
available   auto/cold          0 www_1
available   auto/warm          0 api_1 (2 labels)
available  label/warm          0 www -> www_2 (1 return(vcl))
available  label/warm          0 api -> api_1 (1 return(vcl))
active      auto/warm          0 default
available  label/warm          0 www_example_com -> www_2
available  label/warm          0 api_example_com -> api_1
available   auto/warm          0 www_2 (2 labels)

If the www_2 update turns out to be incorrect, rolling back is only a matter of labeling www_1 again, without disturbing the API.
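
In terms of CLI commands, that rollback is just a matter of re-pointing the labels, as this sketch shows:

varnish> vcl.label www www_1
200
varnish> vcl.label www_example_com www_1
200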

VCL temperature

VCL configurations consume resources, even when they are not active. If you deploy a new version of your VCL and keep the previous versions, the allocated resources for these VCL files will not be released immediately.

Varnish has a built-in system to cool down VCL configurations when they are no longer in use. Resources that were reserved by these VCLs are eventually released.

When a new VCL configuration is deployed, it becomes warm. This is done automatically, but the vcl.state command allows you to override the VCL temperature.

The VCL temperature can be set to one of the following values:

  • auto
  • warm
  • cold

The vcl.list command lists the various configurations, but also includes the temperature, and how it was set:

varnish> vcl.list
200

active      auto/warm          0 default
available   auto/cold          0 test

In this case, the default configuration, which is the active one, is in the auto/warm state. The test configuration is available, but no longer in use. It has become cold.

If we want to force the temperature, we can use the vcl.state command to warm up or cool down the configuration:

varnish> vcl.state test warm
200

In this example we explicitly set the state to warm, which is also reflected in the VCL list:

varnish> vcl.list
200

active      auto/warm          0 default
available   warm/warm          0 test

We can also set it back to auto:

varnish> vcl.state test auto
200
varnish> vcl.list
200

active      auto/warm          0 default
available   auto/warm          0 test

Configuring remote CLI access

If you’re planning on connecting to the Varnish CLI remotely, it makes sense to configure the remote CLI access options of varnishd.

The -T option sets the listening address and port for the CLI. The -S option defines the location of the secret file.

This secret file contains the secret key that is required to gain access to the CLI.

You probably want to see these two options in action, so here’s an example:

varnishd -a :80 -T localhost:6082 -S /etc/varnish/secret -f /etc/varnish/default.vcl

This is a pretty basic varnishd configuration where the CLI is only accessible locally using port 6082. The authentication protocol uses the contents of the /etc/varnish/secret file.
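
The secret file can contain anything that is hard to guess. One way to create it, sketched below, is to fill it with random data and restrict its permissions; the dd parameters are just an example:

$ dd if=/dev/urandom of=/etc/varnish/secret count=1 bs=512
$ chmod 600 /etc/varnish/secret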

The varnishadm command is capable of connecting to a remote CLI. The -T and -S options are also available for varnishadm.

Here’s an example of a remote ban using varnishadm:

varnishadm -S /etc/varnish/secret -T varnish.example.com:6082 ban "obj.http.x-url == /info"

The CLI protocol

The Varnish CLI has its own protocol, which is largely abstracted away when you use varnishadm. But if you want to integrate the Varnish CLI into your own application, you need to understand the protocol.

From your application, you’ll connect to the host and port that were configured using the -T parameter. In the example below this is localhost:6082 because the application happens to run on the same machine as Varnish.

For security reasons, access will be restricted based on the secret key that was set using the -S parameter.

Here’s an example where we connect to the CLI via telnet. The assumption is that /etc/varnish/secret contains my-big-secret as its value.

Here’s the output:

$ telnet localhost 6082
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
107 59
lgsefmatosfviyytnnrbwrvwngdkhkrn

Authentication required.

  • 107 is the status code that means authentication is required.
  • The next line contains lgsefmatosfviyytnnrbwrvwngdkhkrn, which is the challenge.

Based on this challenge and the secret from /etc/varnish/secret, the authentication string is composed.

You start by creating a string that contains the following parts:

  • The challenge
  • A newline (0x0a)
  • The secret
  • The challenge again
  • A newline (0x0a)

Because /etc/varnish/secret contains my-big-secret followed by a newline, this would be our string:

lgsefmatosfviyytnnrbwrvwngdkhkrn
my-big-secret
lgsefmatosfviyytnnrbwrvwngdkhkrn

This string then needs to be hashed with the SHA256 hashing algorithm, and the resulting digest should be returned in lowercase hex.

The end result would be the following authentication string:

b931c0995b200b83645a4e4e9bbb9061b2c80c2aaa878920d8b2da8612756f5c

The response to the challenge in the Varnish CLI would be auth <authentication string>. In our case, this is what happens:

auth b931c0995b200b83645a4e4e9bbb9061b2c80c2aaa878920d8b2da8612756f5c
200 277
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,5.4.39-linuxkit,x86_64,-junix,-sdefault,-sdefault,-hcritbit
varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e

Type 'help' for command list.
Type 'quit' to close CLI session.

Because status code 200 was returned, we know the authentication procedure was successful. We get the banner from the Varnish CLI, and we can start executing CLI commands.

For your convenience, here is a small shell script that creates the authentication string for you:

#!/bin/sh

set -e

# Read the secret file on standard input; the script exits here
# if the file cannot be opened.
exec </etc/varnish/secret

# The challenge must be passed as the first command-line argument.
if [ $# = 0 ]; then
	echo "Challenge not set, exiting" >&2
	exit 1
fi

# Concatenate challenge + newline + secret + challenge + newline,
# hash the result with SHA256, and keep only the hex digest.
(
	printf '%s\n' "$1"
	cat
	printf '%s\n' "$1"
) |
sha256sum |
awk '{print $1}'

The script redirects /etc/varnish/secret to its standard input, failing if the file doesn’t exist, and checks whether the challenge was passed as a command-line argument.

The authentication string is then built and piped to sha256sum to create the SHA256 digest. Finally, the output is sent to awk to extract the first field, which is the final authentication string.

Here’s how you would invoke the script: ./auth.sh <challenge>. And here’s the script in action:

$ ./auth.sh lgsefmatosfviyytnnrbwrvwngdkhkrn
b931c0995b200b83645a4e4e9bbb9061b2c80c2aaa878920d8b2da8612756f5c

And as expected, b931c0995b200b83645a4e4e9bbb9061b2c80c2aaa878920d8b2da8612756f5c is the output you can use to respond to the authentication challenge that was imposed by the Varnish CLI.
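
If you want to script the raw protocol end to end, the following bash sketch ties the pieces together. It assumes varnishd was started with -T localhost:6082 and -S /etc/varnish/secret, relies on bash’s /dev/tcp feature, and is purely illustrative: varnishadm already does all of this for you.

#!/usr/bin/env bash

set -e

# Open a TCP connection to the CLI on file descriptor 3.
exec 3<>/dev/tcp/localhost/6082

# The first response starts with "107 <length>", followed by the challenge.
read -r status length <&3
read -r challenge <&3

# Build challenge + newline + secret + challenge + newline and hash it.
auth=$({ printf '%s\n' "$challenge"
         cat /etc/varnish/secret
         printf '%s\n' "$challenge"; } | sha256sum | awk '{print $1}')

# Authenticate, run a command, and close the session.
printf 'auth %s\n' "$auth" >&3
printf 'ping\n' >&3
printf 'quit\n' >&3

# Print everything the CLI sent back, including the banner and the PONG.
cat <&3

The trailing lines of the initial 107 response will also show up in that final output; a real client would read exactly the number of body bytes announced on each status line instead.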

The CLI command file

We already mentioned the fact that changes through the Varnish CLI are not persisted. This means that a varnishd restart will undo your changes.

One of the solutions we suggested, especially for param.set commands, was to also add the customizations in your varnishd startup script via -p runtime parameters.

This can work for parameter tuning, but for other commands it doesn’t. Take for example the vcl.label command: if you depend on VCL labels, a varnishd restart can result in effective downtime.

To avoid any drama, varnishd has a -I option that points to a CLI command file. This contains CLI commands that are executed when varnishd is launched.

This way, you can ensure your VCL labels are correctly set, and the corresponding VCL files are loaded when you start or restart Varnish.

vcl.load s1 /etc/varnish/server1.vcl
vcl.load s2 /etc/varnish/server2.vcl
vcl.label server1 s1
vcl.label server2 s2

If these commands are stored inside /etc/varnish/clifile, the following example loads this file:

varnishd -a :80 -f /etc/varnish/default.vcl -I /etc/varnish/clifile

If any of the commands fail, varnishd will refuse to start. Commands that are prefixed with - will, however, not abort varnishd startup upon failure.
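
For instance, if loading an optional configuration shouldn’t block startup, that line can be prefixed with a dash. Here’s a sketch that extends the file above, where extra.vcl is hypothetical:

vcl.load s1 /etc/varnish/server1.vcl
vcl.load s2 /etc/varnish/server2.vcl
vcl.label server1 s1
vcl.label server2 s2
-vcl.load extra /etc/varnish/extra.vcl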

Quoting pitfalls

Using quoted or multi-line strings in the CLI can lead to unexpected behavior.

Expansion

CLI commands take a set number of arguments. If one of the arguments happens to be a multi-word string, you’ll need to use quotes. However, if you run these commands outside of the CLI shell and inside the shell of your operating system, double expansion takes place.

The quoting examples hinge on the fact that we want to override the cc_command runtime parameter. In reality you’ll rarely change the value of this parameter. We selected this example because it’s one of the few parameters that takes a string argument.

Imagine that we want to set the value of cc_command to my alternate cc command.

You might set the parameter as follows:

varnish> param.set cc_command my alternate cc command
105
Too many parameters

As you can see, only my would be interpreted as the value; the other words are considered extra arguments, and the command fails. The solution is to add quotes, as illustrated in the following example:

varnish> param.set cc_command "my alternate cc command"
200

Change will take effect when VCL script is reloaded

If we turn this example into a varnishadm one-liner, executed from the operating system’s shell, we get the following result:

$ varnishadm param.set cc_command "my alternate cc command"
Too many parameters

Command failed with error code 105

The string is tokenized twice: once by the operating system’s shell, which removes the double quotes, and once by the Varnish CLI, which then sees four separate arguments. To work around this, we add an extra layer of quotes.

$ varnishadm param.set cc_command '"my alternate cc command"'

Change will take effect when VCL script is reloaded

If you want to pass an environment variable into a varnishadm CLI command, even more quoting magic is required:

$ TEST="Varnish"
$ varnishadm param.set cc_command '"'$TEST'"'

Change will take effect when VCL script is reloaded

The extra pair of single quotes is required; otherwise $TEST would end up inside a single-quoted string and would be passed literally, without being expanded by the shell.

Heredoc

If you want to pass multi-line content using the CLI, you may use Heredoc notation.

Here’s an example where we use the cat program to output a multi-line string that is defined using Heredoc syntax:

$ cat <<EOF
Thijs
Feryn
EOF
Thijs
Feryn

Within the Varnish CLI, you cannot use <<EOF as the start of a multi-line string. Here’s what you get when you do:

varnish> vcl.inline test <<EOF
106
Message from VCC-compiler:
VCL version declaration missing
Update your VCL to Version 4 syntax, and add
	vcl 4.1;
on the first line of the VCL files.
('<vcl.inline>' Line 1 Pos 1)
<<EOF
##---

Running VCC-compiler failed, exited with 2
VCL compilation failed

To make Heredoc-style input work, you need to add a space between << and EOF, as illustrated below:

varnish> vcl.inline test << EOF
varnish> vcl 4.1;
varnish> backend default none;
varnish> sub vcl_recv {
varnish>     return(synth(200));
varnish> }
varnish> EOF
200
VCL compiled.

Remember: the Varnish CLI format for Heredoc text requires an extra space, but outside of the CLI scope this no longer applies. If you want this to work using a varnishadm one-liner, you need to quote the Heredoc.

Here’s the previous example again, but inserted from outside of the CLI scope:

$ varnishadm vcl.inline test '<< EOF
vcl 4.1;
backend default none;
sub vcl_recv {
	return(synth(200));
}
EOF'
VCL compiled.
