See the installation guide for detailed instructions for your platform, as well as an automated installation script.
Follow the post-installation steps to ensure a successful installation and service availability.
For the access control integration to work, a premade VCL is necessary in addition to your own VCL. Execute the following command to download artifactory.vcl:
curl -o /etc/varnish/artifactory.vcl https://raw.githubusercontent.com/varnish/toolbox/refs/heads/main/vcls/artifactory/artifactory.vcl
Varnish will automatically search the /etc/varnish directory for included VCLs, so the above command makes it possible to include "artifactory.vcl"; in your own VCL.
Open /etc/varnish/default.vcl in your favorite editor and replace its contents with the following:
vcl 4.1;

import udo;
import activedns;

include "artifactory.vcl";

backend default none;

sub vcl_init {
    new artifactory_director = udo.director(random);
    new artifactory_group = activedns.dns_group("http://<host>:<port>");
    artifactory_director.subscribe(artifactory_group.get_tag());
}

sub vcl_recv {
    set req.backend_hint = artifactory_director.backend();
    set req.http.Host = "<host>";
}
Note: Make sure to replace <host> with Artifactory’s DNS name or IP address and <port> with Artifactory’s port. To communicate with Artifactory over TLS, configure the DNS group with https:// instead of http://.
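For illustration, here is the same VCL with the placeholders filled in, assuming a hypothetical Artifactory instance reachable at artifactory.internal on port 8081:

```vcl
sub vcl_init {
    new artifactory_director = udo.director(random);
    # artifactory.internal:8081 is a hypothetical address; substitute your own
    new artifactory_group = activedns.dns_group("http://artifactory.internal:8081");
    artifactory_director.subscribe(artifactory_group.get_tag());
}

sub vcl_recv {
    set req.backend_hint = artifactory_director.backend();
    set req.http.Host = "artifactory.internal";
}
```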
Then run the following command to reload the VCL:
systemctl reload varnish
You can now try pulling an artifact through Varnish. Let’s try pulling a Docker image:
docker login <varnish host>:6081
docker pull <varnish host>:6081/repository/image
Example output:
user@host:~# docker login <varnish-host>:6081
Login Succeeded
user@host:~# docker pull <varnish-host>:6081/repository/image
latest: Pulling from repository/image
e6590344b1a5: Pull complete
Digest: sha256:e0b569a5163a5e6be84e210a2587e7d447e08f87a0e90798363fa44a0464a1e8
Status: Downloaded newer image for <varnish-host>:6081/repository/image
<varnish-host>:6081/repository/image
Varnish offers many different ways to observe how traffic is handled, usually through separate programs that read Varnish’s shared memory log. Here are some of the most commonly used tools:
varnishlog gives you verbose logs with all the details of every transaction
varnishncsa gives you one-liner access logs in the format of your choice
varnishlog-json (experimental) gives you one-liner JSON logs in a nice format
varnishstat gives you a live view of the Varnish metrics and counters
To verify that Varnish is caching artifacts, we can for example use varnishncsa to inspect the logs and see which URLs have gotten a cache HIT:
varnishncsa -q 'ReqMethod eq GET and ReqHeader:X-Preflight eq authorized and Hit'
This will output a log line to our terminal once a matching transaction is completed. To trigger a cache HIT, we can run a docker pull
twice, removing the image between each call:
docker pull <varnish host>:6081/repository/image
docker rmi -f <varnish host>:6081/repository/image
docker pull <varnish host>:6081/repository/image
Provided Varnish has been configured correctly, you should see output similar to this:
::1 - - [20/Feb/2025:16:14:23 +0000] "GET http://<varnish host>:6081/v2/repository/image/manifests/sha256:e0b569a5163a5e6be84e210a2587e7d447e08f87a0e90798363fa44a0464a1e8 HTTP/1.1" 200 12662 "-" "docker/27.2.0 go/go1.21.13 git-commit/3ab5c7d kernel/6.11.0-9-generic os/linux arch/amd64 UpstreamClient(Docker-Client/27.2.0 \\(linux\\))"
::1 - - [20/Feb/2025:16:14:23 +0000] "GET http://<varnish host>:6081/v2/repository/image/manifests/sha256:03b62250a3cb1abd125271d393fc08bf0cc713391eda6b57c02d1ef85efcc25c HTTP/1.1" 200 1035 "-" "docker/27.2.0 go/go1.21.13 git-commit/3ab5c7d kernel/6.11.0-9-generic os/linux arch/amd64 UpstreamClient(Docker-Client/27.2.0 \\(linux\\))"
::1 - - [20/Feb/2025:16:14:23 +0000] "GET http://<varnish host>:6081/v2/repository/image/blobs/sha256:74cc54e27dc41bb10dc4b2226072d469509f2f22f1a3ce74f4a59661a1d44602 HTTP/1.1" 200 547 "-" "docker/27.2.0 go/go1.21.13 git-commit/3ab5c7d kernel/6.11.0-9-generic os/linux arch/amd64 UpstreamClient(Docker-Client/27.2.0 \\(linux\\))"
::1 - - [20/Feb/2025:16:14:23 +0000] "GET http://<varnish host>:6081/v2/repository/image/blobs/sha256:e6590344b1a5dc518829d6ea1524fc12f8bcd14ee9a02aa6ad8360cce3a9a9e9 HTTP/1.1" 200 2436 "-" "docker/27.2.0 go/go1.21.13 git-commit/3ab5c7d kernel/6.11.0-9-generic os/linux arch/amd64 UpstreamClient(Docker-Client/27.2.0 \\(linux\\))"
By default, Varnish accepts incoming (plaintext) HTTP traffic on port 6081
on all interfaces. This is configured in the systemd service file:
systemctl cat varnish
To change the port where Varnish listens for incoming (non-TLS) traffic, you can change the -a
argument in the service file:
sudo systemctl edit --full varnish
The -a :6081
line directly under ExecStart=/usr/sbin/varnishd
can be changed to any other port, for example -a :80
. Additional -a
arguments can be added to listen to multiple endpoints.
Example:
(...)
ExecStart=/usr/sbin/varnishd \
-a :80 \
(...)
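As mentioned above, additional -a arguments can be stacked to listen on several endpoints at once. A sketch with two listen addresses (the second, localhost-only endpoint is a hypothetical example):

```
(...)
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -a 127.0.0.1:6081 \
(...)
```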
To accept incoming TLS traffic, Varnish must be configured with one or more TLS listen endpoints. These can be configured in a separate configuration file, which is located at /etc/varnish/tls.cfg
.
In the TLS config file, change pem-file from /etc/varnish/certs/example.com to the path of your PEM file, which should contain the private key, the public certificate, and any intermediate certificates.
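The combined PEM file is simply those components concatenated in the order listed above: private key first, then the certificate, then any intermediates. A minimal sketch, using placeholder files in place of real key material (all file names here are hypothetical):

```shell
# Sketch: a combined PEM is the key, certificate and intermediates
# concatenated in that order. Placeholder files stand in for real material.
workdir=$(mktemp -d) && cd "$workdir"
echo "key material (placeholder)"     > privkey.pem  # your private key
echo "leaf certificate (placeholder)" > cert.pem     # your certificate
echo "intermediates (placeholder)"    > chain.pem    # any intermediate certs
cat privkey.pem cert.pem chain.pem > cert-combined.pem
```

With real files, the same cat pipeline produces the file that pem-file in tls.cfg should point to.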
Example:
frontend = {
host = "*"
port = "443"
}
pem-file = "/path/to/cert.pem"
(...)
Finally, add -A /etc/varnish/tls.cfg
to the Varnish service file with:
sudo systemctl edit --full varnish
Example:
(...)
ExecStart=/usr/sbin/varnishd \
-A /etc/varnish/tls.cfg \
(...)
For more information, see the Client TLS documentation.
The persistent caching system in Varnish Enterprise is called Massive Storage Engine 4 (MSE4). This allows you to extend and persist the cache, preferably with directly attached SSD drives.
To avoid issues with SELinux permissions, the persisted cache should reside in mount points under /var/lib/mse
:
mkdir -p /var/lib/mse
Make sure this directory has the right user/group:
chown -R varnish:varnish /var/lib/mse
It is strongly recommended to use EXT4 file system(s) as the backing for your MSE4 persistent cache, as other file systems have issues with fragmentation and poor performance.
The following steps will assume that /var/lib/mse/disk1
is either a directory in an EXT4 file system, or a mount point to an EXT4 file system on the drive you wish to persist your cache to.
Multiple mount points can be created under /var/lib/mse/
for systems with multiple drives.
Create the MSE4 configuration file:
touch /etc/varnish/mse4.conf
Copy/paste the following example configuration into /etc/varnish/mse4.conf
:
env: {
    books = ( {
        id = "book";
        filename = "/var/lib/mse/disk1/book";
        size = "5G";
        stores = ( {
            id = "store";
            filename = "/var/lib/mse/disk1/store";
            size = "2043G";
        } );
    } );
};
This configuration assumes that /var/lib/mse/disk1
has at least 2 TB free space. Adjust the size
of the store to fit your drive.
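The numbers in the example line up with a 2 TiB drive: the 5G book plus the 2043G store add up to 2048 GiB. A sketch of the same arithmetic for sizing your own store (the drive capacity below is a hypothetical value):

```shell
# Sketch: size the store as drive capacity minus the book.
drive_gib=2048   # hypothetical drive capacity in GiB (2 TiB)
book_gib=5       # matches the book size in the example config
store_gib=$(( drive_gib - book_gib ))
echo "${store_gib}G"   # value to use for the store's size field
```

For the values above this prints 2043G, matching the example configuration.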
The mkfs.mse4
tool is bundled with Varnish Enterprise, and can be used to initialize and resize the persistent cache when Varnish is not running.
mkfs.mse4 -c /etc/varnish/mse4.conf configure
Change the -s mse
argument in the Varnish service file to -s mse4,/etc/varnish/mse4.conf
:
sudo systemctl edit --full varnish
Example:
(...)
ExecStart=/usr/sbin/varnishd \
(...)
-s mse4,/etc/varnish/mse4.conf
(...)
If Varnish is already running, restart the service:
sudo systemctl restart varnish