In the case of cloud instances, or any other platform featuring clusters that
can shrink and grow dynamically, the nodes.conf file needs to be updated, and
vha-agent needs to re-read it when a change occurs. This is the role of
varnish-discovery: it watches a given source, such as the VAC API or DNS
records, and creates or updates a nodes.conf file tracking the relevant
machines. In addition, it can send a SIGHUP to vha-agent (or any process) to
warn it that the file has changed.
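Conceptually, the cycle above (watch a source, rewrite the node file, signal the agent) can be sketched in a few lines of shell. This is only an illustration: the update_nodes helper, the use of getent, and the one-address-per-line output are assumptions made for the sketch, not varnish-discovery's actual behaviour or the real nodes.conf syntax.

```shell
#!/bin/sh
# Illustrative sketch of the cycle for a DNS source: resolve a name,
# rewrite the node file when the result changes, then SIGHUP the pid
# found in the pid file. NOT the real implementation or file format.
update_nodes() {
    domain=$1; nodefile=$2; pidfile=$3
    # Resolve the domain and keep one address per line.
    getent hosts "$domain" | awk '{ print $1 }' | sort -u > "$nodefile.tmp"
    if ! cmp -s "$nodefile.tmp" "$nodefile"; then
        mv "$nodefile.tmp" "$nodefile"
        # Warn the watcher (e.g. vha-agent) that the file changed.
        if [ -f "$pidfile" ]; then
            kill -HUP "$(cat "$pidfile")"
        fi
    else
        rm -f "$nodefile.tmp"
    fi
}

# The real tool would loop, checking the source periodically, e.g.:
#   while :; do
#       update_nodes example.com /etc/varnish/nodes.conf \
#           /run/vha-agent/vha-agent.pid
#       sleep 2
#   done
```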
It is distributed as its own package.
vha-agent must expose a pid file using the -P argument. Starting with 2.1.1,
the varnish-plus-ha packages are configured to create that file as
/var/run/vha-agent/vha-agent.pid on sysv platforms, and varnish-discovery uses
it by default.
It is also recommended to run vha-agent without the -m switch, i.e. to let it
use the hostname of the machine as its node name: varnish-discovery will use
it when no node name is readily available (with DNS, for example).
varnish-discovery supports multiple backends to generate a nodes.conf file,
each with its own set of switches (most are optional), so to help you hit the
ground running, here's a selection of examples.
Note: if you just need to create the file and then exit (e.g. before you start
VHA for the first time), you can simply append --once to those commands.
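For instance, to preview the generated output on stdout without touching anything, you can combine --once with --nodefile - (where "-" means stdout, as described in the option reference below); the domain is a placeholder:

```shell
# One-shot run: print the node list to stdout and exit.
# example.com is a placeholder domain.
/usr/bin/varnish-discovery dns \
    --nodefile - \
    --once \
    --group example.com
```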
--group specifies an autoscaling group on AWS. This feature relies on the AWS
SDK, so if awscli is properly configured and works, this subcommand only
requires the group name:

    /usr/bin/varnish-discovery aws \
        --nodefile /etc/varnish/nodes.conf \
        --warnpid /run/vha-agent/vha-agent.pid \
        --group $GROUP_NAME
We need to specify a domain name that will be checked every now and then (see
--every):

    /usr/bin/varnish-discovery dns \
        --nodefile /etc/varnish/nodes.conf \
        --warnpid /run/vha-agent/vha-agent.pid \
        --group $DOMAIN_NAME
If running inside a pod, varnish-discovery will be able to find the
information and token needed to access the API server, so there's no need to
specify them:

    /usr/bin/varnish-discovery k8s \
        --nodefile /etc/varnish/nodes.conf \
        --warnpid /run/vha-agent/vha-agent.pid \
        --api "https://$K8S_API/" \
        --group $ENDPOINT \
        --port $PORT
If you don't specify the port, varnish-discovery will infer it from the
protocol you use (http if not specified).
If your node is listed by more than one endpoint, you must use --group,
otherwise the node will be present in multiple clusters and VHA won't know
what to do with it.
For the Varnish Administration Console, we only need to point to the API's
address; varnish-discovery will be able to figure out its own cluster:

    /usr/bin/varnish-discovery vac \
        --nodefile /etc/varnish/nodes.conf \
        --warnpid /run/vha-agent/vha-agent.pid \
        --api $LOGIN:$PASSWD@$VAC_IP
All packages offer service files; how you edit them is going to depend on your
platform. For sysv, you can edit varnish-discovery.params, located in either
/etc/default/ or /etc/sysconfig, to change the arguments. For systemd, you can
create a drop-in file and redefine the ExecStart parameter, for example:
    cat > /etc/systemd/system/varnish-discovery.service.d/exec.conf << EOF
    [Service]
    ExecStart=
    ExecStart=/usr/bin/varnish-discovery dns --group localhost --ipv4 --nodefile /etc/varnish/nodes.conf --warnpid /run/vha-agent/vha-agent.pid
    EOF
    systemctl daemon-reload
--nodefile NODEFILE (-)
Output the cluster information in NODEFILE, with “-” meaning stdout.
--proto PROTO (http)
If the source of information doesn't specify the protocol used, we fall back to PROTO.
--port PORT
If the source of information doesn't specify the port used, we fall back to PORT. If PORT is omitted, it is inferred from the protocol, and if "0" is specified, the first port is used (useful in the kubernetes case).
--warnpid PIDFILE
What pidfile should be read to warn the owners when the configuration changes. This option can be specified multiple times. PIDFILE will be read every time a signal has to be sent.
--ipv4 / --ipv6
Restrict which version of the IP protocol should be used (useful in the DNS case). If neither is supplied, both are used.
--once
By default, varnish-discovery monitors its source of information indefinitely; with this option it only does so once before exiting. A non-zero return code indicates an error during the run.
--every SECONDS (2)
This specifies how frequently varnish-discovery should contact the source. For DNS and VAC, this means the duration between two requests (from start to start). In the kubernetes case, which uses long-polling, it tells varnish-discovery how long it should wait before trying again in case of failure (again, this is the time since the start of the failed request).
--group GROUP
This option can be used multiple times and tells varnish-discovery the name to look for. The meaning differs depending on the source, but it always represents a handle behind which multiple IPs can hide:
If the AWS region hasn't been configured yet (using aws configure), you may
specify it here.
By default, varnish-discovery tries to find the IP corresponding to the local machine in the resolved list and replaces it with the local hostname. This is useful because vha-agent, by default, looks for the hostname to find the local machine in the list. This switch disables this find-and-replace behaviour.
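The substitution itself boils down to a find-and-replace over the resolved list. A minimal sketch (the replace_local_ip helper and the use of sed and hostname are illustrative assumptions, not the tool's internals):

```shell
#!/bin/sh
# Sketch of the behaviour described above: given a resolved address list
# on stdin, swap the local machine's IP for its hostname so the local
# node can be identified by name.
# Note: the dots in the IP are regex metacharacters to sed; that is
# acceptable for a demonstration.
replace_local_ip() {
    sed "s/^$1\$/$(hostname)/"
}

# 192.0.2.x are documentation addresses; pretend .10 is the local machine.
printf '192.0.2.7\n192.0.2.10\n' | replace_local_ip 192.0.2.10
```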
--namespace PATH (/var/run/secrets/kubernetes.io/serviceaccount/namespace)
When operating inside a pod, varnish-discovery can find how to communicate with the kubernetes API using the conventional defaults, but they can all be overridden to work outside of a pod or to connect to a non-standard URL.
Note that in k8s’ case, if no endpoint is specified, varnish-discovery will use the one owning the pod it’s running on. This allows for minimal configuration based on context.
Where to find the VAC API. URL must include the login and password to correctly authenticate to the API.
As with the dns command, varnish-discovery will try to replace the local IP with the local hostname.