In order to install the Varnish-Broadcaster on either Debian/Ubuntu or Red Hat Enterprise Linux, access to Varnish Plus is required. Please get in touch via email@example.com for more information about Varnish Plus.
If you are installing on Debian or Ubuntu, use the prebuilt packages:
Add the Varnish Plus repository for the Varnish-Broadcaster.
Update and install:
```
$ apt-get update
$ apt-get install varnish-broadcaster
```
Currently, RPMs are only available for RHEL6, RHEL7, and compatible derivatives.
Add the Varnish-Broadcaster yum repository as per the Varnish Plus instructions.
```
$ yum update
$ yum install varnish-broadcaster
```
The broadcaster will start with its default configuration file and the log level set to INFO.
```
$ systemctl start broadcaster
```
If successfully started, the broadcaster will expose two ports: by default, 8088 for broadcasting requests and 8089 for management. See the section below for the other available options.
Start the broadcaster with any of the options below. All of them are preconfigured with default values, except the cfg option, which points to the file containing the nodes to broadcast against.
The broadcaster does not specifically need to run in its own VM; however, if it runs on the same node as Varnish and Hitch, take the broadcaster's HTTPS configuration into account to avoid a port collision with Hitch.
| Description | Default |
|---|---|
| The port on which the broadcaster is exposed. | 8088 |
| Listening port for the broadcaster's management interface. | 8089 |
| Path to a file containing the configured nodes. | none (required) |
| Enable async mode. If true, any incoming request returns immediately with an `X-Job-Id` header. | false |
| The TTL of a finished invalidation request. When done, every invalidation request is kept in memory for the specified amount of time, for status reporting purposes only. | 10 minutes |
| Set the log level. Available options: debug, info, warning, error, quiet. | info |
| Broadcaster HTTPS listening port. | 8443 |
| CRT file used for HTTPS support. | none (required for HTTPS) |
| KEY file used for HTTPS support. | none (required for HTTPS) |
| Show the current version and exit. | |
| Set the broadcaster's running host. | 0.0.0.0 |
| Connection keep-alive duration. | 1 minute |
| A flag which tells whether keep-alive should be disabled. | false |
| Path to a file where the broadcaster writes its pid. | /run/varnish-broadcaster/broadcaster.pid |
| Proxy host for the user interface. | |
| Proxy port for the user interface. | |
| Proxy protocol for the user interface. | |
| Connection timeout between the broadcaster and the configured nodes. | 10 seconds |
| TLS verification mode towards the nodes (AUTO, CLIENT, SERVER, NONE). | AUTO |
The broadcaster handles TLS certificate verification towards the nodes in four different ways:

- `AUTO`: the default. If the node in the configuration file is a hostname, that hostname is used for server name verification. If it is an IP, the `Host` header from the incoming request is used. The local server IP/host is never used in the validation.
- `CLIENT`: always use the `Host` header of the incoming request for server name verification.
- `SERVER`: always use the host/IP from the configuration file.
- `NONE`: do not perform any TLS server name verification. This trusts ALL TLS certificates (note that this is insecure and not recommended in production).
All logs from the broadcaster will be written to stdout. Using systemd, these logs can be seen using journald.
By default, the broadcaster listens for plain HTTP on its configured port (default: 8088); however, if both the certificate and key options are set, it automatically switches to HTTPS and listens on the HTTPS port (default: 8443).
Setting `X-Broadcast-Random: *` tells the broadcaster to broadcast to only one node in each configured group, selected at random. If the selected node fails, the next one is tried, and so forth, until a node responds. `X-Broadcast-Group` takes precedence over `X-Broadcast-Random`: `X-Broadcast-Group: *` will override `X-Broadcast-Random: *`. Multiple groups are white-space separated. Mixing group names and `*` uses the group names and rejects the `*`.
The broadcaster can also treat each relevant cluster one after the other, which is useful for purging a multi-layer setup from upstream to downstream. The order is defined by `X-Broadcast-Group` or, if it is empty or set to `*`, by the `nodes.conf` file. Group names are white-space separated.
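As an illustration (the request method, path, and host are assumptions here; which methods the nodes actually accept depends on your VCL), a broadcast restricted to two groups could look like:

```
PURGE /some/page HTTP/1.1
Host: www.example.com
X-Broadcast-Group: Europe US
```

Sent to the broadcaster's listening port (default 8088), this request would be forwarded to every node in the `Europe` and `US` groups.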
See the examples section for usage examples.
From release 1.2.4, all requests towards the backends include the header `X-Broadcaster-Ua`, which carries the version of the broadcaster itself. The value has the form `X-Broadcaster-Ua: Broadcaster/<version>` (e.g. `Broadcaster/1.2.4`).
The broadcaster converts all request headers to canonical format: the first letter and any letter following a hyphen are converted to upper case, the rest to lower case. The HTTP standard requires all servers to treat header names in a case-insensitive fashion, so this normalization is only problematic if your server does not respect the standard. For example, the header `accept-encoding: text` is converted to `Accept-Encoding: text`.
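The canonicalization rule can be sketched in shell (this is an illustration of the stated rule, not the broadcaster's own code):

```shell
# Canonicalize an HTTP header name: upper-case the first letter and any
# letter following a hyphen, lower-case everything else.
canonicalize_header() {
  printf '%s\n' "$1" | awk 'BEGIN { FS = OFS = "-" } {
    for (i = 1; i <= NF; i++)
      $i = toupper(substr($i, 1, 1)) tolower(substr($i, 2))
    print
  }'
}

canonicalize_header "accept-encoding"    # Accept-Encoding
canonicalize_header "x-bROADCAST-group"  # X-Broadcast-Group
```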
The broadcaster requires a file containing the nodes to broadcast against. The file format is similar to the INI format. Below are a couple of snippets from a valid configuration file.
This configuration has two clusters (Europe/US) each with its own nodes:
```
# this is a comment
[Europe]
First = 18.104.22.168:9090
Second = 22.214.171.124:6081
Third = example.com

[US]
Alpha = http://[1::2]
Beta = 126.96.36.199
```
The following configuration has all the nodes available in the local cluster.
```
alpha = 188.8.131.52
bravo = [1:2::3]:45
charlie = https://184.108.40.206:90
delta = http://[1::2]
```
Note that a combination of the two configurations is not allowed. The following configuration is invalid:
```
alpha = 220.127.116.11
bravo = [1:2::3]:45
[US]
charlie = https://18.104.22.168:90
delta = http://[1::2]
```
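As a quick sanity check before deploying a file, a small helper (hypothetical, not part of the broadcaster) can detect this mixed layout:

```shell
# Succeed if the nodes.conf file does NOT mix top-level nodes with
# [group] sections (a layout the broadcaster rejects).
valid_nodes_conf() {
  awk '
    /^[[:space:]]*#/ { next }   # skip comments
    NF == 0          { next }   # skip blank lines
    /^\[/            { in_section = 1; next }
    /=/              { if (in_section) grouped = 1; else toplevel = 1 }
    END              { exit (toplevel && grouped) ? 1 : 0 }
  ' "$1"
}
```

Usage: `valid_nodes_conf nodes.conf || echo "invalid: mixed layout"`.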
If the broadcaster receives a `SIGHUP` signal, it triggers a reload of the configuration from disk, e.g. `kill -HUP "$(cat /run/varnish-broadcaster/broadcaster.pid)"` when using the default pid file path.