This section highlights some simple use cases for the broadcaster. While it was designed as a generic HTTP request distributor, its main purpose is the invalidation of cached content.
From version 1.2.4, the API path supports versioning in the form of `/api/v1`. For now, `/api/` is still supported and points to version `v1`.
The examples below use the following `nodes.conf`:

```
[GroupA]
Cache1=http://example.com:81
Cache2=http://example.com:82
Cache3=http://example.com:83
[GroupB]
Cache4=http://example.com:84
Cache5=http://example.com:85
[GroupC]
Cache6=http://example.com:86
Cache7=http://example.com:87
Cache8=http://example.com:88
```
For each case (purge, ban, ykey), these examples assume the Varnish nodes are configured as described in the invalidation tutorial.
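Since `nodes.conf` uses INI-style sections, its group-to-node mapping is easy to illustrate. The sketch below is purely hypothetical (the broadcaster is written in Go and does not expose this function); it only shows how the sections map to named cache nodes, using Python's `configparser`:

```python
import configparser

# A subset of the example nodes.conf from above.
NODES_CONF = """
[GroupA]
Cache1=http://example.com:81
Cache2=http://example.com:82
Cache3=http://example.com:83
[GroupB]
Cache4=http://example.com:84
Cache5=http://example.com:85
"""

def parse_nodes(text: str) -> dict[str, dict[str, str]]:
    """Return {group: {node_name: address}} from nodes.conf text."""
    cp = configparser.ConfigParser()
    # Preserve the case of node names (configparser lowercases keys by default).
    cp.optionxform = str
    cp.read_string(text)
    return {group: dict(cp[group]) for group in cp.sections()}

groups = parse_nodes(NODES_CONF)
print(groups["GroupA"]["Cache1"])  # http://example.com:81
```

Each section name becomes an `X-Broadcast-Group` value, and each key/value pair is a node name and the address the broadcaster forwards requests to.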
A purge invalidates all objects with the same hash as the invalidation request, so make sure to specify the `host` header:

```
# Purge the /something/to/purge URL from all the nodes
curl http://localhost:8088/something/to/purge -X PURGE -H "host: my.domain.com"
# Only target the GroupA nodes, and connect to the broadcaster over HTTPS
curl https://localhost:8088/something/to/purge -X PURGE -H "host: my.domain.com" -H "X-Broadcast-Group: GroupA" --cacert server.crt
```
For banning, we have access to regular expressions, allowing us to target multiple subdomains and entire subtrees.
```
# Ban all objects under /foo/
curl -is http://localhost:8088/ -X BAN -H "ban-url: ^/foo/"
# Ban everything in a *.foo.com subdomain
curl -is http://localhost:8088/ -X BAN -H "ban-host: \.foo\.com$"
# Ban all the content in foo.com/static/ from the GroupB nodes
curl -is http://localhost:8088/ -X BAN -H "ban-host: ^foo\.com$" -H "ban-url: ^/static/" -H "X-Broadcast-Group: GroupB"
```
For ykey, we just need the key that should be invalidated:
```
# Invalidate all ykey objects marked with "a1b2c3d4f5g6"
curl -is http://localhost:8088/ -H "ykey-purge: a1b2c3d4f5g6"
```
The broadcaster can also act as a load balancer and pick only one node, at random, out of a group, using the `X-Broadcast-Random` header.
If a group is treated “randomly”, the broadcaster picks only one node from it, but if it can’t get an HTTP response out of that node, it cycles through the group until it gets a response.
Note: the HTTP status code of the response isn’t taken into account, only whether or not a response exists.
The `X-Broadcast-Group` header has precedence over `X-Broadcast-Random`, so if a group appears in both, the group is treated fully.
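The selection rules above can be sketched as follows. This is a hypothetical illustration, not the broadcaster's actual implementation: the group layout mirrors the example `nodes.conf`, and `select_nodes`, its parameters, and the `reachable` callback are all invented for this sketch.

```python
import random

# Groups from the example nodes.conf (node names only).
GROUPS = {
    "GroupA": ["Cache1", "Cache2", "Cache3"],
    "GroupB": ["Cache4", "Cache5"],
    "GroupC": ["Cache6", "Cache7", "Cache8"],
}

def select_nodes(full_groups, random_groups, reachable=lambda node: True):
    """full_groups:   groups named in X-Broadcast-Group (treated fully).
    random_groups: groups named in X-Broadcast-Random ("*" matches all).
    X-Broadcast-Group wins when a group appears in both."""
    targets = []
    for group, nodes in GROUPS.items():
        if group in full_groups:
            # Precedence: the whole group is broadcast to.
            targets.extend(nodes)
        elif "*" in random_groups or group in random_groups:
            # Pick one node at random; cycle through the group until
            # some node yields an HTTP response (status code ignored).
            candidates = random.sample(nodes, len(nodes))
            for node in candidates:
                if reachable(node):
                    targets.append(node)
                    break
    return targets
```

With `full_groups={"GroupA"}` and `random_groups={"*"}`, all three GroupA nodes are targeted plus one node each from GroupB and GroupC, matching the examples below.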
As an example, you can send a request to one node in each group:

```
curl -is https://localhost:8088/endpoint/to/trigger -H "X-Broadcast-Random: *"
```
With a possible response being:
```
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 11 Dec 2017 12:34:33 GMT
Content-Length: 120

{
  "done": true,
  "method": "GET",
  "uri": "/something/to/fetch",
  "ts": 1512995646,
  "rate": 100,
  "nodes": {
    "Cache2": 200,
    "Cache5": 200,
    "Cache8": 200
  }
}
```
But you can also send the request to all the `GroupA` nodes and pick just one out of `GroupB` and `GroupC`:

```
curl -is https://localhost:8088/endpoint/to/trigger -H "X-Broadcast-Random: *" -H "X-Broadcast-Group: GroupA"
```
Giving us:
```
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 11 Dec 2017 12:34:33 GMT
Content-Length: 101

{
  "done": true,
  "method": "GET",
  "uri": "/something/to/fetch",
  "ts": 1512995646,
  "rate": 100,
  "nodes": {
    "Cache1": 200,
    "Cache2": 200,
    "Cache3": 200,
    "Cache5": 200,
    "Cache7": 200
  }
}
```
A multi-layer cache setup will have some caches closer to the origin and other caches closer to the user. To ensure data consistency, the content must be purged first from the layer closest to the origin, then progress downstream, all the way to the edge servers. This means that groups must be purged in order instead of all at once, which is what the `X-Broadcast-InOrder` header enables.
Assuming that in the example `nodes.conf`:

- `GroupA` is the origin shield, right in front of the origin,
- `GroupB` is a storage tier using `GroupA` as backend,
- `GroupC` is the edge tier serving the users, using `GroupB` as backend,

then we can purge them all, in the right order, with:

```
$ curl -is localhost:8088/foo -X PURGE -H "host: my.domain.com" -H "X-Broadcast-Group: GroupA GroupB GroupC" -H "X-Broadcast-InOrder: true"
```

Note that the default group order is taken from the config file, from top to bottom. Taking advantage of this, we can omit the `X-Broadcast-Group` header and get the same purge order as the previous command:

```
$ curl -is localhost:8088/foo -X PURGE -H "host: my.domain.com" -H "X-Broadcast-InOrder: true"
```
Output sample in synchronous mode:
```
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 11 Dec 2017 12:34:33 GMT
Content-Length: 101

{
  "done": true,
  "method": "PURGE",
  "uri": "/foo/bar",
  "ts": 1512995646,
  "rate": 25,
  "nodes": {
    "Cache1": 503,
    "Cache2": 500,
    "Cache3": 503,
    "Cache4": 200
  }
}
```
Output sample in async mode:
```
HTTP/1.1 202 Accepted
X-Job-Id: 570204483
Date: Mon, 11 Dec 2017 12:20:59 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```
When in async mode, the broadcaster returns immediately to the client with status code `202` and an `X-Job-Id` header. Use the value of this header with the `/api/status` endpoint to check whether the invalidation request has finished.
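The async workflow amounts to submitting the request, remembering `X-Job-Id`, and polling the status endpoint until the job reports `"done": true`. A minimal polling sketch is shown below; `wait_for_job` and the injectable `fetch_status` callback are inventions of this sketch (in practice `fetch_status` would GET `http://localhost:8089/api/status?id=<job_id>` and decode the JSON body):

```python
import time

def wait_for_job(job_id, fetch_status, poll_interval=0.5, timeout=30.0):
    """Poll until the job reports "done": true, or raise on timeout.

    fetch_status(job_id) should return the decoded status dict for the
    job, or None if the broadcaster has no answer yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status and status.get("done"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```

Keeping the HTTP fetch injectable makes the loop easy to test and lets callers reuse whatever HTTP client they already have.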
The `/api/configuration` endpoint returns the broadcaster's current configuration: the configured groups and nodes, and whether async mode is enabled.

```
curl http://localhost:8089/api/configuration

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Job-Id
Content-Type: application/json
Date: Wed, 12 Dec 2017 09:40:59 GMT
Content-Length: 313

{
  "Local": [
    {
      "address": "https://localhost:7081",
      "name": "beta"
    },
    {
      "address": "https://localhost:4443",
      "name": "charlie"
    },
    {
      "address": "https://localhost:6081",
      "name": "alpha"
    }
  ],
  "Remote": [
    {
      "address": "https://burlan.eu",
      "name": "second"
    },
    {
      "address": "https://apienginedemo.varnish-software.com",
      "name": "third"
    }
  ],
  "async": false
}
```
The `/api/ping` endpoint can be used as a health check:

```
curl -I http://localhost:8089/api/ping

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Job-Id
Content-Length: 0
X-Pong: 1631603497
Date: Tue, 14 Sep 2021 07:11:37 GMT
```
Without arguments, the `/api/status` endpoint lists recent invalidation jobs:

```
curl -is http://localhost:8089/api/status

HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 11 Dec 2017 12:42:29 GMT
Content-Length: 574

{
  "510158363": {
    "done": true,
    "method": "PURGE",
    "uri": "/foo/bar",
    "ts": 1512995673,
    "rate": 25,
    "nodes": {
      "Cache1": 503,
      "Cache2": 500,
      "Cache3": 503,
      "Cache4": 200
    }
  },
  "752412541": {
    "done": true,
    "method": "BAN",
    "uri": "/foo/bar",
    "ts": 1512995630,
    "rate": 25,
    "nodes": {
      "Cache1": 503,
      "Cache2": 500,
      "Cache3": 503,
      "Cache4": 200
    }
  },
  "2377945909": {
    "done": true,
    "method": "PURGE",
    "uri": "/foo/bar",
    "ts": 1512995637,
    "rate": 25,
    "nodes": {
      "Cache1": 503,
      "Cache2": 500,
      "Cache3": 503,
      "Cache4": 200
    }
  }
}
```
A single job can be queried by id:

```
curl -is http://localhost:8089/api/status?id=2377945909

HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 11 Dec 2017 12:42:43 GMT
Content-Length: 101

{
  "done": true,
  "ts": 1512995637,
  "method": "PURGE",
  "uri": "/foo/bar",
  "rate": 25,
  "nodes": {
    "Cache1": 503,
    "Cache2": 500,
    "Cache3": 503,
    "Cache4": 200
  }
}
```
If the queried job hasn't finished yet, the endpoint responds with `202 Accepted`:

```
curl -is http://localhost:8089/api/status?id=752412541

HTTP/1.1 202 Accepted
X-Job-Id: 752412541
Date: Mon, 11 Dec 2017 12:20:59 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```
From version 1.6, it's possible to request the number of failed and successful jobs using the status API. This is a fast way to get an overview of whether any jobs have failed. A request is counted as failed when at least one of the configured nodes failed to handle the broadcast for a job; it is counted as successful when all nodes handled the broadcast successfully.
```
curl -is http://localhost:8089/api/status/count

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Job-Id
Content-Type: application/json
Date: Tue, 18 Oct 2022 12:54:17 GMT
Content-Length: 34

{
  "Successful": 0,
  "Failed": 1
}
```
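The counting rule above can be sketched in a few lines. This is an illustration, not the broadcaster's implementation: `count_jobs` is invented here, and it assumes (per the `rate` definition later in this section) that only a `200` status counts as a node handling the request successfully.

```python
def count_jobs(jobs):
    """jobs: iterable of {node_name: status_code} dicts, one per broadcast.

    A job is Successful only when every node returned 200; it is
    Failed as soon as at least one node did not.
    """
    counts = {"Successful": 0, "Failed": 0}
    for nodes in jobs:
        if nodes and all(code == 200 for code in nodes.values()):
            counts["Successful"] += 1
        else:
            counts["Failed"] += 1
    return counts
```

Applied to the synchronous output sample earlier (three nodes failing, one succeeding), the job counts as a single failure, which matches the `"Failed": 1` in the response above.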
The stats endpoint returns statistics for broadcasts, in the JSON structure shown below. The statistics are grouped per configured group and node. They track the methods used, counters for the responses from nodes, and an aggregated `totalRequests` per scope.
- `sinceReset` shows the time since the statistics were last reset.
- `started` is the timestamp when the statistics were last reset.
- `totalBroadcasts` is the number of broadcasts performed since the last reset.
- `requestsPerSec` is the number of requests per second to nodes since the reset.

Fetch current statistics:
```
curl -is http://localhost:8089/api/stats

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Job-Id
Content-Type: application/json
Date: Tue, 20 Oct 2020 07:17:43 GMT
Content-Length: 720

{
  "started": "2020-10-20T09:17:40.458600175+02:00",
  "totalBroadcasts": 1,
  "sinceReset": "2.725s",
  "groups": {
    "global.cluster": {
      "hosts": {
        "server1": {
          "address": "http://127.0.0.1:1234",
          "requests": {
            "GET": {
              "200": 1
            }
          }
        },
        "server2": {
          "address": "http://127.0.0.1:1235",
          "requests": {
            "GET": {
              "500": 1
            }
          }
        }
      },
      "totalRequests": {
        "GET": {
          "200": 1,
          "500": 1
        }
      }
    }
  },
  "totalRequests": {
    "GET": {
      "200": 1,
      "500": 1
    }
  },
  "requestsPerSec": 0
}
```
Reset statistics:
```
curl -is localhost:8089/api/v1/stats/reset

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Job-Id
Content-Type: application/json
Date: Tue, 20 Oct 2020 07:21:28 GMT
Content-Length: 0
```
The status information for any invalidation request has the following structure:
Name | About |
---|---|
`method` | The HTTP method used for broadcasting. |
`uri` | The URI broadcast against the configured nodes. |
`done` | Boolean telling whether the job batch has finished. |
`ts` | Unix timestamp of when the invalidation request was sent onto the working channel. |
`rate` | Percentage of successful broadcasts. A broadcast is considered successful if its returned status code is `200`. |
`nodes` | Dictionary of the nodes that were broadcast against, along with their returned status codes. |
`err` | If present, contains errors that occurred at broadcast time. |
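The relationship between `rate` and `nodes` can be checked against the samples above. This sketch is an illustration only (`broadcast_rate` is invented here, and whether the broadcaster rounds or truncates fractional rates is not specified, so the sketch truncates):

```python
def broadcast_rate(nodes):
    """Percentage of nodes that returned 200, as reported in "rate"."""
    if not nodes:
        return 0
    ok = sum(1 for code in nodes.values() if code == 200)
    return 100 * ok // len(nodes)

# The synchronous output sample: one node out of four returned 200.
sample = {"Cache1": 503, "Cache2": 500, "Cache3": 503, "Cache4": 200}
print(broadcast_rate(sample))  # 25
```

This matches the `"rate": 25` in the purge sample and the `"rate": 100` in the earlier examples where every node returned `200`.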