The API responds to requests for the following URLs:

`/key/<key>`
Retrieves stats for a single key. The key name must be URL encoded.

`/match/<regex>`
Retrieves a list of keys matching the URL encoded regular expression. Accepts the query parameter `verbose=1`, which will display all stats collected for the keys matched.

`/all`
Retrieves a list of all the keys we are currently tracking. Like `/match`, this also accepts `verbose=1` for verbose output.
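To make the endpoints concrete, here is a minimal Python sketch of querying them over HTTP. The base address (`http://localhost:6555`) and the helper names are assumptions for illustration; point it at wherever your VCS API is actually listening.

```python
import json
import urllib.parse
import urllib.request

# Assumed API address; adjust to match your VCS configuration.
VCS_API = "http://localhost:6555"

def get_key_stats(key):
    """Fetch stats for a single key; the key name must be URL encoded."""
    url = "%s/key/%s" % (VCS_API, urllib.parse.quote(key, safe=""))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def match_keys(regex, verbose=False):
    """List keys matching a URL encoded regular expression."""
    url = "%s/match/%s" % (VCS_API, urllib.parse.quote(regex, safe=""))
    if verbose:
        url += "?verbose=1"  # include all stats for the matched keys
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

print(get_key_stats("example.com"))
print(match_keys(r"(.*)\.gif$", verbose=True))
```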
For `/match/<regex>` and `/all`, VCS can produce sorted lists of keys. The command (which is appended to the URL) defines which sorting criterion should be used:
`/all/top`
Sort based on number of requests.

`/all/top_ttfb`
Sort based on the `ttfb_miss` field.

`/all/top_size`
Sort based on the `n_bodybytes` field.

`/all/top_miss`
Sort based on the `n_miss` field.

`/all/top_respbytes`
Sort based on number of bytes transmitted to clients.

`/all/top_reqbytes`
Sort based on number of bytes received from clients.

`/all/top_berespbytes`
Sort based on number of bytes fetched from backends.

`/all/top_bereqbytes`
Sort based on number of bytes transmitted to backends.

`/all/top_restarts`
Sort based on the `avg_restarts` field.

`/all/top_5xx`, `/all/top_4xx`, `/all/top_1xx`
Sort based on the number of HTTP response codes returned to clients, in buckets for 5xx, 4xx, 1xx, etc.

`/all/top_uniq`
Sort based on the `n_req_uniq` field.
Note that in the above, you can substitute `/all` with `/match/<regex>` to limit the listing to a specific set of keys. Further, a `/<k>` path segment can be appended, where `<k>` specifies the number of keys to include in the top list. If no `k` value is provided, the top 10 keys are displayed.
By default, the sorting is based on the data in the single latest window in the time series. To make it consider multiple windows, a query parameter `b=<n>` can be specified. Specifying `/all/top?b=5` will then result in a top list sorted by the combined number of requests over the 5 latest windows.
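As a rough illustration of how these pieces combine, the sketch below assembles top-list URLs from a sort command, an optional key count and an optional window count. The base address and the helper name are assumptions, not part of VCS itself.

```python
import urllib.parse

def top_list_url(sort="top", k=None, windows=None, match=None,
                 base="http://localhost:6555"):
    # Assumed base address; adjust to match your VCS configuration.
    if match is not None:
        prefix = "/match/" + urllib.parse.quote(match, safe="")
    else:
        prefix = "/all"
    url = "%s%s/%s" % (base, prefix, sort)
    if k is not None:
        url += "/%d" % k            # number of keys in the top list (default 10)
    if windows is not None:
        url += "?b=%d" % windows    # sum over the n latest windows
    return url

print(top_list_url())                               # top 10 keys by requests
print(top_list_url(sort="top_ttfb", k=50))          # 50 slowest (by ttfb_miss)
print(top_list_url(k=5, windows=3))                 # /all/top/5?b=3
print(top_list_url(match=r"(.*)\.gif$", k=10))      # top 10 .gif keys
```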
The API produces JSON output for the data it maintains for each tracked key. When requesting stats for a key, the output is in the following format:
{
  "example.com": [
    {
      "timestamp": "2013-09-18T09:58:30",
      "n_req": 76,
      "n_req_uniq": "NaN",
      "n_miss": 1,
      "avg_restarts": 0.000000,
      "n_bodybytes": 10950,
      "ttfb_miss": 0.000440,
      "ttfb_hit": 0.000054,
      "resp_1xx": 0,
      "resp_2xx": 76,
      "resp_3xx": 0,
      "resp_4xx": 0,
      "resp_5xx": 0
    },
    {
      "timestamp": "2013-09-18T09:58:00",
      "n_req": 84,
      "n_req_uniq": "NaN",
      "n_miss": 0,
      "avg_restarts": 0.000000,
      "n_bodybytes": 12264,
      "ttfb_miss": "NaN",
      "ttfb_hit": 0.000048,
      "resp_1xx": 0,
      "resp_2xx": 84,
      "resp_3xx": 0,
      "resp_4xx": 0,
      "resp_5xx": 0
    }
  ]
}
For the above example, the key is `example.com` and the vstatd instance was configured with 30 second time buckets.
Each time bucket contains the following fields:
Field | Description |
---|---|
`timestamp` | The timestamp for the start of the bucket’s period. |
`n_req` | The number of requests. |
`n_req_uniq` | The number of unique requests, if configured. This counter will increase once for each distinct value of vcs-unique-id encountered, configured in VCL. See https://info.varnish-software.com/blog/getting-live-statistics-varnish-hlshds for an example use case. |
`n_miss` | The number of backend requests (i.e. cache misses). The number of hits can be calculated as `n_hit = n_req - n_miss`. |
`avg_restarts` | The average number of VCL restarts triggered per request. |
`n_bodybytes` | The total number of bytes transferred for the response bodies. |
`ttfb_miss` | Average time to first byte for requests that resulted in a backend request. |
`ttfb_hit` | Average time to first byte for requests that were served directly from the Varnish cache. |
`resp_1xx` … `resp_5xx` | Counters for response status codes. |
`reqbytes` | Number of bytes received from clients. |
`respbytes` | Number of bytes transmitted to clients. |
`berespbytes` | Number of bytes received from backends. |
`bereqbytes` | Number of bytes transmitted to backends. |
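As the table notes, hits are derived rather than reported directly. A small Python sketch of working with the per-key output shown earlier, computing the hit rate per bucket (the JSON is abbreviated to the fields used):

```python
import json

# Abbreviated per-key output in the format shown above (two time buckets).
raw = """
{"example.com": [
  {"timestamp": "2013-09-18T09:58:30", "n_req": 76, "n_miss": 1},
  {"timestamp": "2013-09-18T09:58:00", "n_req": 84, "n_miss": 0}
]}
"""

for key, buckets in json.loads(raw).items():
    for bucket in buckets:
        # Hits are derived per the field table: n_hit = n_req - n_miss.
        n_hit = bucket["n_req"] - bucket["n_miss"]
        hit_rate = n_hit / bucket["n_req"] if bucket["n_req"] else 0.0
        print("%s %s: %.1f%% hit rate" % (key, bucket["timestamp"], 100 * hit_rate))
```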
For top lists, the output is in the following format:
{
  "example.com": 327,
  "MISS": 168,
  "HIT": 159,
  "example.com/img.png": 37,
  ...
}
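In the example above, the object members appear in ranked order, so a client can simply iterate over them. A minimal sketch, assuming output like the example:

```python
import json

# Top-list output in the format shown above; member order is the ranking.
raw = '{"example.com": 327, "MISS": 168, "HIT": 159, "example.com/img.png": 37}'

# json.loads preserves member order (Python dicts keep insertion order),
# so iterating yields the keys from most to least requested.
for rank, (key, count) in enumerate(json.loads(raw).items(), start=1):
    print("%2d. %-25s %d" % (rank, key, count))
```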
The URL `/status` produces a JSON object containing a few simple counters:
{
  "uptime": 2133,
  "n_keys": 358,
  "n_trans": 483,
  "db_mem_usage": 1913328
}
Field | Description |
---|---|
`uptime` | The number of seconds elapsed since VCS was launched. |
`n_keys` | The number of keys we are currently tracking. |
`n_trans` | The number of transactions that have been processed by VCS. |
`db_mem_usage` | The amount of memory consumed for storage, in bytes. |
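A small polling sketch along the same lines; the address is again an assumption:

```python
import json
import urllib.request

# Assumed API address; adjust to match your VCS configuration.
with urllib.request.urlopen("http://localhost:6555/status") as resp:
    status = json.load(resp)

print("uptime:       %d s" % status["uptime"])
print("tracked keys: %d" % status["n_keys"])
print("transactions: %d" % status["n_trans"])
print("db memory:    %.1f MiB" % (status["db_mem_usage"] / (1024 * 1024)))
```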
The output can also be presented in JSONP format, with a JavaScript function call wrapped around it. This is done by adding the query parameter `?callback=myFunction` to the URL. `myFunction` must be a valid ASCII JavaScript identifier.
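If a client fetches the JSONP form but still needs the plain JSON, the wrapper can be stripped after the fact. A minimal sketch, where the address and callback name are assumptions:

```python
import json
import re
import urllib.request

# Request the status counters wrapped in a JSONP callback.
url = "http://localhost:6555/status?callback=myFunction"
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode("utf-8")

# The body is a JavaScript call of the form myFunction({...});
# strip the wrapper to recover the JSON payload.
inner = re.match(r"^\s*myFunction\((.*)\)\s*;?\s*$", body, re.DOTALL)
print(json.loads(inner.group(1)))
```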
Due to limitations in the current varnishapi, support for ESI requests is very limited. Until Varnish 4.0, we do not recommend using VCS with ESI requests. If you are using ESI, wrap the key definitions inside an `if (req.esi_level == 0)` block, e.g.:
import std;

sub vcl_deliver {
    # Only log VCS keys for the top-level request, not for ESI subrequests.
    if (req.esi_level == 0) {
        std.log("vcs-key:" + req.http.host);
    }
}
The summed window sorting parameter (the `b=<n>` query parameter) is not available for the `n_req_uniq` stat.
Some example queries:

Retrieve stats for the key example.com: `/key/example.com`

Retrieve a list of the top 5 requested keys in the previous window: `/all/top/5`

Retrieve a list of the top 5 requested keys, summed over the previous 3 windows: `/all/top/5?b=3`

For keys with names ending with `.gif`, retrieve a list of the top 10: `/match/(.*)%5C.gif$/top`

Find a list of the top 50 slowest requests: `/all/top_ttfb/50`

Find a list of the top 50 slowest backend requests, in JSONP format: `/all/top_ttfb/50?callback=myfunc`