Varnish Custom Statistics



vcs [-h] [-m num_buckets] [-b bucket_len] [-p param=value] [-a port] [-s tbl_len] [-z port] [-l acl] [-O [host]:port] [-P file] [-u dir] [-F] [-g n] [-V] [-Z] [-K file] [-d domain]


The vcs utility consumes and aggregates log records from a set of vcs-agent(8) instances. The resulting data is made available through an HTTP API, which presents it in JSON format.

The records are aggregated based on keys specified via the vcs-agent(8) utility, and organized in time series buckets.


The following options are available:

-h Display help. Displays a brief list of vcs's options, along with default values.
-b seconds Shorthand for setting parameter bucket_len. See description below.
-m num_buckets Shorthand for setting parameter num_buckets. See description below.

-p param=value Set the specified run-time parameter to the given value. The available parameters are described below.
-z port Listening port for vcs-agent(8) instances.
-a port Listening port for the HTTP interface.
-l acl Access control list for the HTTP interface, specified as a comma-separated list of IP subnets, each prefixed with a '+' or '-' sign. Plus means allow, minus means deny. For example, to allow only the 192.168/16 subnet to connect, use "-,+". Supported only for IPv4 sessions.
-O [host]:port
Open a TCP connection to the specified endpoint and transmit finished buckets in JSON format for further handling/storage by 3rd party tools.
-P file Write the process's PID to the specified file.
-F Run in foreground. Output will be written to stdout/stderr.
-u dir Root directory for vcs ui files.
-g n Output debug information. Defaults to 0.
-V Show version and copyright information, then exit.
-Z Display timestamps in UTC.
-K file Credentials file for HTTP authentication.
-d domain htdigest HTTP authentication domain (or realm).


bucket_len
Length in seconds of each individual bucket in the time series tracked.
num_buckets
Number of buckets in a time series. The total duration of the tracked period is the number of buckets multiplied by the bucket length (bucket_len).
tbl_len
Hash table size. Internally, all keys are organized in a hash table. This value should be of roughly the same magnitude as the number of keys you are tracking.
Maximum backlog for messages waiting to be transmitted to the JSON output endpoint (-O). If this limit is reached, messages are dropped.
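To make the relationship between these parameters concrete, here is a minimal Python sketch (the helper name and numbers are illustrative, not defaults):

```python
def total_period(num_buckets, bucket_len):
    """Total duration in seconds covered by the time series:
    the number of buckets multiplied by the bucket length."""
    return num_buckets * bucket_len

# e.g. 10 buckets of 30 seconds each track a 5-minute window
print(total_period(10, 30))  # 300
```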


The API responds to requests for the following URLs:

Output key filtering


Retrieves stats for a single key. Key name must be URL encoded.


/match/<r> Retrieves a list of keys matching the URL-encoded regular expression <r>. Accepts the query parameter verbose=1, which displays all stats collected for the matched keys.
/all Retrieves a list of all the keys we are currently tracking. Like /match, this also accepts verbose=1 for verbose output.
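Key names and regular expressions must be URL encoded before they are placed in the request path. A small Python sketch of building a /match request path (the helper name is made up; the path layout follows the description above):

```python
from urllib.parse import quote

def match_url(regex, verbose=False):
    """Build a /match request path for the given regular expression.
    URL encoding keeps characters such as '*', '\\' and '$' safe
    inside the request path."""
    path = "/match/" + quote(regex, safe="")
    if verbose:
        path += "?verbose=1"
    return path

print(match_url(r".*\.gif$"))  # /match/.%2A%5C.gif%24
```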

Top lists

For /match/<r> and /all, vcs can produce sorted lists of keys. The command appended to the URL selects the sorting criterion:

/all/top
Sort based on number of requests.
Sort based on the ttfb_miss field.
Sort based on the n_bodybytes field.
Sort based on the n_miss field.
Sort based on number of bytes transmitted to clients.
Sort based on number of bytes received from clients.
Sort based on number of bytes fetched from backends.
Sort based on number of bytes transmitted to backends.
Sort based on the avg_restarts field.
/all/top_5xx, /all/top_4xx, ..., /all/top_1xx
Sort based on number of HTTP response codes returned to clients, in buckets for 1xx, 2xx, 3xx, etc.
Sort based on the n_req_uniq field.

Note that in the above, you can substitute /all with /match/<r>, to limit the listing to only apply to a specific set of keys.

Further, a /<k> path segment can be appended to specify the number of keys to include in the top list. If no k value is provided, the top 10 are displayed.

Summed window sorting

By default, sorting is based on the data in the single latest window of the time series. To make it consider multiple windows, the query parameter b=<n> can be specified. For example, /all/top?b=5 produces a top list sorted by the combined number of requests over the 5 latest windows.
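Putting the pieces together, a top-list request path can be assembled as follows (a hedged Python sketch; the exact composition of the /<k> segment and the b query parameter is assumed from the descriptions above):

```python
def top_url(k=None, b=None):
    """Build an /all/top request path. k limits the number of
    keys in the list (the server defaults to 10); b sums the
    n latest windows before sorting."""
    path = "/all/top"
    if k is not None:
        path += "/%d" % k
    if b is not None:
        path += "?b=%d" % b
    return path

print(top_url(b=5))       # /all/top?b=5
print(top_url(k=5, b=3))  # /all/top/5?b=3
```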


The API produces JSON output for the data it maintains for each tracked key. When requesting stats for a key, the output is in the following format:

 "": [
         "timestamp": "2013-09-18T09:58:30",
         "n_req": 76,
         "n_req_uniq": "NaN",
         "n_miss": 1,
         "avg_restarts": 0.000000,
         "n_bodybytes": 10950,
         "ttfb_miss": 0.000440,
         "ttfb_hit": 0.000054,
         "resp_1xx": 0,
         "resp_2xx": 76,
         "resp_3xx": 0,
         "resp_4xx": 0,
         "resp_5xx": 0
         "timestamp": "2013-09-18T09:58:00",
         "n_req": 84,
         "n_req_uniq": "NaN",
         "n_miss": 0,
         "avg_restarts": 0.000000,
         "n_bodybytes": 12264,
         "ttfb_miss": "NaN",
         "ttfb_hit": 0.000048,
         "resp_1xx": 0,
         "resp_2xx": 84,
         "resp_3xx": 0,
         "resp_4xx": 0,
         "resp_5xx": 0

For the above example, the key is "" and the vcs instance was configured with 10 second time buckets.
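Note that missing values are emitted as the JSON string "NaN" rather than as numbers, so consumers must handle them explicitly. A Python sketch of reading one bucket (the helper name is made up; n_hit is derived as described under n_miss below):

```python
import json

bucket = json.loads("""{
    "timestamp": "2013-09-18T09:58:30",
    "n_req": 76,
    "n_req_uniq": "NaN",
    "n_miss": 1
}""")

def num(value):
    """Map the "NaN" placeholder to None; pass numbers through."""
    return None if value == "NaN" else value

n_hit = bucket["n_req"] - bucket["n_miss"]  # hits = requests - misses
print(n_hit)                      # 75
print(num(bucket["n_req_uniq"]))  # None
```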

Each time bucket contains the following fields:

timestamp
The timestamp for the start of the bucket's period.
n_req
The number of requests.
n_req_uniq
The number of "unique" requests, if configured. This counter will increase once for each distinct value of "vcs-unique-id" encountered, configured in VCL. See for an example use case for this.
n_miss
Number of backend requests (i.e. cache misses). The number of hits can be calculated as n_hit = n_req - n_miss.
avg_restarts
The average number of VCL restarts triggered per request.
n_bodybytes
The total number of bytes transferred for the response bodies.
ttfb_miss
Average time to first byte for requests that resulted in a backend request.
ttfb_hit
Average time to first byte for requests served directly from the varnish cache.
resp_1xx - resp_5xx
Counters for response status codes.
Number of bytes received from clients.
Number of bytes transmitted to clients.
Number of bytes received from backends.
Number of bytes transmitted to backends.

For top lists, the output is in the following format:

    "": 327,
    "MISS": 168,
    "HIT": 159,
    "": 37,


The URL /status produces a JSON object containing a few simple counters:

    "uptime": 2133,
    "n_keys": 358,
    "n_trans": 483,
    "db_mem_usage": 1913328
  • 'uptime' is the number of seconds elapsed since VCS was launched.
  • 'n_keys' is the number of keys we are currently tracking.
  • 'n_trans' is the number of transactions that have been processed by VCS.
  • 'db_mem_usage' is the amount of memory consumed for storage, in bytes.
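As an illustration, a consumer might derive a rough per-key storage cost from these counters (a Python sketch using the example values above):

```python
import json

status = json.loads('{"uptime": 2133, "n_keys": 358, '
                    '"n_trans": 483, "db_mem_usage": 1913328}')

# Average storage cost per tracked key, in bytes (integer division)
bytes_per_key = status["db_mem_usage"] // status["n_keys"]
print(bytes_per_key)  # 5344
```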


The output can also be presented in JSONP format, with a JavaScript function call wrapped around it, by adding the query parameter callback=myFunction to the URL. myFunction must be a valid ASCII JavaScript identifier.
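A client consuming the JSONP form has to validate the callback name and strip the wrapper before parsing. A hedged Python sketch (the helper names are made up):

```python
import json
import re

def valid_callback(name):
    """True if name is a plausible ASCII JavaScript identifier."""
    return re.fullmatch(r"[A-Za-z_$][A-Za-z0-9_$]*", name) is not None

def unwrap_jsonp(payload):
    """Strip the function-call wrapper from a JSONP response,
    e.g. 'myFunction({...})' -> the inner JSON object."""
    inner = payload[payload.index("(") + 1:payload.rindex(")")]
    return json.loads(inner)

print(valid_callback("myFunction"))               # True
print(unwrap_jsonp('myFunction({"n_keys": 358})'))
```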



Due to limitations in the current varnishapi, support for ESI requests is very limited. Until Varnish 4.0, we do not recommend using VCS with ESI requests. If you are using ESI, wrap the key definitions inside an if (req.esi_level == 0) block, e.g.:

sub vcl_deliver {
  if (req.esi_level == 0) {
    std.log("vcs-key:" + req.http.host);
  }
}

Summed window sorting and n_req_uniq

Summed window sorting (the b=<n> query parameter) is not available for the n_req_uniq stat.


Retrieve stats for key named "":


Retrieve a list of the top 5 requested keys in the previous window:


Retrieve a list of the top 5 requested keys, summed over the previous 3 windows, in JSONP format:


For keys with names ending with '.gif', retrieve a list of the top 10:


Find a list of the top 50 slowest backend requests, in JSONP format:



This document was written by Dag Haavi Finstad <>.