Varnish Cache Plus

Synthetic backends (synthbackend)

Varnish 6.0

Description

Varnish can produce synthetic responses out of the box, but these objects aren't inserted into the cache, and they aren't processed the same way as backend responses (no ESI or gzip processing, for example). This vmod inserts synthetic objects at the beginning of the fetch pipeline, allowing you to store and manipulate them exactly like backend-generated responses.
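
As a minimal sketch (the /status URL, the response text and the backend address are illustrative, not part of the vmod), a fixed body can be served and cached straight from VCL:

vcl 4.0;

import synthbackend;

backend default {
	.host = "192.0.2.1";
}

sub vcl_backend_fetch {
	# illustrative status URL: answer with a fixed synthetic body that
	# is cached and processed like any backend-generated response
	if (bereq.url == "/status") {
		set bereq.backend = synthbackend.from_string("All good\n");
	}
}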

Example VCL

Pushing a file into the cache

vcl 4.0;

import synthbackend;

backend s1 {
	.host = "1.1.1.1";
}

sub vcl_recv {
	# because Varnish will reset the method to "GET", we have to store
	# the POST information in a header
	unset req.http.post-method;
	if (req.method == "POST") {
		set req.http.post-method = "yes";
		return (hash);
	}
}

sub vcl_backend_fetch {
	# if the request was a POST, restore the method and use the "mirror"
	# backend, which sends the request body back as the response body
	if (bereq.http.post-method == "yes") {
		set bereq.method = "POST";
		set bereq.backend = synthbackend.mirror();
		return (fetch);
	}
}

curl can then be used to push content:

curl 'http://example.com/path/to/file.html' --data-binary @file.html
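
Once stored, the object is served from the cache like any other: for as long as it stays cached, a plain GET to the same URL returns the pushed content:

curl 'http://example.com/path/to/file.html'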

Triggering a multi-URL pre-warm

It’s also possible to tell Varnish to fetch a list of URLs to prime the cache.

In this example, vmod_xbody is used to transform the provided list of URLs:

/path/to/page/1
/path/to/page/2
/path/to/page/3
...

into

{{ > /path/to/page/1 }}
{{ > /path/to/page/2 }}
{{ > /path/to/page/3 }}
...

which is vmod_edgestash syntax to trigger subrequests.

In this example, we push the URL file with the LOAD method:

curl 'http://example.com/' --data-binary @url.list -X LOAD

The corresponding VCL:

vcl 4.0;

import edgestash;
import synthbackend;
import xbody;

backend s1 {
	.host = "1.1.1.1";
}

sub vcl_recv {
	unset req.http.warm-state;
	# set the warm-state header: "top" for the initial request, "sub" for
	# subrequests to be cached
	if (req_top.method == "LOAD") {
		if (req.esi_level == 0) {
			set req.http.warm-state = "top";
			# we don't want the request cached, only its subrequests
			return(pass);
		} else {
			set req.http.warm-state = "sub";
			# convert the request into a HEAD one to avoid sending
			# the body back
			set req.method = "HEAD";
			return(hash);
		}
	}
}

sub vcl_backend_fetch {
	# only use mirror for the top request
	if (bereq.http.warm-state == "top") {
		set bereq.backend = synthbackend.mirror();
	} else {
		set bereq.backend = s1;
	}
}

sub vcl_backend_response {
	# rewrite each line of the URL list into an edgestash partial and
	# parse the result so it can be executed at delivery time
	if (bereq.http.warm-state == "top") {
		xbody.regsub("(.*)\n?", "{{ > \1 }}");
		edgestash.parse_response();
	}
}

sub vcl_deliver {
	# trigger the edgestash processing
	if (req.http.warm-state == "top") {
		edgestash.execute();
		return (deliver);
	}
}

API

.from_blob() / .from_string()

BACKEND from_blob(BLOB)

BACKEND from_string(STRING)

Take a BLOB or STRING as argument and return a backend that responds 200 OK with the argument as the body. The two functions are largely equivalent; from_string will cover most use cases, with from_blob as the necessary fallback when the body is binary data.

Note: these functions must be called from vcl_backend_fetch.
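
A from_blob sketch, assuming vmod_blob is available to produce the BLOB; the URL and the base64 payload are placeholders standing in for real binary content:

import blob;
import synthbackend;

sub vcl_backend_fetch {
	# placeholder URL; the decoded bytes become the response body
	if (bereq.url == "/demo") {
		set bereq.backend = synthbackend.from_blob(
		    blob.decode(BASE64, encoded="SGVsbG8sIHdvcmxkIQ=="));
	}
}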

.mirror()

BACKEND mirror()

Returns a backend that reflects the request body as the response body. If bereq.http.content-encoding is present, it is copied to beresp.http.content-encoding.
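
For example, with the VCL from the first example above, pre-compressed content can be pushed and stored with its encoding intact (the file names are illustrative):

curl 'http://example.com/path/to/file.html' -H 'Content-Encoding: gzip' --data-binary @file.html.gz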