Varnish Enterprise 6.0.13r8 Release

Published November 14, 2024.

About the release

This release introduces new features in vmod S3 and parallel ESI, as well as a significant performance and resource-usage optimization in MSE4 when using ykey, among other things. Instances using a persistent cache with MSE4 will need to empty the cache as part of the upgrade process due to an update to the book and store format. See the upgrade notes section for details.

The features and optimizations are highlighted below. For the complete list of changes, please see the Changelog.

New features

Custom claim reader in vmod-jwt

The JWT vmod is typically used when scalable stateless authentication and authorization is needed.

Tokens often come with custom claims, which can be parsed and read using the JSON vmod. This release adds support to read custom claims in the JWT vmod directly, which reduces the number of VCL lines needed.

The following example shows how a JWT can be parsed and verified, and how the custom claim called role can be read and used to create a request header.

vcl 4.1;

import jwt;

sub vcl_init {
	new jwt_reader = jwt.reader();
}

sub vcl_recv {
	# Parse the token provided in the Authorization header
	if (!jwt_reader.parse(req.http.Authorization)) {
		return (synth(401, "Invalid Authorization Token"));
	}

	# Set the secret used to verify the signature
	if (!jwt_reader.set_key("changeme")) {
		return (synth(401, "Invalid Authorization Token"));
	}

	# Verify the token using the HS256 algorithm
	if (!jwt_reader.verify("HS256")) {
		return (synth(401, "Invalid Authorization Token"));
	}

	# Get the claim named "role" and put its value into the request header
	# named Role. If the claim does not exist, use the value "none".
	set req.http.Role = jwt_reader.get_claim("role", "none");
}

Other methods introduced in this release are get_claim_string(), get_claim_bool(), get_claim_number() and get_claim_integer(), each of which parses the claim and returns it as the corresponding data type.
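
The sketch below builds on the example above and shows how the typed getters might be used. It assumes they take the same (claim name, fallback value) arguments as get_claim(); the claim names "admin" and "exp" and the header names are purely illustrative.

sub vcl_recv {
	# Read a boolean claim; fall back to false if it is absent (assumed signature).
	if (jwt_reader.get_claim_bool("admin", false)) {
		set req.http.Is-Admin = "true";
	}

	# Read an integer claim; fall back to 0 if it is absent (assumed signature).
	set req.http.Token-Expiry = jwt_reader.get_claim_integer("exp", 0);
}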

For more information, please refer to the documentation for the JWT vmod.

Parallel ESI can now limit concurrency

The obvious benefit of parallel ESI is its ability to fetch from the origin in parallel. This reduces latency and improves the user experience as long as the origin can keep up with the amount of parallel work. If the origin cannot keep up, the result can be overload on the origin and, ultimately, delivery problems.

This release introduces a limit that determines the maximum number of fetches in flight at each ESI level for each single delivery. The purpose of the limit is to mitigate overloading the origin even with heavy ESI usage.

The limit is set using the parameter esi_limit which has the default value of 10:

-p esi_limit=10

Fetches that would exceed this limit are deferred and executed as soon as the number of fetches in flight drops below the limit.
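
As a sketch, the parameter can be set on the varnishd command line, and, like most parameters, it can normally also be adjusted at runtime through varnishadm. The listen address, VCL path, and limit value below are illustrative:

# Set the limit at startup (example values).
varnishd -a :6081 -f /etc/varnish/default.vcl -p esi_limit=4

# Adjust the limit at runtime, assuming the parameter is runtime-settable.
varnishadm param.set esi_limit 4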

For more information, please refer to the documentation for parallel ESI.

Parallel ESI support for the onerror attribute

Parallel ESI now supports the onerror attribute in <esi:include/> tags, which allows the user to specify the behavior when Varnish fails to fetch an ESI fragment. A fragment is considered failed if an error occurs during the backend fetch, or if the response status for the fragment is neither 200 nor 204.

The feature is enabled using the following parameter to Varnish:

-p feature=+esi_include_onerror

When this feature is enabled, Varnish will look for the onerror attribute in the <esi:include/> tags.
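
Feature flags can normally also be toggled at runtime through varnishadm; a minimal sketch, assuming this flag is runtime-settable:

varnishadm param.set feature +esi_include_onerror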

Examples:

  • A failure to fetch the following ESI include will abort the parent request.

    <esi:include src="..." onerror="abort"/>
    
  • A failure to fetch the following ESI include will allow ESI processing to continue and serve the parent request in full, including the failed fragment. This is the default behavior if the feature is not enabled or the onerror attribute is not specified.

    <esi:include src="..." onerror="continue"/>
    

For more information, please refer to the documentation for parallel ESI.

AWSv4 signatures on arbitrary backend requests

The S3 vmod can already sign its own backend requests with AWSv4 signatures. This release introduces support for signing backend requests handled by arbitrary types of backends and directors. The vmod takes care of setting the necessary headers, including the Authorization header, and you can then set the backend of your choice. Example:

vcl 4.1;

import s3;
import std;

sub vcl_init {
	new signer = s3.signer();
	signer.set_region("region");
	signer.set_access_key("<your_key>", "<your_key_id>");
}

sub vcl_backend_fetch {
	set bereq.backend = <some backend>;
	set bereq.http.Host = "example.com";
	if (!signer.sign()) {
		std.log("signing failed, check log");
	}
}

When running inside AWS EC2, the AWSv4 signature can be generated using an IAM role instead of an access key and secret key. For more information, please refer to the documentation for vmod S3.

Quality of Service control with IPv6 support

Varnish Enterprise now supports setting the TOS field to prioritize traffic for sessions over both IPv4 and IPv6. This is ideal for, e.g., set-top box (STB) devices that need QoS. Example:

vcl 4.1;

import std;

sub vcl_recv {
	if (req.url ~ "QoS=True") {
		# TOS value 104 corresponds to DSCP value 26, which indicates
		# prioritized traffic
		std.set_ip_tos(104);
	}

	if (req.url ~ "^/slow/") {
		# TOS value 0 for default behavior and no special QoS treatment
		std.set_ip_tos(0);
	}
}

The TOS value is set on the session and therefore also applies to subsequent requests on the same session, unless the function is called again with a different value for those requests.

For more information, please refer to the documentation for vmod std.

Optimizations

MSE4 ykey handling

The processing of ykey invalidation requests has been significantly optimized, reducing both CPU and memory usage.

A multi-rooted, self-balancing tree is used to make key lookups much more efficient and memory consumption predictable. To reduce the memory consumption of the data structure, we use a novel approach that keeps the tree lean by omitting pointers to parent nodes. This design reduces memory consumption by 25% compared to a traditional non-lean tree, while lookup efficiency remains the same.

New per-book counters have been introduced for better transparency into resource usage and time spent on invalidations. The first new counter to highlight is g_ykey_bytes, which shows the amount of memory currently consumed by ykey:

  • MSE4_BOOK.<book_id>.g_ykey_bytes Number of bytes spent on Ykey search trees.

The second highlight is a set of counters that make up a histogram, which shows the distribution of processing time for cache invalidation and stat operations:

  • MSE4_BOOK.<book_id>.c_ykey_iter_10ms Number of ykey iterations taking 0.01s or less.
  • MSE4_BOOK.<book_id>.c_ykey_iter_100ms Number of ykey iterations taking between 0.01s and 0.1s.
  • MSE4_BOOK.<book_id>.c_ykey_iter_1000ms Number of ykey iterations taking between 0.1s and 1s.
  • MSE4_BOOK.<book_id>.c_ykey_iter_1000ms_up Number of ykey iterations taking more than 1s.

The majority of iterations will normally take 0.01s or less; however, iterations that match many objects will take longer.
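
As a minimal sketch, the counters can be inspected with varnishstat; the glob patterns below simply match all MSE4 books:

# Show the current ykey memory usage and the invalidation time histogram.
varnishstat -1 -f "MSE4_BOOK.*.g_ykey_bytes" -f "MSE4_BOOK.*.c_ykey_iter_*"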

For more information, please refer to the documentation for MSE4.

Upgrade notes

Recreate MSE4 books and stores

The optimized ykey handling in this release requires that any MSE4 books and stores be recreated with the newest file device version. The recreation empties the cache in the process. If Varnish is started with the old books and stores, the following log messages will appear and the startup will be aborted:

MSE4: [book,"book1",/var/lib/mse/book1] Invalid file device version (has 4020.1, expected 4022.1)
MSE4: File device version mismatch

The following steps can be taken to recreate the books and stores as part of the upgrade process:

# Ensure that Varnish is stopped.
sudo systemctl stop varnish

# Recreate the books and stores while Varnish is stopped.
sudo mkfs.mse4 -f -c /etc/varnish/mse4.conf configure

# Start Varnish again.
sudo systemctl start varnish

Recompile custom vmods

If custom (or third-party) vmods are used, we generally recommend recompiling them whenever Varnish is updated. When upgrading to this release, recompilation is required because the ABI (application binary interface) version number has been increased. If custom vmods are not recompiled, the following message will appear during startup:

ABI mismatch,
expected <Varnish Plus 6.0.13r8 55dd7de87f1ffd14da928f3dfc7d2d8c0694b577>,
got <Varnish Plus 6.0.13r7 9d6cb9496d3318198c87a150985a31446794e787>

The ABI change is fully backwards compatible, and no source code changes will be needed in custom vmods.
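
As a rough sketch, an autotools-based vmod can typically be rebuilt and reinstalled as follows; the source path is a placeholder and the exact steps depend on the vmod's build system:

# Rebuild and reinstall the vmod against the upgraded Varnish development headers.
cd /path/to/vmod-source
./autogen.sh
./configure
make
sudo make install

# Restart Varnish so the freshly built vmod is loaded.
sudo systemctl restart varnish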
