New features:
vmod_jwt reader now has get methods for custom claim fields. See the vmod_jwt manual page for more information.
New feature flag esi_include_onerror that enables support for the onerror="continue" attribute of <esi:include/> tags.
When this feature flag is set, the failure of an included fragment aborts the parent request, unless the onerror="continue" attribute was set on the failing fragment. A fragment is considered failed if an error occurs during the backend fetch, or if the response status for the fragment is neither 200 nor 204. When the feature flag is not set, the default behaviour remains the same, i.e. all fragments are included regardless of the outcome of their backend fetches.
Note that enabling this feature when you have cached ESI objects in persistent storage from a previous release is not supported and will result in undefined behavior.
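As a minimal sketch, assuming the flag is toggled through the standard feature parameter, it would be enabled from the CLI and the attribute placed on the include tag itself (the fragment path is a placeholder):
param.set feature +esi_include_onerror
<esi:include src="/fragment.html" onerror="continue"/>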
Parallel ESI processing is now subject to a new parameter esi_limit. This parameter determines the maximum number of includes in flight at each ESI level for a single delivery. The default value of 10 provides a theoretical maximum of 50 simultaneous subrequests with the default max_esi_depth limit of 5.
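A sketch of tuning the limit from the CLI, assuming the usual parameter syntax (the value is arbitrary):
param.set esi_limit 4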
vmod_s3 now has a signer object that can be used to sign backend requests without using a director. See the vmod_s3 manual page for more information.
The MSE4 data format version has changed with this release in order to optimize Ykey handling.
MSE4 Ykey handling of persisted objects has been significantly improved.
An MSE4 configuration parameter has been deprecated and removed. In the Book sections, the key ykey_buckets is no longer recognized as a valid key.
The default value of the startup_timeout parameter has been increased from 60 seconds to 10 minutes. This ensures that startup sequences which legitimately take longer than a minute are not cut short.
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
A new VMOD feature has been introduced that allows VMOD developers to limit the set of VCL subroutines from which a VMOD function or method can be called. Using a restricted VMOD function or method outside its allowed set of VCL subroutines is detected by the VCL compiler and fails the compilation of that VCL. For example, if a VMOD function is restricted to vcl_recv and a VCL calls it in vcl_backend_fetch, that VCL will fail to compile and will not be loaded. (3915)
This feature has been applied to all existing VMODs. Starting from this release, VCL configurations that used to compile without errors but misuse VMOD functions or methods will no longer compile, even if the call appears in a part of the VCL that is never executed. All restrictions can be found in the respective VMOD documentation or man page.
A temporary debug flag vcc_lenient_restrict has been added in this release to turn restriction violations into warnings and allow the VCL to load even though a function or method is used outside its allowed set of subroutines. The debug flag can be enabled from the CLI as follows:
param.set debug +vcc_lenient_restrict
or by adding the following to the varnishd command line:
-p debug=+vcc_lenient_restrict
Please keep in mind that this debug flag is only meant as a workaround and will be removed in a future release. If your VCL no longer compiles with this release, you should review it and fix all reported violations.
New features:
New parameter slicer_excess_ratio, which controls how large the final segment of a response may be during Slicer processing.
vmod_udo director health can now be checked from client context with std.healthy(). The director will be considered healthy if any of the backends in the director report as healthy.
vmod_mmdb can now skip IP address lookups when several keys are looked up consecutively for the same IP address.
Better error messages for thread pool tweaks. For example, if increasing thread_pool_min fails because it would exceed thread_pool_max, this is now mentioned in the error message. (3099)
The ruleset objects in vmod_rewrite have a new .field() method to extract a specific field after matching a rule. This can be used as a more readable alternative to "only-matching" rewrites.
New -Q option to read VSL queries from a file in varnishlog, varnishncsa and other logging utilities. A VSL query file can have one query per line, blank lines and comments starting with a #.
The classic -q option also accepts multi-line queries. When more than one query is present, this is equivalent to having a single query that is an or expression of all the queries.
A VSL query file can look like this:
$ cat common-errors.vslq
# All error tags
*Error
# HTTP server errors
*Status >= 500
# Custom backend errors
BerespHeader:X-Custom-Error
There can be multiple -Q and -q options. They are again treated like an or expression of all queries:
varnishlog -Q common-errors.vslq -q 'RespStatus == 403'
For very long queries that should span multiple lines for legibility, or for existing multi-line queries, use a backslash at the end of a line to continue the query on the next line. (3001)
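A sketch of such a continued query on the command line (the query itself is just an example):
varnishlog -q 'ReqHeader:Host eq "example.com" and \
    RespStatus >= 500'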
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Added mitigation options and visibility for HTTP/2 "rapid reset" attacks (CVE-2023-44487).
Global rate limit controls have been added as parameters, which can be overridden per HTTP/2 session from VCL using the new vmod h2:
The h2_rapid_reset parameter and h2.rapid_reset() function define a threshold duration for an RST_STREAM to be classified as "rapid": If an RST_STREAM frame is parsed sooner than this duration after a HEADERS frame, it is accounted against the rate limit described below.
The default is one second.
The h2_rapid_reset_limit parameter and h2.rapid_reset_limit() function define how many "rapid" resets may be received during the time span defined by the h2_rapid_reset_period parameter / h2.rapid_reset_period() function before the HTTP/2 connection is forcibly closed with a GOAWAY and all ongoing VCL client tasks of the connection are aborted.
The defaults are 100 and 60 seconds, corresponding to an allowance of 100 "rapid" resets per minute.
The h2.rapid_reset_budget() function can be used to query the number of currently allowed "rapid" resets.
Sessions closed due to rapid reset rate limiting are reported as SessClose RAPID_RESET in vsl(7) and accounted to main.sc_rapid_reset in vsc as visible through varnishstat(1).
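As a hedged sketch of how these controls could be combined, the global limits can be adjusted as regular parameters and the remaining budget inspected from VCL; the values, the parameter value syntax and the use of std.log are assumptions, not taken from the manual:
param.set h2_rapid_reset 0.5
param.set h2_rapid_reset_limit 50
param.set h2_rapid_reset_period 30

vcl 4.1;
import h2;
import std;

sub vcl_recv {
    # Log when the remaining allowance of "rapid" resets for this
    # HTTP/2 session runs low (threshold chosen arbitrarily).
    if (h2.rapid_reset_budget() < 10) {
        std.log("h2 rapid reset budget running low");
    }
}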
The vcl_req_reset feature (controllable through the feature parameter, see varnishd(1)) has been added and enabled by default to terminate client side VCL processing early when the client is gone.
req_reset events trigger a VCL failure and are reported to vsl(7) as Timestamp: Reset and accounted to main.req_reset in vsc as visible through varnishstat(1).
In particular, this feature is used to reduce resource consumption of HTTP/2 "rapid reset" attacks.
Note that req_reset events may lead to client tasks for which no VCL is ever called. Presumably, this is thus the first time that valid vcl(7) client transactions may not contain any VCL_call records.
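Because it is controlled through the regular feature parameter, the behaviour can be switched off if needed; a sketch of disabling it from the CLI:
param.set feature -vcl_req_reset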
Bugs fixed:
Bugs fixed:
New features:
MSE 3 has a new fault tolerance facility which allows it to continue running with a subset of its configuration in the event of hardware failure.
See the varnish-mse(7) manual page for more information.
New commands socket.open, socket.close and socket.list for the CLI. They can be used to decide whether to accept or refuse new client traffic. When sockets are closed, ongoing requests are processed until they complete, but new requests are refused and connections are closed. For now, it is only possible to open or close all listen sockets.
See the varnish-cli(7) manual page for more information.
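A sketch of a maintenance drain using the new commands through varnishadm (any other CLI connection works equally well):
varnishadm socket.list
varnishadm socket.close
# ... perform maintenance while new client traffic is refused ...
varnishadm socket.open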
The old parameters timeout_req and timeout_reqbody from Varnish Plus 4.0 are back due to popular demand. timeout_reqbody is disabled by default.
Refined statistics for the reasons why HTTP/2 sessions are closed. (3507)
New VTC tunnel command that acts as a proxy between two peers. A tunnel can pause and control how much data goes in each direction. It can also be used to trigger socket timeouts, possibly in the middle of protocol frames, without having to change how the peers are implemented.
The counter MAIN.http1_iovs_flush has been added to track the number of premature writev() calls due to an insufficient number of IO vectors.
vmod_jwt now supports the JWS header parameter kid, which can be used to select a JWK during verification and is used only for this purpose. An optional check_kid parameter has been added to the .verify() method to control whether the kid is used when verifying with JWKs.
See the vmod_jwt(3) manual page for more information.
The vmod_jwt method .verify() can now skip the timestamp validation checks of the exp and nbf claims. New boolean parameters check_exp and check_nbf control whether these checks are performed. The default behavior is still to validate the exp and nbf claims when present.
vmod_jwt can now generate and verify signatures of arbitrary data with new methods .generate_raw() and .verify_raw().
New vmod_utils function .hex2integer() can convert hexadecimal numbers to decimal numbers.
Better column alignment in the varnishscoreboard output.
The .length() and .is_<type>() functions in vmod_json now have an optional element argument to inspect the JSON context more deeply than the root node.
vmod-s3 now supports creating backends with arbitrary port numbers, enabling TLS for backends created with port 443.
vmod_jwt methods .set_key(), .generate(), .generate_raw(), and .to_string() now support base64url encoded HMAC keys (secrets). These methods now have an optional encoding parameter to indicate if the secret is encoded.
A warning message will be logged to standard output and syslog if the working directory is not mounted on tmpfs. (VS issue #560)
The error message received when the working directory is on a filesystem mounted with noexec has been improved. (3943)
Bugs fixed:
New features:
Bugs fixed:
With this release, the VRT version has been bumped from 6.8.0 to 6.9.0.
New features:
Backend tasks can now queue when the backend has reached its max_connections. This allows a task to wait for a connection to become available rather than failing immediately. The feature is enabled by setting both of the new parameters described below.
New parameters: backend_wait_timeout sets how long a task will wait; backend_wait_limit sets the maximum number of tasks that can wait.
These parameters can also be set as backend attributes .wait_timeout and .wait_limit.
New counters: backend_wait counts tasks that waited in the queue for a connection; backend_wait_fail counts tasks that waited in the queue but did not get a connection within the wait timeout.
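A minimal sketch of the backend-level attributes mentioned above (the host, port and values are placeholders):
backend be {
    .host = "192.0.2.10";
    .port = "8080";
    .max_connections = 100;
    # New attributes: allow up to 50 queued tasks, each waiting at most 5 seconds
    .wait_timeout = 5s;
    .wait_limit = 50;
}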
VCL backend probes gained an .expect_close boolean attribute. By setting it to false, backends which fail to honor Connection: close can be probed.
Notice that the probe .timeout needs to be reached for a probe with .expect_close = false to return. (3886)
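A sketch of a probe definition using the new attribute (the URL and timings are placeholders):
probe health {
    .url = "/healthz";
    .interval = 10s;
    .timeout = 5s;
    # Do not require the backend to close the connection after the probe response
    .expect_close = false;
}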
Backend health is checked once and cached per transaction in vmod_udo, and a new .reset() method can clear the transaction's cached health-check or exhaustion status.
A new .set_identity() method for vmod_udo directors enables manual identification of a Varnish node as one of the director's backends. This is an alternative to the dynamic .self_identify() approach. Another new method, .is_identified(), reports whether the Varnish server has been identified as one of the backends through either mechanism.
The .self_identify() method in vmod_udo takes a comma-separated list of identifiers to detect loops in a self-routing cluster.
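A hedged sketch of the manual identification described above; the director constructor and backend setup shown here are assumptions based on typical vmod_udo usage, not taken from this changelog:
vcl 4.1;
import udo;

sub vcl_init {
    new cluster = udo.director();
    # ... backends would be added here ...
    # Manually mark this Varnish node as one of the director's backends,
    # instead of relying on the dynamic .self_identify() mechanism:
    cluster.set_identity("node-a.example.com");
}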
varnishncsa: Add support for the -T option
The Slicer changed its behavior when serving a 304 Not Modified response: it now makes sure to visit all of the relevant segments also on a 304, so that they stay current in terms of LRU reordering.
varnishtest: Add new tls_config arguments sess_out and sess_in for persisting and resuming a TLS session.
varnishtest: Add a new tls.sess_reused command for use with expect, to query if a reused session was negotiated.
Add a .find() method to vmod_aclplus. This method returns the rule that was matched from the supplied ACL list.
Add a gauge function to vmod_kvstore. This function creates a key and sets its value.
A new .set_quick_ack() function in vmod-tcp can be used to send an immediate acknowledgement to clients that send small messages, without using TCP_NODELAY. A new .get_quick_ack() function can be used to retrieve the current setting. See the vmod_tcp(3) manual page for more information.
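A sketch of enabling quick acknowledgements for a client connection (the boolean argument form and the subroutine placement are assumptions):
vcl 4.1;
import tcp;

sub vcl_recv {
    # Ask the kernel to acknowledge small client messages immediately
    tcp.set_quick_ack(true);
}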
The rendering of counters made by varnishstat moved to libvarnishapi. As a result JSON rendering is now shared between varnishstat and vmod_stat.
This move changes the libvarnishapi.so soname, which might require a rebuild of third-party applications linking to it. Rebuilding should normally not be needed, as libvarnishapi should remain binary compatible, but care should be taken to verify this before upgrading.
vmod_jwt can now sign and verify signatures with elliptic curve (ECDSA) algorithms ES256, ES384, ES512. The method .set_jwk() allows JWKs with those algorithms and supports curves P-256, P-384, P-521.
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
New version of vmod_http. The new version allows for better connection reuse, caching of hostname lookups, and gives better performance when many simultaneous requests are active.
Note that this version will limit the protocols enabled in the underlying CURL execution engine to HTTP and HTTPS, and that HTTP/1.1 will be the preferred HTTP version. This can be changed by adjusting the corresponding Varnish runtime parameters (vmod_http_require_http and vmod_http_prefer_http_11).
The new version is VCL compatible with the previous version, and no adjustments to VCL programs are necessary.
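If the older behaviour is required, the runtime parameters mentioned above can be adjusted; a sketch, with the exact value syntax being an assumption:
param.set vmod_http_prefer_http_11 off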
The documentation of the ykey.stat_ functions has been improved, and some flags that yielded no data have been removed. The feature remains marked as experimental for now.
New features:
Introduce YKEY stat functions to query the cache for statistics. This feature is experimental, and some minor changes to the API are to be expected in coming releases.
Introduce libadns, an asynchronous DNS resolution library. VMODs can use this library to configure a domain to be actively resolved by the ActiveDNS service and set up callbacks to receive DNS updates each time the domain is resolved.
Introduce vmod_activedns, a VMOD to create dns_groups. A dns_group contains a set of rules for DNS resolution and a template for creating dynamic backends. Compatible VMODs can subscribe to updates from a dns_group in order to create dynamic backends.
Introduce support for subscribing to dns_group updates in vmod_udo, adding full support for dynamic backends.
Add functions .get_identifier(), .self_identify(), and .self_is_next() to vmod_udo. These functions are useful for implementing self-identification and self-routing, and work with both static and dynamic clusters.
Add experimental cluster.vcl. This VCL can be included to enable self-routing within a cluster, with optional partial replication across nodes. This VCL is subject to change in future releases.
Add prometheus format for dynamic udo backend names in vmod_stat
Add -delay argument to the varnishtest dns server
Add experimental headerplus.write_req0() function which can be used to make label VCLs start from a modified set of headers and URL.
Added utils.force_fresh(), which can be used to force a backend request to happen on a fresh connection.
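A sketch of forcing a fresh connection for a particular fetch (the subroutine placement is an assumption):
vcl 4.1;
import utils;

sub vcl_backend_fetch {
    # Do not reuse a pooled connection for this backend request
    utils.force_fresh();
}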
Adjusted counters in varnishstat to mitigate negative gauges being represented as very large values. This can happen when a gauge is decremented and its value is flushed before the prior increment was. A negative gauge is now represented as zero, since gauges must have been strictly positive before being decremented. This mostly happens under benchmarking conditions.
A new -r option and r key binding for varnishstat toggle between raw and adjusted gauges in the output. (VS issue #967)
Faster recycling of reusable backend connections during a fetch. (VS issue #1383)
Add call_backend_response option to probe-proxy.vcl.
Add remove_duplicate function to vmod_cookieplus.
Add first or last occurrence selection to cookieplus.get().
Add support for alternative Akamai Sureroute testobject in the Akamai connector by also intercepting the /akamai/sureroute-test-object.html endpoint.
The Slicer now asks for only the bytes needed from the stevedore to fulfill a range request rather than asking for the whole slice. This reduces the load on the disk when using the Slicer with the persisted MSE.
There is now TLS support in varnishtest. Read more about this by running man vtc and searching for TLS.
Bugs fixed:
Bugs fixed:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
Note: This release has been removed from the repositories due to a critical bug in the new Slicer feature.
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
Other changes:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs Fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
This release focuses heavily on bug fixes and performance improvements.
Varnish Shared Memory improvements
Other bugs:
Included vtree.h in the distribution for vmods and renamed the red/black tree macros from VRB_* to VRBT_* to disambiguate from the acronym for Varnish Request Body.
Added req.is_hitmiss and req.is_hitpass (2743)
Fix assigning <bool> == <bool> (2809)
Add error handling for STV_NewObject() (2831)
Fix VRT_fail for 'if'/'elseif' conditional expressions (2840)
Add VSL rate limiting (2837)
This adds rate limiting to varnishncsa and varnishlog.
For varnishtest -L, also keep VCL C source files.
Make it possible to change varnishncsa update rate. (2741)
Tolerate null IP addresses for ACL matches.
Many cache lookup optimizations.
Display the VCL syntax during a panic.
Update to the VCL diagrams to include hit-for-miss.
Fix a gzip data race
Akamai-connector can assume session timeouts
Kvstore locks
vmod-rewrite .add_rules() method can take a type (including "any")
New rule type "glob" in vmod-rewrite
Fix the startup waterlevel purge code in MSE3
Add dynamic VSC counters to KVStore
Add vmod-http body functions
vmod-http can copy req headers from CSV
Add function to create URL from a backend/director in vmod-http
VMOD_cookieplus: various fixes
New VMOD: synthbackend
Massive Storage Engine for Varnish Cache Plus 6.0
This is the first release of Varnish Cache Plus 6.0 that includes the Massive Storage Engine (MSE). See the varnish-mse manpage for configuration and usage details.
Please note the following:
The session accept failure counters now break down the detailed reason for accept failures, the sum of which continues to be counted in sess_fail.
This version is up-to-date with Varnish Cache 6.0.0
Varnish Cache Plus is an enhanced version of Varnish Cache. The 6.0 series was forked from Varnish Cache 6.0.0, and then features were ported from Varnish Cache Plus 4.1.
There are also several new features that are only available in 6.0.
All Plus only features are described on our docs web site.
Note that this version (6.0.0r0) was released in Limited Availability. It does not contain all the changes in Varnish Cache 6.0.0, but there are no major differences from the user's perspective. The next version will be up to date with Varnish Cache 6.0.0 and also contain additional fixes and improvements.
Varnish Cache Plus will fetch ESI (Edge Side Includes) in parallel. For pages with many ESI includes this can speed up page loading greatly.
All the features of Edgestash are available in Varnish Cache Plus 6.0.
Varnish Cache Plus supports backend SSL/TLS through the OpenSSL library. This is enabled in the same way as in previous versions.
Many Plus only VMODs have been brought to 6.0:
Also bundled are the following VMODs, collectively known as Varnish Modules:
The following features are exclusive to Varnish Cache Plus 6.0:
This new feature is the last piece in getting end-to-end encryption in Varnish Cache Plus, and, as far as we know, any HTTP cache. Enabling Varnish Total Encryption will make even in-memory data encrypted, and this will protect you against "data leak" bugs like Meltdown and Spectre.
The changelog below is identical with the Varnish Cache project. The list is not exhaustive, but should contain all major changes from the user's point of view.
Fixed implementation of the max_restarts limit: It used to be one less than the number of allowed restarts, it now is the number of return(restart) calls per request.
The cli_buffer parameter has been removed
Added back umem storage for Solaris descendants
The default for the new storage backend type (stevedore) now resolves to either umem (where available) or malloc.
Since varnish 4.1, the thread workspace as configured by workspace_thread was not used as documented, delivery also used the client workspace.
We are now taking delivery IO vectors from the thread workspace, so the parameter documentation is in sync with reality again.
Users who need to minimize memory footprint might consider decreasing workspace_client by workspace_thread.
The new parameter esi_iovs configures the amount of IO vectors used during ESI delivery. It should not be tuned unless advised by a developer.
Support Unix domain sockets for the -a and -b command-line arguments, and for backend declarations. This requires VCL >= 4.1.
return (fetch) is no longer allowed in vcl_hit {}, use return (miss) instead. Note that return (fetch) has been deprecated since 4.0.
Fix behaviour of restarts to how it was originally intended: Restarts now leave all the request properties in place except for req.restarts and req.xid, which need to change by design.
req.storage, req.hash_ignore_busy and req.hash_always_miss are now accessible from all of the client side subs, not just vcl_recv{}
obj.storage is now available in vcl_hit{} and vcl_deliver{}.
Removed beresp.storage_hint for VCL 4.1 (was deprecated since Varnish 5.1)
For VCL 4.0, compatibility is preserved, but the implementation is changed slightly: beresp.storage_hint is now referring to the same internal data structure as beresp.storage.
In particular, it was previously possible to set beresp.storage_hint to an invalid storage name and later retrieve it back. Doing so will now yield the last successfully set stevedore or the undefined (NULL) string.
IP-valued elements of VCL are equivalent to 0.0.0.0:0 when the connection in question was addressed as a UDS. This is implemented with the bogo_ip in vsa.c.
beresp.backend.ip is retired as of VCL 4.1.
workspace overflows in std.log() now trigger a VCL failure.
workspace overflows in std.syslog() are ignored.
added return(restart) from vcl_recv{}.
The alg argument of the shard director .reconfigure() method has been removed - the consistent hashing ring is now always generated using the last 32 bits of a SHA256 hash of "ident%d" as with alg=SHA256 or the default.
We believe that the other algorithms did not yield sufficiently dispersed placement of backends on the consistent hashing ring and thus retire this option without replacement.
Users of .reconfigure(alg=CRC32) or .reconfigure(alg=RS) be advised that when upgrading and removing the alg argument, consistent hashing values for all backends will change once and only once.
The alg argument of the shard director .key() method has been removed - it now always hashes its arguments using SHA256 and returns the last 32 bits for use as a shard key.
Backwards compatibility is provided through vmod blobdigest with the key_blob argument of the shard director .backend() method:
for alg=CRC32, replace:
<dir>.backend(by=KEY, key=<dir>.key(<string>, CRC32))
with:
<dir>.backend(by=BLOB, key_blob=blobdigest.hash(ICRC32, blob.decode(encoded=<string>)))
Note: The vmod blobdigest hash method corresponding to the shard director CRC32 method is called ICRC32
for alg=RS, replace:
<dir>.backend(by=KEY, key=<dir>.key(<string>, RS))
with:
<dir>.backend(by=BLOB, key_blob=blobdigest.hash(RS, blob.decode(encoded=<string>)))
The shard director now offers resolution at the time the actual backend connection is made, which is how all other bundled directors work as well: With the resolve=LAZY argument, other shard parameters are saved for later reference and a director object is returned.
This enables layering the shard director below other directors.
The shard director now also supports getting other parameters from a parameter set object: Rather than passing the required parameters with each .backend() call, an object can be associated with a shard director defining the parameters. The association can be changed in vcl_backend_fetch() and individual parameters can be overridden in each .backend() call.
The main use case is to segregate shard parameters from director selection: By associating a parameter object with many directors, the same load balancing decision can easily be applied independent of which set of backends is to be used.
To support parameter overriding, support for positional arguments of the shard director .backend() method had to be removed. In other words, all parameters to the shard director .backend() method now need to be named.
Integers in VCL are now 64 bits wide across all platforms (implemented as int64_t C type), but due to implementation specifics of the VCL compiler (VCC), integer literals' precision is limited to that of a VCL real (double C type, roughly 53 bits).
In effect, larger integers are not represented accurately (they get rounded) and may even have their sign changed or trigger a C compiler warning / error.
Add VMOD unix.
Add VMOD proxy.
This is the first beta release of the upcoming 5.0 release.
The changes are numerous and will not be expanded on in detail.
The release notes contain more background information and are highly recommended reading before using any of the new features.
Major items:
Changes since 4.1.9:
Changes since 4.1.8:
Changes since 4.1.7:
Changes since 4.1.7-beta1:
Changes since 4.1.6:
Changes between 4.0 and 4.1 are numerous. Please read the upgrade section in the documentation for a general overview.
New since 4.0.2-rc1:
New since 4.0.1:
New since 4.0.0:
New since 4.0.0-beta1:
New since TP2:
varnishsizes:
Persistent storage is now experimentally supported using the persistent stevedore. It has the same command line arguments as the file stevedore.
obj.* is now called beresp.* in vcl_fetch, and obj.* is now read-only.
The regular expression engine is now PCRE instead of POSIX regular expressions.
req.* is now available in vcl_deliver.
Add saint mode where we can attempt to grace an object if we don't like the backend response for some reason.
Related, add saintmode_threshold which is the threshold for the number of objects to be added to the trouble list before the backend is considered sick.
Add a new hashing method called critbit. This autoscales and should work better on large object workloads than the classic hash. Critbit has been made the default hash algorithm.
When closing connections, we experimented with sending RST to free up load balancers and free up threads more quickly. This caused some problems with NAT routers and so has been reverted for now.
Add thread that checks objects against ban list in order to prevent ban list from growing forever. Note that this needs purges to be written so they don't depend on req.*. Enabled by setting ban_lurker_sleep to a nonzero value.
The shared memory log file format was limited to maximum 64k simultaneous connections. This is now a 32 bit field which removes this limitation.
Remove obj_workspace, this is now sized automatically.
Rename acceptors to waiters
vcl_prefetch has been removed. It was never fully implemented.
Add support for authenticating CLI connections.
Add hash director that chooses which backend to use depending on req.hash.
Add client director that chooses which backend to use depending on the client's IP address. Note that this ignores the X-Forwarded-For header.
varnishd now displays a banner by default when you connect to the CLI.
Increase performance somewhat by moving statistics gathering into a per-worker structure that is regularly flushed to the global stats.
Make sure we store the header and body of object together. This may in some cases improve performance and is needed for persistence.
Remove client-side address accounting. It was never used for anything and presented a performance problem.
Add a timestamp to bans, so you can know how old they are.
Quite a few people got confused over the warning about not being able to lock the shared memory log into RAM, so stop warning about that.
Change the default CLI timeout to 10 seconds.
We previously forced all inserts into the cache to be GET requests. This has been changed to allow POST as well in order to be able to implement purge-on-POST semantics.
The CLI command stats now only lists non-zero values.
Use daemon(3) from libcompat on Darwin.
Remove vcl_discard as it causes too much complexity and never actually worked particularly well.
Remove vcl_timeout as it causes too much complexity and never actually worked particularly well.
Update the documentation so it refers to sess_workspace, not http_workspace.
Document the -i switch to varnishd as well as the server.identity and server.hostname VCL variables.
purge.hash is now deprecated and no longer shown in help listings.
When processing ESI, replace the five mandatory XML entities when we encounter them.
Add string representations of time and relative time.
Add locking for n_vbe_conn to make it stop underflowing.
When ESI-processing content, check for illegal XML character entities.
Varnish can now connect its CLI to a remote instance when starting up, rather than just being connected to.
It is no longer needed to specify the maximum number of HTTP headers to allow from backends. This is now a run-time parameter.
The X-Forwarded-For header is now generated by vcl_recv rather than the C code.
It is now possible to not send all CLI traffic to syslog.
In the case of varnish crashing, it now outputs an identifying string with the OS, OS revision, architecture and storage parameters together with the backtrace.
Use exponential backoff when we run out of file descriptors or sessions.
Allow setting backend timeouts to zero.
Count uptime in the shared memory log.
Try to detect the case of two running varnishes with the same shmlog and storage by writing the master and child process ids to the shmlog and refusing to start if they are still running.
Make sure to use EOF mode when serving ESI content to HTTP/1.0 clients.
Make sure we close the connection if it either sends Connection: close or it is an HTTP/1.0 backend that does not send Connection: keep-alive.
Increase the default session workspace to 64k on 64-bit systems.
Make the epoll waiter use level triggering, not edge triggering as edge triggering caused problems on very busy servers.
Handle unforeseen client disconnections better on Solaris.
Make session lingering apply to new sessions, not just reused sessions.
VCL Manual page:
Red Hat spec file:
Build system:
Build system:
Build system:
The request workflow has been redesigned to simplify request processing and eliminate code duplication. All codepaths which need to speak HTTP now share a single implementation of the protocol. Some new VCL hooks have been added, though they aren't much use yet. The only real user-visible change should be that Varnish now handles persistent backend connections correctly (see ticket #56).
Support for multiple listen addresses has been added.
An "include" facility has been added to VCL, allowing VCL code to pull in code fragments from multiple files.
Multiple definitions of the same VCL function are now concatenated into one in the order in which they appear in the source. This simplifies the mechanism for falling back to the built-in default for cases which aren't handled in custom code, and facilitates modularization.
The code used to format management command arguments before passing them on to the child process would underestimate the amount of space needed to hold each argument once quotes and special characters were properly escaped, resulting in a buffer overflow. This has been corrected.
The VCL compiler has been overhauled. Several memory leaks have been plugged, and error detection and reporting has been improved throughout. Parts of the compiler have been refactored to simplify future extension of the language.
A bug in the VCL compiler which resulted in incorrect parsing of the decrement (-=) operator has been fixed.
A new -C command-line option has been added which causes varnishd to compile the VCL code (either from a file specified with -f or the built-in default), print the resulting C code and exit.
When processing a backend response using chunked encoding, if a chunk header crosses a read buffer boundary, read additional bytes from the backend connection until the chunk header is complete.
A new ping_interval run-time parameter controls how often the management process checks that the worker process is alive.
A bug which would cause the worker process to dereference a NULL pointer and crash if the backend did not respond has been fixed.
In some cases, such as when they are used by AJAX applications to circumvent Internet Explorer's over-eager disk cache, it may be desirable to cache POST requests. However, the code path responsible for delivering objects from cache would only transmit the response body when replying to a GET request. This has been extended to also apply to POST.
This should be revisited at a later date to allow VCL code to control whether the body is delivered.
Varnish now respects Cache-control: s-maxage, and prefers it to Cache-control: max-age if both are present.
This should be revisited at a later date to allow VCL code to control which headers are used and how they are interpreted.
When loading a new VCL script, the management process will now load the compiled object to verify that it links correctly before instructing the worker process to load it.
A new -P command-line option has been added which causes varnishd to create a PID file.
The sendfile_threshold run-time parameter's default value has been set to infinity after a variety of sendfile()-related bugs were discovered on several platforms.
The formatting callback has been largely rewritten for clarity, robustness and efficiency.
If a request included a Host: header, construct and output an absolute URL. This makes varnishncsa output from servers which handle multiple virtual hosts far more useful.
The flag that is raised upon reception of a SIGHUP has been marked volatile so it will not be optimized away by the compiler.