As mentioned earlier, there are two versions of Varnish: an open source version and an enterprise version.
Originally, Varnish started out as tailor-made software for Verdens Gang, a Norwegian newspaper. The development of Varnish was spearheaded by long-time FreeBSD core contributor Poul-Henning Kamp in collaboration with Nordic open source service provider Redpill Linpro.
Eventually it was decided that the source code would be open sourced. Poul-Henning Kamp has remained the project lead, and continues to maintain Varnish Cache with the help of various people in the open source community and Varnish Software.
The success and enormous potential of the project led to Varnish Software being founded in 2010 as a spinoff of Redpill Linpro. Initially Varnish Software focused on support and training, which funded further development of the open source project.
In 2014, Varnish Software started developing specific features on top of Varnish Cache in a commercial version of the software, and named it Varnish Plus. It is now known as Varnish Enterprise.
The feature additions that Varnish Enterprise initially offered primarily consisted of extra VMODs, but as time went by, some substantial features were developed by Varnish Software that went beyond modules.
The most significant of these features is MSE, which is short for Massive Storage Engine. MSE is a so-called stevedore that Varnish uses to store its cached objects. Unlike malloc (the memory storage stevedore) and file (the non-persistent disk storage stevedore), MSE offers a dual-layer storage solution that leverages the speed of memory and the resilience of disk, without the typical slowdown of traditional disk-based storage systems.
In addition, VHA, which stands for Varnish High Availability, was introduced. This solution replicates stored cache objects across multiple Varnish servers.
Add built-in client and backend TLS/SSL termination, and a browser-based administration interface to that, and you have a pretty solid feature set.
The combination of these features, and the fact that they were shipped by default, supported and covered by a Service Level Agreement (SLA), made Varnish Enterprise look pretty interesting to enterprise companies.
There is a correlation between the Varnish Cache and Varnish Enterprise version numbers.
Varnish Cache is on a six-month release schedule: every year there is a release in March and one in September.
Varnish Enterprise 6 is based on Varnish Cache 6.0 and doesn't follow the minor version upgrades. Instead, the Varnish Software team backports fixes and some of the features.
Varnish Enterprise 6 does, however, follow the patch releases of Varnish Cache 6.0, adding its own release number on top.
So when Varnish Cache 6.0.1 was released on August 29th, 2018, the corresponding Varnish Enterprise release was version 6.0.1r1. When a new Varnish Enterprise release takes place and there is no new Varnish Cache 6.0 patch release, only the release number increases.
This happened, for example, on October 1st, when Varnish Enterprise 6.0.1r2 was released.
At the time of writing this book, the latest Varnish Enterprise version is 6.0.8r1.
It would be too simplistic to conclude that Varnish Enterprise is just Varnish Cache with some extra features and an SLA. The differences are much more fundamental.
Varnish Cache is a project. Varnish Enterprise is a product.
This quote sums up the difference best: the two have very different goals.
Varnish Enterprise is a product that you install once, and then let run for several years, without having to put in a significant amount of effort every time a new minor version is released.
If, for example, you installed Varnish Enterprise 6.0.1r1 when it came out back in 2018 and you now upgrade to 6.0.8r1, everything will just work, without any risk of incompatibility.
For Varnish Cache, the goal is to continuously improve the code and the architecture, and to look toward the future. Compatibility breaks are discouraged, but every six months a release is cut, which might break users’ setups. The Varnish Cache community tries to document every change that is significant enough to affect users, but it’s still up to users to check whether these changes are compatible with their setup.
Varnish Cache is extremely fast and stable, and it comes with a rich feature set. Here’s an overview of those features; a short VCL sketch after the table illustrates a few of them:
| Feature | Description | 
|---|---|
| Request coalescing | Protects origin servers against cache stampedes by collapsing similar requests | 
| Cache-Control support | Varnish respects Cache-Control headers, and uses max-age and s-maxage values to define the object TTL. Directives like public, private, no-cache, no-store, and stale-while-revalidate are also interpreted. | 
| Expires support | Varnish can interpret the Expires header and set the object TTL accordingly. | 
| Conditional requests | Varnish supports 304 Not Modified behavior by interpreting ETag and Last-Modified headers and issuing If-None-Match and If-Modified-Since headers. This is supported both on the client side and on the backend side. | 
| Grace mode | Varnish’s implementation of Stale While Revalidate: Varnish serves stale objects while the latest version of the object is fetched in the background. The duration is configurable in VCL or via the stale-while-revalidate directive in the Cache-Control header. | 
| Content streaming | Varnish will start streaming content to the client as soon as it has received the response headers from the backend. | 
| Cache invalidation | Varnish has purging, banning, and content refresh capabilities to remove objects from cache. | 
| LRU cache evictions | When the cache is full, Varnish will use a Least Recently Used (LRU) algorithm to remove the least recently used objects in an attempt to free up space. | 
| HTTP/2 support | Varnish supports the HTTP/2 protocol. | 
| Backend health checking | The health of a backend can be checked using configurable health probes. These checks can lead to backends not being selected for backend fetches. | 
| Backend connection limiting | Limit the maximum number of open connections to a single backend. | 
| Backend timeout control | Limit the amount of time Varnish waits for a valid backend response, configurable through various timeout settings. | 
| Advanced backend selection | Programmatically select the backend, based on a custom set of conditions written in VCL. | 
| Advanced request saving | Varnish can transparently save requests by retrying other backends when the initial backend request fails, or by serving stale content when the backend is unavailable. | 
| Configurable listening addresses | Varnish can accept incoming HTTP requests on multiple listening addresses. Hostname/IP and port number are configurable. | 
| PROXY protocol support | Listening addresses can be configured to accept PROXY protocol requests rather than standard HTTP requests. The PROXY protocol keeps track of the IP address of the original client, regardless of the number of proxies in front of Varnish. | 
| TLS termination | Although Varnish Cache doesn’t support TLS natively, TLS termination can be facilitated through the PROXY protocol. | 
| Unix domain socket (UDS) support | Both incoming connections and connections to backends can be made over Unix domain sockets (UDS) instead of TCP/IP. | 
| Stevedores | A stevedore is the storage mechanism that Varnish uses to store cached objects. malloc (memory storage) is the default; file (non-persistent disk storage) is also common. | 
| Command line interface (CLI) | Varnish has a command line interface (CLI) that can be used to tune parameters, ban objects from the cache, and load a new VCL configuration. | 
| Edge Side Includes (ESI) | XML-based placeholder tags whose src attributes are processed by Varnish (on the edge); the resulting HTTP responses replace the placeholders. | 
| Zero-impact config reload | Load a new VCL file without having to restart the Varnish process. | 
| Label-based multi-VCL configurations | Load multiple VCL files and conditionally execute them using labels in your main VCL file. | 
| Access control lists (ACLs) | Allow or restrict access to parts of your content using access control lists (ACLs) containing IP addresses, hostnames, or subnets that can be matched in VCL. | 
| URL transformation | Transform any URL in VCL. | 
| Header transformation | Transform any request or response header in VCL. | 
| Synthetic HTTP responses | Return custom HTTP responses that did not originate from your origin. | 
| VCL unit testing framework | The varnishtest tool performs Varnish unit tests based on VTC files containing unit testing scenarios. | 
| Advanced logging | The varnishlog, varnishtop, and varnishncsa tools allow deep introspection into the Varnish flow, its input, and its output. | 
| Advanced statistics | The varnishstat tool displays numerous counters that give you global insight into the state of your Varnish server. | 
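To make a few of these features more concrete, here is a minimal VCL sketch that combines a health-probed backend, programmatic backend selection, an ACL-guarded purge, a synthetic response, a simple URL transformation, and grace mode. The hostname, network, probe endpoint, and URL pattern are placeholders chosen for illustration, not values prescribed by Varnish.

```
vcl 4.0;

# Hypothetical origin backend with a health probe; unhealthy backends
# are not selected for backend fetches.
backend origin {
    .host = "origin.example.com";    # placeholder hostname
    .port = "80";
    .probe = {
        .url = "/health";            # placeholder health check endpoint
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}

# ACL listing the clients that are allowed to purge content.
acl purgers {
    "127.0.0.1";
    "192.168.0.0"/24;                # placeholder internal network
}

sub vcl_recv {
    # Advanced backend selection: pick the backend programmatically.
    set req.backend_hint = origin;

    # Cache invalidation, guarded by the ACL above.
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            # Synthetic HTTP response that never touches the origin.
            return (synth(405, "Purging not allowed"));
        }
        return (purge);
    }

    # URL transformation: crude example that strips a query string
    # consisting solely of utm_ tracking parameters.
    set req.url = regsub(req.url, "\?utm_[^?]*$", "");
}

sub vcl_backend_response {
    # Grace mode: keep serving the stale object for up to an hour while
    # a fresh copy is fetched in the background.
    set beresp.grace = 1h;
}
```

A file like this can be loaded at runtime through the CLI (vcl.load and vcl.use), which is what the zero-impact config reload row refers to.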
Additionally, Varnish Cache comes with a set of VMODs that plug into Varnish; these are discussed in chapter 5.
Here’s a list of some of the core features of Varnish Enterprise:
| Feature | Description | 
|---|---|
| Massive Storage Engine (MSE) | An optimized dual-layer storage solution that offers persistence | 
| Varnish High Availability (VHA) | A multi-master object replication suite that keeps the contents of multiple Varnish servers in sync, and as a consequence reduces the number of backend revalidation requests | 
| Varnish Controller | A GUI and API to administer all the Varnish servers in your setup | 
| Varnish Custom Statistics (VCS) | A statistics engine allowing you to aggregate, display and analyze user web traffic and cache performance in real time | 
| Varnish Broadcaster | Broadcasts client requests to multiple Varnish nodes from a single entry point | 
| Varnish Live | A mobile app that shows the performance of Varnish instances | 
| Varnish Web Application Firewall (WAF) | Web Application Firewall capabilities, based on the ModSecurity library | 
| Client TLS/SSL | Termination of client TLS/SSL connections on the edge | 
| Backend TLS/SSL | Connect to backend servers over TLS/SSL, ensuring end-to-end encryption | 
| Parallel ESI | Process Edge Side Includes (ESI) in parallel, whereas Varnish Cache only processes ESI tags sequentially (see the VCL note after this table) | 
| JSON logging | Output from the varnishlog and varnishncsa logging tools can be sent in JSON format | 
| TCP-only probes | Allow probes to perform health checks on backends by checking for an available TCP connection, without actually sending an HTTP request | 
| last_byte_timeout | A backend configuration parameter that defines how long Varnish waits for the full backend response to be completed | 
| Total Encryption | Encryption of cached objects, both in memory and on disk | 
| Veribot | Identify and verify traffic that comes from online bots | 
| Brotli compression | Compress HTTP responses with Brotli compression, which offers a higher compression rate than GZIP | 
| Dynamic backends | Define backends on-the-fly, instead of relying on hardcoded backend definitions in the VCL file | 
| Body access & modification | Via the xbody module, request and response bodies can be inspected and modified | 
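As a small illustration of the Parallel ESI row: ESI processing is enabled with the same VCL statement in both editions; the difference lies in how the includes are fetched. Restricting ESI to HTML responses is a common convention assumed here, not a requirement.

```
vcl 4.0;

sub vcl_backend_response {
    # Enable ESI processing for HTML responses (assumed convention).
    # Varnish Cache resolves the resulting <esi:include> tags one by one;
    # Varnish Enterprise fetches them in parallel.
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.do_esi = true;
    }
}
```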
Besides feature additions in the Varnish core, Varnish Enterprise offers many features as VMODs that plug into Varnish Enterprise; these are also discussed in chapter 5.