I'd love to see this added as an HTTP Content-Encoding, like brotli recently was (and gzip has been for a long time). You could get a bit more compression at gzip-like speeds, or very cheap compression using its fastest modes (hundreds of MB/s/core; see the --fast modes at https://github.com/facebook/zstd/releases/tag/v1.3.4). brotli especially helped at the high end (static content you'll serve many times and can afford to spend a long time compressing); zstd could be especially helpful nearer the low end (dynamic content you'll serve only once).
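For a sense of what those fast modes look like in code, here's a minimal sketch against zstd's one-shot C API, assuming v1.3.4+ (negative levels were introduced in that release); the function name and the choice of level -5 are mine:

    /* Sketch: one-shot compression at a fast (negative) level.
       Negative levels trade ratio for speed; -1 is the mildest. */
    #include <stdlib.h>
    #include <zstd.h>

    size_t compress_fast(const void *src, size_t srcSize, void **dst)
    {
        size_t const cap = ZSTD_compressBound(srcSize);
        *dst = malloc(cap);                      /* caller frees */
        if (*dst == NULL) return 0;
        size_t const n = ZSTD_compress(*dst, cap, src, srcSize, -5);
        return ZSTD_isError(n) ? 0 : n;          /* 0 signals failure */
    }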
On Twitter, one of Brotli's designers raised the question of whether there should be a standardized limit on the RAM required to decode in the HTTP Content-Encoding context, since zstd supports huge window sizes (GBs). I don't feel strongly about it--clients can refuse to handle responses that take too much RAM to decode, servers have an incentive to serve things their clients can handle, and there tend to be sharply diminishing returns to increasing window size--but I can imagine a limit in the MBs being a reasonable compromise between allowing high compression and not breaking phones with a couple hundred MB of RAM.
> In order to protect the decoder from unreasonable memory requirements, a decoder is allowed to reject a compressed frame that requests a memory size beyond the decoder's authorized range.
> For broader compatibility, decoders are recommended to support memory sizes of at least 8 MB. This is only a recommendation; each decoder is free to support higher or lower limits, depending on local limitations.
As a corollary, and in the absence of an explicit negotiation process between client and server, servers should probably avoid using a window larger than 8 MB.
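Concretely, here's a sketch of what honoring that looks like on both sides, assuming zstd's advanced parameter API (ZSTD_CCtx_setParameter / ZSTD_DCtx_setParameter, stable in newer releases; older versions expose similar knobs under experimental names):

    #include <zstd.h>

    int main(void)
    {
        /* Server: cap the window at 2^23 = 8 MB, whatever the
           compression level would otherwise choose. */
        ZSTD_CCtx *cctx = ZSTD_createCCtx();
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 23);

        /* Client: refuse frames demanding a window over 8 MB;
           decoding such a frame fails with a window-too-large
           error instead of allocating GBs. */
        ZSTD_DCtx *dctx = ZSTD_createDCtx();
        ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 23);

        ZSTD_freeCCtx(cctx);
        ZSTD_freeDCtx(dctx);
        return 0;
    }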
Yeah, we're just getting the ball rolling. I recently added support to HHVM[1]. Pushing for broader community adoption is definitely on our agenda, but it will take some time.
As best I can see, that assigns a Content-Type ("MIME type"), which is distinct from a Content-Encoding supported by browsers.
So it specifies what Content-Type header to use for a .zst file, but it doesn't represent a plan to let your HTML, JS, CSS, etc. be delivered compressed with zstd and transparently decompressed by the browser, as is possible with the gzip and br Content-Encodings.
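Roughly, the difference looks like this (the "zstd" token on the second response is the hypothetical part--no browser recognizes it today):

    # What the registration covers: a .zst file served as an opaque download
    Content-Type: application/zstd

    # What a zstd Content-Encoding would enable: ordinary content,
    # transparently decompressed by the browser
    Content-Type: text/html
    Content-Encoding: zstd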
That's different from what I'm trying to talk about, which is how zstd might be useful as an HTTP Content-Encoding, and how you might (or might not) limit the window sizes allowed in that context to ensure usability on low-memory clients.
SLZ is cool, but the specific question jyzg raised, and that my comment is about, is whether any standard should spell out something like "don't use enormous window sizes with the [still hypothetical] zstd Content-Encoding, to avoid causing problems for small-memory clients decompressing your responses." This comes up because zstd (unlike gzip) lets you set the window to GBs if you want (search for "long-range match finder" in https://github.com/facebook/zstd/releases for more context); window sizes like that would, of course, not be ideal on a low-end smartphone.
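For concreteness, a sketch of turning on that long-range mode with a 1 GB window, again assuming the advanced parameter API from newer zstd releases; the point is only how easily the window can get this big:

    #include <zstd.h>

    int main(void)
    {
        ZSTD_CCtx *cctx = ZSTD_createCCtx();
        /* Find matches up to 2^30 = 1 GB back; a decoder then needs
           a comparable amount of memory to reconstruct the stream. */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 30);
        ZSTD_freeCCtx(cctx);
        return 0;
    }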
I bet some decompressors allocate physical memory for their sliding window up front. But even if the space used were also bounded by the download's or stream's size, the tradeoff between per-connection memory and compression ratio, and the question of whether any standard should specify a number or just let servers and clients work it out, would still remain.
So can someone help me untangle how this relates to possible Zstandard patents?
A while back, when all of Facebook's public git repositories carried a patent "grant" document, there was some angst about what exactly it meant: https://github.com/facebook/zstd/issues/335 -- that was considered "resolved" when Facebook decided to distribute the software under GPLv2, though after reading the relevant sections of GPLv2 I can't say I follow why this was supposed to resolve any uncertainty around potential FB patents on zstd.
BSD doesn't include a patent grant, so doesn't that mean you're only legally in the clear on their patents if you use the GPL code? And wouldn't any independent implementation of the algorithm be subject to patent fees?