
Lepton image compression: saving 22% losslessly from images at 15MB/s - samber
https://blogs.dropbox.com/tech/2016/07/lepton-image-compression-saving-22-losslessly-from-images-at-15mbs/
======
ausjke
This is very impressive for archiving images.

For a quick test, I ran it over ~1.3GB of JPEG pictures I had locally; the
final result was 810MB, about 62% of the original size. Very impressive
considering it's lossless. It only handles JPEG files though: no PNG, no ISO,
no ZIP, no formats other than JPEG.

If someone can do this for video files, that will be Pied Piper coming into
real life.

~~~
colechristensen
The reason this can be done with JPEGs is that the compression hasn't been
updated. There are folks who have used H.264 for compressing still images, and
WebP uses VP8 compression, both with much better results than JPEG.

Lepton is cool because it helps make existing technology a whole lot better,
but what we actually need is a better image format.

You wouldn't see the same leap for videos, because people have been working
hard for a while to make those great, while JPEG has been left to rot.

~~~
neuronexmachina
I'd love to have something like this for archiving DVD ISOs though, where the
VOBs are compressed with old-school MPEG.

~~~
cbhl
Shouldn't you just use Handbrake and H.264/AAC? (Assuming your computer is
fast/new enough to play it back.)

~~~
nacs
The article (and the person you're replying to) is referring to _lossless_
compression. Converting to H.264 and AAC may maintain high quality at a lower
bitrate, but they're definitely not lossless.

------
the_duke
I really admire Dropbox for open sourcing this; it shows their commitment.

Saving almost a quarter of space for most images stored is something that
truly gives a competitive edge. (I say most because people probably primarily
have JPEG images).

Especially considering how many images are probably stored on services like
Dropbox.

And they just gave it away to their competitors.

~~~
jacobsladder
Then why do you admire them? Would you also admire them if you were their
investor? Dropbox management is obligated by law to act in the best interests
of their shareholders, i.e. to make them as much profit as possible.

It's more likely that they have released it because of some profit-seeking
interest. They are not charity.

~~~
the_duke
I admire them because they certainly considered the pros and cons of doing so,
and decided that they feel secure enough in their market position and prefer
to give this back to the open source community.

Pretty much every modern company saves tremendous amounts of money through
open source (Linux and upwards in the stack), and so the OSS community should
rightly be considered a STAKEholder.

The share- vs. stakeholder obsession among large companies and corporations
represents a lot of what's wrong with our current markets.

\--

Also, the only upside to open sourcing this is getting others involved in
development. I just tested it on 10k images, and the promise holds true, both
on compression rate and on bit parity after decompression.

Seems to be a pretty stable product, so that motivation is probably
minuscule.

~~~
eridius
There's also the upside of attracting other talented developers to come work
at Dropbox if Lepton is representative of the kind of project they might be
working on.

------
Klasiaster
A similar approach has also been developed for use with zpaq, a compression
format which stores the decompression algorithm as bytecode in the archive:

[The configuration] "jpg_test2" by Jan Ondrus compresses JPEG images (which
are already compressed) by an additional 15%. It uses a preprocessor that
expands Huffman codes to whole bytes, followed by context modeling.
[http://mattmahoney.net/dc/zpaqutil.html](http://mattmahoney.net/dc/zpaqutil.html)

------
Jabbles
It's interesting to do a cost analysis here:

It's saved "multiple petabytes" of space.

Backblaze storage is $0.005/GB/Month = $5k/PB/Month.

The GitHub repo has 7 authors, perhaps costing Dropbox $200k/year each and
taking most of a year ~ $1M to develop this system.

So this might pay for itself after 200 PB-months, assuming Dropbox's storage
costs are the same as Backblaze's prices, and assuming CPU time is free.
(TODO: estimate CPU costs...)

Of course, advancing the state of the art has intrinsic advantages, but again,
it's interesting to look at the purely financial point.

[https://www.backblaze.com/b2/cloud-storage-pricing.html](https://www.backblaze.com/b2/cloud-storage-pricing.html)
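
As a back-of-the-envelope sketch of that break-even point (every constant
below is one of the rough assumptions above, not Dropbox's actual numbers):

```python
# Break-even estimate: development cost vs. monthly storage savings.
dev_cost = 1_000_000        # ~7 authors for most of a year => ~$1M
price_per_pb_month = 5_000  # Backblaze B2 list price: $0.005/GB/month
petabytes_saved = 4         # "multiple petabytes"; pick a number

monthly_savings = petabytes_saved * price_per_pb_month
print(dev_cost / monthly_savings, "months to break even")
# => 50.0 months ($1M / 200 PB-months at 4 PB saved; CPU cost still ignored)
```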

~~~
BrunoJo
Space is cheap but the bandwidth isn't. At Backblaze the download traffic
costs $0.05/GB = $50k/PB.

~~~
Jabbles
I don't think they're compressing this and decompressing it client side. At
least I didn't get that impression from the article.

You're correct ofc, download costs = 10 months of storage.

~~~
aab0
> I don't think they're compressing this and decompressing it client side.

The speed quotes made it sound like client-side was a concern. Why would you
go to all the effort of devising a new image compression format saving 20%+
storage and on the wire, and _not_ have it decompressed client-side,
especially when you control the client?

~~~
dgoldstein0
My rough understanding, as someone who works at Dropbox and knows some of the
people who worked on this (but isn't directly involved), is that this
currently only runs on our servers. The perf requirements are primarily that
we don't want to slow down syncing / downloads significantly - and also want
to keep the CPU cost under control. As is, the savings in storage space should
easily pay for the extra compute power required.

------
mmastrac
Great work. Ensuring bit-by-bit identical output _and_ compressing an extra
22% is amazing.

> For those familiar with Season 1 of Silicon Valley, this is essentially a
> “middle-out” algorithm.

I _really_ wish that every single compression-related blog post would stop
referencing Silicon Valley.

~~~
XorNot
They will once Silicon Valley takes them to task for it. Just like season 1
pretty much wiped "making the world a better place" from the lingo of SV
companies.

~~~
rhaps0dy
> season 1 pretty much wiped "making the world a better place" from the lingo
> of SV companies

Did it really?

~~~
matznerd
Yes, the show is making the world a better place.

~~~
golergka
Better than we do.

------
ot
> To encode an AC coefficient, first Lepton writes how long that coefficient
> is in binary representation, by using unary. [...] Next Lepton writes a 1 if
> the coefficient is positive or 0 if it is negative. Finally, Lepton writes
> the absolute value of the coefficient in standard binary. Lepton saves a
> bit of space by omitting the leading 1, since any number greater than zero
> doesn’t start with zero.

The wording almost implies that this is novel, but it is actually Gamma coding
[1], which in the signal compression community is often called Exp-Golomb
coding [2]. I wonder why this is not acknowledged, considering that they
mention the VP8 arithcoder instead.

[1]
[https://en.wikipedia.org/wiki/Elias_gamma_coding](https://en.wikipedia.org/wiki/Elias_gamma_coding)

[2] [https://en.wikipedia.org/wiki/Exponential-Golomb_coding](https://en.wikipedia.org/wiki/Exponential-Golomb_coding)
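
A toy version of the described code (a minimal sketch; the exact terminator
and bit-order conventions here are my assumptions, not Lepton's actual
bitstream):

```python
def gamma_with_sign(coef: int) -> str:
    """Encode a nonzero coefficient as described above: the bit length in
    unary, a sign bit, then the magnitude with its leading 1 omitted."""
    assert coef != 0
    bits = bin(abs(coef))[2:]        # e.g. 5 -> '101'
    unary = '1' * len(bits) + '0'    # assumed unary convention/terminator
    sign = '1' if coef > 0 else '0'
    return unary + sign + bits[1:]   # the leading 1 is implicit

print(gamma_with_sign(5))   # '1110101': length 3, positive, remaining '01'
print(gamma_with_sign(-1))  # '100': length 1, negative, nothing left to write
```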

------
mbreese
So, am I to read this to mean that when I send Dropbox a JPEG, they are
further compressing it behind the scenes using Lepton? Then when I request it
back, they are converting it back to JPEG?

~~~
daniel_rh
That's the idea behind the algorithm, yes. And since it's lossless, every
original bit is preserved. The same idea could be applied on the Desktop
client instead of on the server, which would save 22% of the bandwidth as well
and make syncing faster.

~~~
noahkim11
Any word on when that would be implemented?

~~~
hmage
It already is.

------
SapphireSun
This is amazing; we've been struggling with JPEG storage and fast delivery at
my lab (terabytes and petabytes of microscopy images). We'll be running tests
and giving this a shot!

~~~
jszymborski
JPEG is a weird format to be storing microscopy images in, no? They usually
end up in some sort of bitmap TIFF (or a Zeiss/etc. proprietary format) from
what I've seen.

~~~
SapphireSun
So the weird thing is that we're not only a lab, but also a web tool. We have
the files backed up in a standard format in one place, but delivering
1024x1024x128 cubes of images over the internet has been tricky. We don't need
people to always view them at full fidelity, just good enough.

We tried JPEG2000, which had better quality per file size, but the web worker
decoder was slower than the JPEG one, adding seconds to the total
download/decode time.

EDIT: We're currently doing 256x256x256 (equivalent to a 4k image) on
eyewire.org. We're speeding things up to handle bigger 3D images.

EDIT2: If you check out Eyewire right now, you might notice some slowdown when
you load cubes; that's because we're decoding on the main thread. We'll be
changing that up next week.

~~~
bnolsen
Yeah, jpeg2k sucks. It doesn't seem to do anything very well. Design by
committee ruined it by making it way too complex.

------
jsingleton
This looks very useful for archiving, but as others have pointed out, less
useful for web development. I did lots of research on image compression for a
book recently and found quite a few helpful tools.

jpeg-archive [^1] is designed for long term storage and you can still serve
the images over the web. imageflow [^2] has just been kickstarted and looks
really promising for use with ASP.NET Core.

mozjpeg is also showing progress and if FLIF takes off then that will be
great. Scalable images would be fantastic. No more resizing and all the
security issues that brings [^3].

[^1]: [https://github.com/danielgtaylor/jpeg-archive](https://github.com/danielgtaylor/jpeg-archive)

[^2]: [https://www.imageflow.io](https://www.imageflow.io)

[^3]: [https://imagetragick.com](https://imagetragick.com)

------
diamindo
The technical rigor in these recent Dropbox blog posts is admirable. Seems
like an impressively talented engineering team (or maybe just good at
marketing :)

------
ptspts
Portability notes for the Lepton implementation
([https://github.com/dropbox/lepton](https://github.com/dropbox/lepton)):

* Implemented in C++ (-std=c++0x and -std=c++11 work, -std=c++98 doesn't work).

* Needs a recent g++ to compile (g++-4.8 works, g++-4.4 doesn't work).

* Runs on Linux and Windows.

* Runs on i386 (-m32) and amd64 (-m64) architectures. Doesn't work on other architectures, because it uses SSE4.1 instructions.

* Can be compiled without autotools ([http://ptspts.blogspot.ch/2016/07/how-to-compile-lepton-jpeg...](http://ptspts.blogspot.ch/2016/07/how-to-compile-lepton-jpeg-lossless.html)).

------
sevenless
I'm interested in 'super lossy' (deep learning based?) compression. You should
be able to compress movies down to a screenplay and a few stage directions.

A picture should of course be worth 1000 words.

~~~
mcbits
I started something like this long ago but didn't get very far with it. This
was before "deep learning" and I was groping in the dark, but I think the
concept is sound up to a point.

The idea was to train a neural net and build up a database of features (maybe
on the order of 1-10 GB, or whatever is just small enough to ship) to estimate
the missing details from downscaled and extremely over-compressed JPEGs. If it
worked, I think it would also improve the quality of all the 10-20 year old
images out there where the uncompressed source is long gone. Sort of a Blade
Runner-style "enhance" tool, but of course it would only be filling in
aesthetically plausible details.

~~~
kardos
Sounds like the "content aware fill" algorithm [1], only for small scale
features that are somehow stitched onto the low-resolution image

[1]
[http://www.logarithmic.net/pfh/resynthesizer](http://www.logarithmic.net/pfh/resynthesizer)

------
nachtigall
I would be interested to know why C++ was chosen for
[https://github.com/dropbox/lepton](https://github.com/dropbox/lepton) instead
of Rust.

Given the recent usage of Rust for the implementation of Brotli compression
([https://blogs.dropbox.com/tech/2016/06/lossless-compression-with-brotli/](https://blogs.dropbox.com/tech/2016/06/lossless-compression-with-brotli/))
and that it's used for data storage
([http://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/](http://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire/)),
this somewhat surprises me.

The reasons given for Rust as stated in
[https://blogs.dropbox.com/tech/2016/06/lossless-compression-with-brotli/](https://blogs.dropbox.com/tech/2016/06/lossless-compression-with-brotli/)
would seem valid here too:

> For Dropbox, any decompressor must exhibit three properties:
>
> 1. it must be safe and secure, even against bytes crafted by modified or hostile clients,
> 2. it must be deterministic—the same bytes must result in the same output,
> 3. it must be fast.

~~~
daniel_rh
SIMD support in Rust is still very early and requires unsafe code for SSE, and
lepton makes heavy use of SSE intrinsics.

Now, I do see that
[http://huonw.github.io/simd/simd/](http://huonw.github.io/simd/simd/) was
being developed in August 2015, but it seems to be gathering dust of late.

I really do wish that Rust would provide nice alignment guarantees (e.g. 32
byte) without depending on customizing the allocator, along with built-in,
safe SIMD instructions.

~~~
joelg236
> I really do wish that Rust would provide nice alignment guarantees (e.g. 32
> byte) without depending on customizing the allocator, along with built-in,
> safe SIMD instructions

The same could be said about C/C++. I'm guessing the answer is much simpler:
the author(s) are comfortable with C++. And they probably don't deploy much
Rust code yet.

------
ngry
Hmm, I'll have to try it on WMS/TMS tile storage for web cartography. That
also uses JPEG files, but with fixed sizes like 256x256. Maybe the predictor
needs tuning for that, because aerial imagery is a bit different from
smartphone photos.

------
ValentineC
Would someone be kind enough to explain how one could store the compressed
.lep files, serve them, and then have the browser render them, without using a
JavaScript library?

~~~
wongarsu
You would either need a JavaScript library (which doesn't exist yet) or you
would need to convince browser vendors to support lepton files natively.

In Dropbox's case they control both client and server and can just
compress/decompress in the dropbox client. Using lepton on websites wasn't
really Dropbox's goal (but it would be cool if a sufficiently fast JavaScript
library existed).

------
xchip
I love short articles where I learn something new :) Thank you!

------
deegles
Does Dropbox do block or file-level deduping?

~~~
bmalehorn
Block, it deduplicates based on 4MB blocks.

Source:
[https://news.ycombinator.com/item?id=2478595](https://news.ycombinator.com/item?id=2478595)
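
As a toy illustration of the fixed-block scheme (a sketch; the 4 MiB figure is
from the link above, everything else is made up):

```python
import hashlib

def dedup(data: bytes, block_size: int = 4 * 1024 * 1024):
    """Split data into fixed 4 MiB blocks; identical blocks are stored
    once, and a file becomes a list of block digests."""
    store, index = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).digest()
        store.setdefault(digest, block)  # duplicate blocks stored once
        index.append(digest)
    return index, store
```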

~~~
the_duke
I wonder if they use a rolling checksum too, to avoid duplicating a complete
file if only a few bytes shifted (for example, adding a line of text at the
beginning of a file).

The backup tool bup ([https://github.com/bup/bup](https://github.com/bup/bup))
does this.
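
For illustration, a toy content-defined chunker built on a polynomial rolling
hash (the constants are illustrative, not bup's actual parameters):

```python
BASE, MOD = 257, 1 << 32
WINDOW = 48                   # rolling window in bytes
MASK = (1 << 13) - 1          # ~8 KiB average chunk size
OUT = pow(BASE, WINDOW, MOD)  # weight of the byte leaving the window

def cut_points(data: bytes):
    """Yield chunk boundaries where the rolling hash hits a magic value.
    Inserting bytes near the start only shifts nearby boundaries, so the
    rest of the file still dedupes against previously stored chunks."""
    h = 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * OUT) % MOD
        if i + 1 >= WINDOW and (h & MASK) == MASK:
            yield i + 1
```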

~~~
aidenn0
They almost certainly do not, mostly because of how slow doing so is.

~~~
sliverstorm
It probably wouldn't hit the most important cases either; dedup is typically
most powerful & valuable on large media files, software packages, disk ISOs,
and the like, which do not frequently have arbitrary text inserted at the
start of the file!

------
mappu
So it's using the VP8 arithmetic coder - how does that stack up compared to
rANS?

~~~
daniel_rh
rANS is really cool, but I think we tried it too early in the lepton research
phase while some of the other ideas were still brewing... We might revisit
rANS now that v1.0 is out!

~~~
eln1
WebP is switching to rANS: [https://chromium-review.googlesource.com/#/c/338781/](https://chromium-review.googlesource.com/#/c/338781/)

Here is a superfast implementation of rANS:
[https://github.com/jkbonfield/rans_static](https://github.com/jkbonfield/rans_static)

------
are595
Interesting that they are using VP8 to compress the JPEGs. This is one degree
of separation away from Google's WebP [1]. It would be interesting to see how
they stack up (WebP has a lossy and lossless mode).

[1].
[https://developers.google.com/speed/webp/](https://developers.google.com/speed/webp/)

~~~
niftich
(EDIT: removed wording that said Lepton produces files that conform to the
JPEG spec. It doesn't. It losslessly compresses into a custom format that
losslessly decompresses into a JPEG.)

Lepton uses the arithmetic coder [1] from VP8. Using arithmetic coding instead
of Huffman encoding to get better compression was always an option in JPEG,
but it has been historically avoided due to patents [2].

Compared to VP8-Intra, the compression used in lossy WebP, JPEG is missing the
prediction step, usually called 'filtering' [3], which is the single largest
contributor of WebP's compression outperforming JPEG.

Reading through the Lepton blog post, it seems they're using a different
method of prediction, based on observations about typical gradients and
correlations between AC and DC coefficients. VP8 uses a more 'traditional'
approach of predicting your neighboring pixels, which was borne out of run-
length encoding, but also very applicable to video's moving macroblocks. A
comparison would indeed be enlightening.

[1]
[https://tools.ietf.org/html/rfc6386#section-7](https://tools.ietf.org/html/rfc6386#section-7)

[2]
[https://en.wikipedia.org/wiki/Arithmetic_coding#US_patents](https://en.wikipedia.org/wiki/Arithmetic_coding#US_patents)

[3] [https://medium.com/@duhroach/how-webp-works-lossly-mode-33bd...](https://medium.com/@duhroach/how-webp-works-lossly-mode-33bd2b1d0670)

------
Stanleyc23
Just a random idea: could machine learning algorithms that do object
recognition help improve the compression of images or videos? Maybe a lossy
algorithm could compress away "irrelevant" things. This way a high-resolution
frame might have lower-resolution objects inside, but it would be OK because
the important part of the content is preserved.

~~~
joeyo
Absolutely, yes. There is a very deep link between compression and
"understanding". I think we have every reason to believe that networks that
can understand/"explain away" the content/statistics of a scene ought to be
able to compress them better.

I presume someone (or likely many people) are working on exactly this.

~~~
Stanleyc23
Very cool! If anyone who comes across this particular thread knows about
papers/research being written on this topic, I'd be very interested to learn
more.

~~~
sapphireblue
There is an award-winning compressor that uses many statistical models (and a
3-layered dense neural network) to compress the data losslessly:
[http://www.byronknoll.com/cmix.html](http://www.byronknoll.com/cmix.html)

Also, there is a free book that contains descriptions of various common and
exotic compression formats:
[http://www.mattmahoney.net/dc/dce.html](http://www.mattmahoney.net/dc/dce.html)

Overall I think we have yet to see the full potential of deep learning
unleashed on data compression. For example, the neural network in the cmix
compressor is quite primitive compared to modern architectures. Someone will
certainly find a way to do better than that!

~~~
Stanleyc23
Ah, thanks. Really appreciate the lead. Do you yourself happen to have much
experience in the space of machine learning or compression?

------
mp3geek
Does it happen often that a company disables pull access on GitHub and
requires a EULA to be signed first?

------
orware
Are we able to take the EXE and run it on JPEGs from a Windows machine without
issue? (I haven't tried it myself, but probably will using the instructions I
saw on Github a moment ago and report back...it looks like it should work
though):
[https://github.com/dropbox/lepton/releases](https://github.com/dropbox/lepton/releases)

However, I am wondering about the two different EXEs available on the page
(one has an avx prefix that I need to try and figure out). If anyone has info
on that, it'd be useful.

I could see this potentially being useful for schools and businesses doing
quite a bit of digital scanning so I'm going to try running some tests using
it and some images I think we have available somewhere.

~~~
mappu
It should work fine.

The AVX version will be faster, but it requires a recent-ish CPU:
[https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPU...](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX)

~~~
orware
Thank you very much for the extra details mappu! That answers my question :-).

------
therealmarv
I want this working seamlessly on the file system. That means: I see JPGs,
but they are Lepton-compressed in reality. It would be a great use case for
existing file servers. How could this be (theoretically) achieved on a Linux
machine?

~~~
Strom
One way would be to write a FUSE filesystem [1] that does this.

[1]
[https://en.wikipedia.org/wiki/Filesystem_in_Userspace](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
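
A minimal read-only sketch using the third-party fusepy library, assuming a
`lepton` CLI that turns `photo.lep` back into `photo.jpg` (paths, caching, and
error handling are all simplified):

```python
import errno, os, stat, subprocess, sys, tempfile
from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

class LeptonFS(Operations):
    """Expose backing_dir/*.lep as virtual *.jpg files."""

    def __init__(self, backing_dir):
        self.backing = backing_dir
        self.cache = {}  # virtual jpg path -> decompressed bytes

    def _lep_path(self, path):
        base, _ = os.path.splitext(path.lstrip('/'))
        return os.path.join(self.backing, base + '.lep')

    def _jpeg_bytes(self, path):
        if path not in self.cache:
            if not os.path.exists(self._lep_path(path)):
                raise FuseOSError(errno.ENOENT)
            with tempfile.NamedTemporaryFile(suffix='.jpg') as out:
                # Assumed CLI usage: lepton <input.lep> <output.jpg>
                subprocess.check_call(['lepton', self._lep_path(path), out.name])
                self.cache[path] = out.read()
        return self.cache[path]

    def readdir(self, path, fh):
        return ['.', '..'] + [os.path.splitext(n)[0] + '.jpg'
                              for n in os.listdir(self.backing)
                              if n.endswith('.lep')]

    def getattr(self, path, fh=None):
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        return {'st_mode': stat.S_IFREG | 0o444, 'st_nlink': 1,
                'st_size': len(self._jpeg_bytes(path))}

    def read(self, path, size, offset, fh):
        return self._jpeg_bytes(path)[offset:offset + size]

if __name__ == '__main__':
    FUSE(LeptonFS(sys.argv[1]), sys.argv[2], foreground=True, ro=True)
```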

~~~
therealmarv
Now I'm getting very interested in writing a FUSE filesystem for that:
[http://www.cs.nmsu.edu/~pfeiffer/fuse-tutorial/](http://www.cs.nmsu.edu/~pfeiffer/fuse-tutorial/)

------
inian
It would be interesting to carry out a comparison with MozJPEG.

------
ptspts
Some statistics. TL;DR: Lepton indeed provides about 22% size savings.

Running Lepton on JPEG photos downloaded from Flickr, I got about 23.37% size
savings.

Running Lepton on JPEGs generated by mozjpeg (default settings: -quality 75,
progressive) from JPEG photos downloaded from Flickr, I got 22.63% size
savings.

The mozjpeg output is about 3.83 times smaller than the original JPEG photo,
on average.

------
fake-name
Am I the only person who spent the first paragraph or so very confused about
why Dropbox was messing with thermal cameras?

The FLIR Lepton is a thermal imaging sensor
([http://www.flir.com/cores/content/?id=66257](http://www.flir.com/cores/content/?id=66257)),
and has been for a few years now. They really should have used a different
name.

~~~
kayoone
Just because you know another thing with that name doesn't make it illegal to
use; the word has its own meaning. There is also a CMS called Lepton. Pretty
sure this will soon be the most popular product with the name anyway.

~~~
fake-name
"Illegal"? Wut?

It's not illegal, it's just _dumb_. Sure, you could implement your entire C++
application inside the `std` namespace, and I'm sure it'd work fine, but you
/shouldn't/. If you're going to start a project, at least google the name
first.

------
personjerry
Could someone explain to me how compression algorithms work? Shouldn't they
not work very consistently, by the pigeonhole principle?

~~~
daniel_rh
It's very easy to make a JPEG file that will not compress at all. Luckily it
might look like snow from a television set rather than a typical image
produced by a camera.

We live in a world where it is common for blue sky to occupy a portion of the
frame and green grass to occupy another portion of the frame. Since images
captured of our world exhibit repetition and patterns, there are opportunities
for lossless compression that focuses on serializing deviations from the
patterns.
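
The pigeonhole tradeoff is easy to see with any general-purpose compressor; a
quick illustration (zlib standing in for a real image coder):

```python
import os, zlib

noise = os.urandom(1 << 20)                     # "TV snow": near-random bytes
sky = bytes([80, 130, 210]) * ((1 << 20) // 3)  # flat blue sky: pure pattern

print(len(zlib.compress(noise)) / len(noise))   # ~1.0003: slightly *larger*
print(len(zlib.compress(sky)) / len(sky))       # <0.01: almost free
```

Some inputs must grow, by the pigeonhole principle; compressors just arrange
for those to be the inputs that rarely occur in practice.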

~~~
GrantS
Impressive work, Daniel! Do I understand correctly that any image prediction
for which the deltas are smaller in absolute value than the full JPEG/DCT
coefficients would offer continued compression benefits? As in, if you could
"name that tune" and predict the rest of the entire image from the first few
pixels, the rest of the image would be stored close to free (and if not, it
essentially falls back to regular JPEG encoding).

If that's the case, then not only could we rely on the results of everything
we've decompressed so far to use for prediction (which is like one-sided image
in-painting), but we also could store a few bits of semantic information (e.g.
from an image-net-based CNN, from face detection) about the content of the
original image before re-compression, and use that semantic information for
prediction as well via some generative model. All of this would obviously be
trading computation for storage/bandwidth, but this seems like an exciting
direction to me. Again, nice work.

~~~
daniel_rh
Hi GrantS: That's pretty close to how it works. We always use the new
prediction, even where it's worse than JPEG (very rarely), to stay consistent.

As for having a mega-model that predicts all images better: well, it turns out
that with the lepton model you only lose a few tenths of a percent by training
the model from scratch on each image individually. We have a test case for
training a global model in the archive
([https://github.com/dropbox/lepton/blob/master/src/lepton/tes...](https://github.com/dropbox/lepton/blob/master/src/lepton/test_custom_table.sh)).
It trains the "perfect" lepton model on the current image, then uses that same
model to compress the image. (It's not meant to be a fair test, but it gives
us a best-case scenario for potential gains from a model that has been trained
on a lot of images.) Even in a controlled situation like the test suite, it
doesn't gain much.

However, the idea you mention here may still be a good one for a hypothetical
model--but we haven't identified that model yet.

------
jmspring
There were techniques discussed as part of the JPEG-2000 efforts where just
reordering coefficients before doing the entropy coding would gain you a good
deal of compression (though at the expense of the block based nature of JPEG).

It's always good to see new techniques out in the open.

------
nogridbag
Does anyone know how this compares to jpegoptim?

[https://github.com/tjko/jpegoptim](https://github.com/tjko/jpegoptim)

I'm hitting the limits in my OneDrive account and jpegoptim seemed to reduce
my photos quite a bit.

~~~
ptspts
jpegoptim has lossless and lossy modes. In lossless mode it preserves all
pixels, but it doesn't preserve the file itself. Lossless jpegoptim is
comparable to Lepton. In general, Lepton tends to give bigger improvements,
because it uses a different output file format. How much better depends on
your input files; you should try both.

I'd say 22% for Lepton and 5% for jpegoptim, based on fading past memories of
mine.

~~~
nogridbag
Thanks. I posted that before trying out Lepton, not realizing the output
artifact is no longer a JPEG.

------
ipsin
I wonder if Dropbox is going to do steganography detection before using Lepton
compression, because "pixel-for-pixel identical" is obviously not the same as
"byte-for-byte".

------
voltagex_
I could imagine Flickr being very interested in this, if they survive.

------
pdknsk
Similar:
[https://github.com/packjpg/packJPG](https://github.com/packjpg/packJPG)

------
raverbashing
Very interesting, but it's not clear to me how estimating the DC components
makes it _guaranteed_ lossless.

~~~
usrusr
You store the delta to the estimate. If the estimate is bad (e.g. because
someone designed content for maximum surprise of the estimator function),
compression rate goes down, not quality.
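
A minimal sketch of why that's lossless by construction (a toy one-step
predictor, nothing Lepton-specific):

```python
def encode(values, predict=lambda prev: prev):
    """Store each value as its delta from the prediction."""
    prev, residuals = 0, []
    for v in values:
        residuals.append(v - predict(prev))
        prev = v
    return residuals

def decode(residuals, predict=lambda prev: prev):
    prev, values = 0, []
    for r in residuals:
        prev = predict(prev) + r
        values.append(prev)
    return values

vals = [100, 101, 103, 40, 41]       # "surprising" jump at index 3
assert decode(encode(vals)) == vals  # exact round-trip, always
# A bad predictor only makes residuals large (worse compression), never wrong.
```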

------
amelius
When reading the title I thought this was about compressing images originating
from a particle accelerator :)

------
mrcactu5
Are there public data sets of images at a scale where 15MB/s matters?

------
S3curityPlu5
Where's the code?

------
jrcii
That's nothing compared to LenPEG 3
[http://www.dangermouse.net/esoteric/lenpeg.html](http://www.dangermouse.net/esoteric/lenpeg.html)

> For the standard test image the new LenPEG 3 compresses the image so
> efficiently that data storage space is actually freed on the computer right
> up to the entire capacity of the storage devices

~~~
ausjke
don't get this one, where is the code and how can I try it out? if it's so
good I would assume dropbox have found it instead of doing lepton.

~~~
capnhooke
"When presented with an image, the LenPEG 3 algorithm uses the following
steps:

Is the image Lenna? If yes, delete all other data on the computer's storage
devices. If no, proceed to the next step..."

------
Dzugaru
\---

~~~
plus
They are seeing a 22% lossless compression of already lossily-compressed
JPEGs, though.

~~~
Dzugaru
Ah, my bad. Disregard everything :)

