
Google offers JPEG alternative for faster Web - ukdm
http://news.cnet.com/8301-30685_3-20018146-264.html
======
lmkg
Wait, so Google took a bunch of JPGs, re-compressed them with a lossy format,
and claims to have gotten them 40% smaller with no loss in quality. Either the
lossiness in the format is _entirely_ in aspects of the image that the JPG
compression already removed and they're just better at compressing the residue
(possible, but I am skeptical), or else the image is altered, but in a way
that Google claims is subjectively a sidegrade in quality. I'm not putting
much faith in their quality claims until I see side-by-side comparisons of a
JPG and WebP compression of the same TIFF (or other uncompressed) image at the
same compression ratio. A double-blind study is probably too much to ask, but
it would be nice.
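
For what it's worth, the comparison I'm asking for is easy to script. Here's a sketch with Pillow and NumPy (the file name is hypothetical, and PSNR is a crude stand-in for a double-blind study since it isn't perceptual):

```python
import io
import numpy as np
from PIL import Image

def encode_jpeg_at_budget(img, max_bytes):
    """Binary-search JPEG quality so the encoded file fits the byte budget."""
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= max_bytes:
            best, lo = buf.getvalue(), q + 1   # fits: try higher quality
        else:
            hi = q - 1                         # too big: lower quality
    return best

def psnr(a, b):
    """Peak signal-to-noise ratio in dB between two uint8 arrays."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("source.tif").convert("RGB")    # hypothetical source
jpeg_bytes = encode_jpeg_at_budget(original, 40_000)  # 40 kB budget
decoded = Image.open(io.BytesIO(jpeg_bytes)).convert("RGB")
print("JPEG at 40 kB: %.2f dB PSNR" % psnr(np.array(original), np.array(decoded)))
# Do the same with a WebP encoder at the same byte budget and compare.
```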

~~~
ebiester
I agree that we should expect side-by-side examples to support the claims, but
my first thought is that they're analyzing the JPEG compression and using
improvements in computing power to encode the information more efficiently.
Consider that JPEG uses one (or more) Huffman tables and one (or more)
quantization tables. (I'm getting this from Wikipedia; IANA compression
expert.)

What if you analyzed all those images and came up with a composite Huffman
table that was more efficient than the best guess made when JPEG was designed?
Then you could do some magic on the quantization table to make the most common
values correspond to the lowest numbers, relying on processing power to decode
the compressed quantization table before you started.
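
Building such a table from corpus statistics is straightforward. A toy sketch (my own illustration: real JPEG Huffman tables code run-length/size symbols, and the symbols and counts here are made up):

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bitstring) from frequency counts."""
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees...
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}   # ...merge them,
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Pretend these are coefficient symbols counted across a large image corpus:
corpus_stats = Counter({"EOB": 50, "(0,1)": 30, "(0,2)": 12, "ZRL": 5, "(1,1)": 3})
for sym, bits in sorted(huffman_code(corpus_stats).items(), key=lambda kv: len(kv[1])):
    print(sym, "->", bits)   # frequent symbols get the shortest codes
```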

~~~
jodrellblank
I note that WinZip 14 does decode JPGs and re-encode them smaller, but without
further loss of quality:

"The main trick those three programs use is to (partially) decode the image
back to the DCT coefficients and recompress them with a much better algorithm
than default Huffman coding." - <http://www.maximumcompression.com/data/jpg.php>

~~~
Lerc
Most lossy image compression systems use a transform that does not reduce the
data size but makes the transformed data more compressible. This means the
final layer of compression is a traditional compressor. These, like the
transforms themselves, get improved upon over time. JPEG is old.

When I did some experiments with various compression techniques, I found that
DCT with an LZMA back end compared quite well to newer compression systems.
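
For anyone who wants to reproduce that kind of experiment, here's a rough sketch with NumPy/SciPy (my own setup, not Lerc's actual code): 8x8 block DCT as the transform, coarse uniform quantization as the lossy step, and LZMA in place of JPEG's Huffman stage.

```python
import lzma
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2D type-II DCT (orthonormal) of one 8x8 block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def transform_and_quantize(img, q=16):
    """Block DCT + uniform quantization; returns int16 coefficients."""
    h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
    img = img[:h, :w].astype(np.float64) - 128   # level shift, as JPEG does
    out = np.empty((h, w), dtype=np.int16)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            out[y:y+8, x:x+8] = np.round(dct2(img[y:y+8, x:x+8]) / q)
    return out

# A smooth synthetic "photo" so there's actually structure to exploit:
yy, xx = np.mgrid[0:256, 0:256]
image = (128 + 100 * np.sin(xx / 20.0) * np.cos(yy / 20.0)).astype(np.uint8)

coeffs = transform_and_quantize(image)
packed = lzma.compress(coeffs.tobytes())
print("raw: %d bytes, DCT+quantize+LZMA: %d bytes" % (image.nbytes, len(packed)))
```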

------
jomohke
Have they explained their reasons for not backing JPEG-XR instead? This seems
to be a step back from JPEG-XR:

\- No alpha support

\- No lossless support (useful for the alpha channel, and it could encourage
more cameras to support lossless images)

\- No HDR support

JPEG-XR also allows different regions of the image to be encoded
independently. See Wikipedia for more features:
<http://en.wikipedia.org/wiki/JPEG_XR>

I have no idea what the patent landscape for JPEG-XR is, but I'd be
disappointed if we replaced JPEG and didn't get some of these features.

The lack of alpha support in JPEG is especially a pain for web developers. PNG
does not do photos well.

~~~
roel_v
I've seen the 'no alpha' repeated a few times, but can you give your use cases
for alpha _in jpeg_? I understand the need for alpha channels in gif/png, and
have fought the lack of alpha support for years, but for jpg's I don't
remember ever needing it, nor can I easily imagine a situation where it would
be needed. So I'm interested to learn about where you'd use transparency in
jpgs.

~~~
ithkuil
Why not? I mean, why should I be forced to use a lossless image format just
because I want it to blend with the background (if the image compresses better
with JPEG and the image quality is fine)?

Are there technical reasons not to implement an alpha channel in JPEG-like
compression?

~~~
roel_v
Features shouldn't be there 'just because it's possible', at least not if they
may impact other features(1). If greater compression can be reached by leaving
out the alpha, and if there's no compelling reason to put it in, it should be
left out. 'Why not' is not a reason; there needs to be a business case for
each feature, in everything.

In my experience, and this seems to be a widely held position, the main use
case for JPGs is pictures, as in photographs. The main use case for PNG (GIF)
is graphical elements: borders, menus, etc. Those last ones you want to
compress with a lossless format anyway - you need to be sure that a flat menu
background isn't dithered and doesn't have other artifacts. I understand the
question mostly as 'do you need transparency in photographs' and 'do you need
non-rectangular photographs where the non-rectangular nature is encoded in the
photograph itself, and not part of another rendering stage in the presentation
layer'.

Thinking about it more, maybe things like drop shadows or other fancy borders
could be a case where you need transparency in photos. Otherwise you have to
work around it by having the picture as a JPG and the border as a separate (or
several separate) PNGs. More requests, harder layout, etc. I'm not convinced
yet that this use case alone is a compelling argument.

As for technical reasons not to implement it, I don't know - I'm assuming
there are some, because I'm quite sure that someone at Google must have
thought about it and decided against it; they must have had their reasons.

(1) I'm reasoning from the assumption that including transparency has adverse
effects on file size and/or decompression complexity. Maybe it doesn't, in
which case balancing features becomes a different matter and most of my
argument is moot.

------
goalieca
I was hoping JPEG 2000 would take over because it is so flexible. It encodes
really well at the low end and really well at the high end. You can target a
specific file size and the encoder will produce exactly that. Instead of the
blocking artifacts caused by the 8x8 DCT grid, you get smooth blurring.
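
For example, a minimal sketch assuming Pillow built with OpenJPEG support (the quality_mode/quality_layers parameter names are from Pillow's JPEG 2000 plugin as I recall them, so treat them as an assumption and check your Pillow docs):

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")   # hypothetical input file
# "rates" mode takes target compression ratios; 40 means 40:1, i.e. the
# encoder truncates its embedded bitstream to hit that size in one pass,
# with no quality-setting search needed.
img.save("photo.jp2", quality_mode="rates", quality_layers=[40])
```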

~~~
Goosey
[http://upload.wikimedia.org/wikipedia/commons/5/51/JPEG_JFIF...](http://upload.wikimedia.org/wikipedia/commons/5/51/JPEG_JFIF_and_2000_Comparison.png)

JPEG looks better to me. It's only a sample size of 1, but it's the most
important data point to me. ;)

~~~
Groxx
I've seen that picture before, and it's pretty indicative of JPEG 2000 at
lower settings, yes. And I agree - the sharper, more accurate edges of 2000
make the less detailed, smoother-textured interiors look disproportionately
worse.

------
jdavid
This has other implications.

* a saving in bandwidth is huge for mobile speed; Google believes that speed affects web use, and web use affects revenue

* a saving in bandwidth is cheaper for Google

* AT&T, Verizon, Sprint, and T-Mobile are limiting mobile data plans; smaller images mean more page loads per plan

* net neutrality might fail, and you might have to pay for data

* Google runs a lot of content via App Engine, Gmail + Chrome; Google should be able to make the switch for the stacks they own and develop an advantage.

* others, like Facebook, will follow in adoption if it saves them on one of their largest costs: CDNs.

* openness: an open format can go on more devices.

* open devices might appear faster on the web.

------
jmspring
JPEG (like many image/video coding algorithms) is really made up of a few
pieces -- transform, modeling, and entropy coding. In the case of JPEG, the
transform is handled by breaking the image up into eight-by-eight blocks that
are then run through the DCT and quantized. This is where the loss comes
from.

Modeling and entropy coding are handled on the coefficients generated above.
However, this is done on each 8x8 block (note: I am making a slight
simplification by ignoring the use of differential compression on the DC
coefficients between blocks). Since the algorithm is relegated to encoding at
most 64 coefficients at a time, there isn't much "modeling" that can be done.

If one reorders the coefficients of the 8x8 blocks to resemble a
non-block-based transform, you can perform better modeling and get much
better compression with the exact same image quality as the original JPEG
image. However, in this case you lose compatibility with a JPEG decoder,
since the format of the coefficients is no longer JPEG.
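
Here's a rough NumPy sketch of that reordering (my own illustration, not any particular codec's format): coefficient k from every block is gathered into its own plane, one plane per frequency band, so each plane becomes one large, statistically coherent signal for a modeler to work with. The round trip is exact, so the reordering itself adds no loss.

```python
import numpy as np

def blocks_to_bands(coeffs):
    """(H, W) block-layout coefficients -> 64 planes of shape (H/8, W/8)."""
    h, w = coeffs.shape
    planes = coeffs.reshape(h // 8, 8, w // 8, 8).transpose(1, 3, 0, 2)
    return planes.reshape(64, h // 8, w // 8)

def bands_to_blocks(planes, h, w):
    """Inverse reordering: an exact round trip, so no extra loss is added."""
    p = planes.reshape(8, 8, h // 8, w // 8).transpose(2, 0, 3, 1)
    return p.reshape(h, w)

coeffs = np.random.default_rng(1).integers(-8, 8, (64, 64), dtype=np.int16)
bands = blocks_to_bands(coeffs)
assert np.array_equal(bands_to_blocks(bands, 64, 64), coeffs)  # lossless
# bands[0] is now every block's DC term as one 8x-downsampled "image";
# each bands[k] can be modeled as a coherent signal instead of 64 values
# at a time.
```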

~~~
eru
One simple way to throw more CPU at image compression/decompression, and thus
save space, would be to use blocks bigger than 8x8.

------
bediger
Won't the on-line Porn Industry have to adopt this for large scale adoption to
take place?

I'd think that the size decrease alone would sell the Porn Industry.

~~~
mhd
Are galleries that important? I thought right now it's all about videos.

~~~
barrkel
Soft / "art" sites focus much more on stills, and headline pixel counts, e.g.
met-art etc. I would expect them to prefer large file sizes, to aggravate
people who like to scrape content.

~~~
pjscott
I would expect them to prefer smaller file sizes, especially in public preview
galleries, to lower their bandwidth costs. Someone scraping high-quality
images incurs a small one-time bandwidth cost, and they can re-encode the
images at a lower quality if they want to save bandwidth.

------
metamemetics
Just a note to anyone using Adobe software to produce their web PNGs: Make
sure to run PNGCrush to remove all extraneous information in them!
<http://pmt.sourceforge.net/pngcrush/index.html>
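
If you want to batch it, something like this works (a sketch; the directory is made up, and the -rem alla / -reduce flags are from memory, so check `pngcrush -help` on your build):

```python
import pathlib
import subprocess

for png in pathlib.Path("site/images").glob("*.png"):   # hypothetical dir
    out = png.with_suffix(".crushed.png")
    # -rem alla strips ancillary chunks (keeping transparency);
    # -reduce attempts lossless bit-depth/color-type reduction.
    subprocess.run(["pngcrush", "-rem", "alla", "-reduce", str(png), str(out)],
                   check=True)
    print("%s: %d -> %d bytes" % (png.name, png.stat().st_size, out.stat().st_size))
```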

~~~
dagw
Just be sure to test your crushed PNGs under all the situations in which
they'll be used. I've had a few situations where PNGCrushed files couldn't be
opened by certain programs, but the uncrushed files could. In particular, the
Python Imaging Library (PIL) tends to choke on files that have been run
through PNGCrush.

------
acdha
I'd be very curious to see a comparison with JPEG-XR, which has a number of
nice advantages for photography.

~~~
confuzatron
I've been googling for examples of that format to no avail. I just wanted to
see if Google's browser supports it.

------
shawndumas
Not sure about this one, Google. Firstly: Worst. Name. Ever! Secondly, what
kind of browser support do they think they'll get? I know IE is covered via
Google Chrome Frame, but will Apple and Mozilla jump on this? Both issues
have soured me on this one.

~~~
njharman
Chrome is a given, and Firefox is open source (and isn't WebKit too?). Google
has the devs to write the needed code/extension and give it to those projects.

~~~
shawndumas
Okay, I'll buy that Chrome and IE are a shoo-in, and that FF is probably easy.
So that leaves Apple and Opera. Not bad, not bad...

~~~
redrobot5050
Chrome and Safari both use WebKit. If Google releases an implementation, it
shouldn't be hard for Apple to adopt it. That just leaves Opera.

~~~
Tomek_
AFAIK Opera supports WebM; I assume they wouldn't have a problem doing the
same with WebP. IE and Safari might be a different story, though; they
wouldn't be so keen to support a format from their big competitor. If I were
in MS's shoes I would use this as an occasion to put support for JPEG XR into
Chrome/WebKit.

------
jessriedel
As a regular person, I really can't see a 40% decrease in size (of which I'm
skeptical) for _just_ jpeg images (not nearly the full "65% of the bytes on
the web") being worth the huge switch-over costs. The ubiquity of jpeg is just
too valuable.

~~~
wmf
If you understand how to use content negotiation (which basically no one
outside Google does) then there's virtually no switching cost.

~~~
jessriedel
I mean cost beyond just web development. As mentioned in the article, there is
a massive range of consumer and non-consumer products which have adopted jpeg
as a universal format.

~~~
wmf
So those devices just keep on using JPEG and thus bear no additional cost.
Even if WebP is only used between Google and Chrome, it will be worth it for
Google.

------
jbarham
Given that most JPEG images are generated by digital cameras, I don't think
WebP will get any traction until Canon, Nikon et al support WebP natively.

And hopefully attaching metadata to WebP images will be saner than it is for
JPEGs.

~~~
pjscott
It shouldn't be too long before Android phones start supporting WebP natively,
and image-hosting sites like Picasa and Flickr already do re-encoding to lower
the image size; adding WebP to that shouldn't be a serious problem. I can see
this getting traction even without support from most digital cameras.

As for metadata, the container format is based on RIFF, which consists of
tagged binary chunks. Metadata chunks follow the image data, and consist of
null-terminated strings. No word yet on whether or not you can use Unicode.

<http://code.google.com/speed/webp/docs/riff_container.html>
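
The container is simple enough that a chunk walker fits in a few lines. A sketch based on the RIFF layout that doc describes (a 12-byte "RIFF"<size>"WEBP" header, then chunks of 4-byte tag, little-endian uint32 length, and payload padded to an even size; the file name is hypothetical):

```python
import struct

def riff_chunks(path):
    with open(path, "rb") as f:
        riff, _size, form = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and form == b"WEBP", "not a WebP file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            tag, length = struct.unpack("<4sI", header)
            yield tag, f.read(length)
            if length % 2:        # chunks are padded to even offsets
                f.read(1)

for tag, data in riff_chunks("example.webp"):   # hypothetical file
    print(tag.decode("ascii", "replace"), len(data), "bytes")
```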

~~~
jbarham
> No word yet on whether or not you can use Unicode.

You should be able to use UTF-8 since it doesn't include embedded nulls.

~~~
pjscott
Sure, but I'm worried about the guy who decides that his decoder will use
ISO-8859-1 by default, or UTF-16 if that's specified in some meta-metadata
chunk somewhere. If this were written down in a spec, we wouldn't have to fret
about it.

------
pmjordan
Looks like they didn't bother to seize the opportunity to add an alpha
channel. Being stuck with PNG for transparent/translucent images sucks.

~~~
thesz
Lossy compression for the alpha channel isn't that good an idea, I think.
Brightness and color compression artefacts will be multiplied by alpha
compression artefacts.

~~~
kevingadd
Then provide baseline support for an uncompressed alpha channel. Not being
able to store one at all in an image limits the usefulness of the format.

Artifacts for lossy alpha compression are already, to some extent, well
understood and dealt with, since compressing video game textures that contain
an alpha channel is already done lossily using the DXTC compression formats.
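
For reference, here's a toy Python version of the DXT5-style alpha mode (my own simplification: real encoders search endpoints more cleverly, and I'm skipping the second, 6-interpolant mode). Each 4x4 block of alpha becomes two 8-bit endpoints plus sixteen 3-bit palette indices, 8 bytes total, a fixed 2:1 ratio versus raw 8-bit alpha:

```python
def encode_alpha_block(alphas):
    """alphas: 16 ints in 0..255 (one 4x4 block). Returns (a0, a1, indices)."""
    a0, a1 = max(alphas), min(alphas)      # endpoints: block max and min
    if a0 == a1:
        return a0, a1, [0] * 16
    # Palette for the a0 > a1 mode: endpoints plus six interpolated steps.
    palette = [a0, a1] + [((7 - i) * a0 + i * a1) // 7 for i in range(1, 7)]
    indices = [min(range(8), key=lambda k: abs(palette[k] - a)) for a in alphas]
    return a0, a1, indices

def decode_alpha_block(a0, a1, indices):
    palette = [a0, a1] + [((7 - i) * a0 + i * a1) // 7 for i in range(1, 7)]
    return [palette[k] for k in indices]

block = [255, 250, 240, 200, 180, 160, 140, 120, 100, 80, 60, 40, 30, 20, 10, 0]
a0, a1, idx = encode_alpha_block(block)
print("max error:", max(abs(x - y)
                        for x, y in zip(block, decode_alpha_block(a0, a1, idx))))
```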

------
drv
There's no way this will catch on; the slight image quality per bit
improvement is not nearly enough to counteract the huge momentum of existing
JPEG use. Certainly JPEG isn't the best possible image codec or even all that
good, but it's good enough, and it works _everywhere_.

------
nphase
But there is a loss in quality! The dithering is noticeably worse. Notice the
light outline on that red-ish thing in the top middle of the image. I bet I
could've gotten the 10kb decrease by just lowering the JPG quality.

~~~
mayank
Umm...how can you claim a loss in quality without looking at the original? The
linked page shows two lossily compressed versions of an unknown original
image. You're commenting on which image subjectively LOOKS better to you, not
on compression quality.

~~~
Tomek_
Take a look at those red and orange "things" in the center of the image in the
PNG/WebP version. My bet is on a loss in quality.

------
MikeCapone
Our CPUs are so much faster than even a few years ago, but bandwidth hasn't
increased that much (at least not here in Canada). I'd gladly trade some CPU
cycles for bandwidth (or same bandwidth, but better quality).

------
igrekel
The article is quite light so far. I assume the format brings improvements
other than just a reduced file size. I would hope that at least some features
of JPEG2000 make it into this format. Maybe also a convenient way to pack
several images in a single file, without resorting to CSS clipping tricks.

------
Terretta
Taking 8 times longer to compress, this reminds me of Iterated Systems' FIF,
but the article's claim that it's based on WebM suggests it's still DCT
compression.
Adoption by just Flickr and Facebook could push a new image format fast.

Google has much to gain since they archive a copy of indexed images. Hence
their interest in "recompression".

------
gshayban
With OLED/IPS etc. taking over in the next few years, will this really support
>8bpp or HDR?

Interesting move, Google.

~~~
jbarham
WebP uses the same color model as WebM, which "works exclusively with an
8-bit YUV 4:2:0 image format", so it seems WebP will not be HDR-capable,
which is a pity.
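
To make the limitation concrete, here's a quick NumPy sketch of what 8-bit YUV 4:2:0 does to an image (my own illustration using BT.601 coefficients, not libvpx's actual code): luma at full resolution, one U and one V shared per 2x2 pixel block, everything clamped to 8 bits, which is exactly where >8bpp/HDR data would die.

```python
import numpy as np

def rgb_to_yuv420(rgb):
    """rgb: (H, W, 3) float array in 0..255, with H and W even."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma
    u = 0.492 * (b - y)                     # chroma at full resolution...
    v = 0.877 * (r - y)
    # ...then averaged over 2x2 blocks: one U and one V per four pixels.
    u2 = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v2 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))
    # The 8-bit quantization is where any extra dynamic range is lost:
    return (np.round(y).clip(0, 255).astype(np.uint8),
            np.round(u2 + 128).clip(0, 255).astype(np.uint8),
            np.round(v2 + 128).clip(0, 255).astype(np.uint8))
```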

~~~
Maskawanian
Considering it's basically one frame of WebM, it isn't exactly a surprise
either.

------
sbarre
I love this site! Reading the comments here has taught me more about image
compression in 45 minutes than I've learned from reading random articles on
the web for the last few years...

------
lazugod
There's a conversion tool available for WebP now:
<http://code.google.com/speed/webp/download.html>

~~~
pornel
They only have a Linux binary. I've compiled one for Mac OS X:
<http://pornel.net/webp>

------
bartl
Why is the performance of PNG so disappointing? The WebP sample image from the
article (top image) is shown here as a PNG of 234kB...

~~~
FooBarWidget
I'm surprised that in 2010 people _still_ don't understand the difference
between PNG and JPG and that each is better for different kinds of images.

~~~
jodrellblank
_There are no surprising facts, only models that are surprised by facts; and
if a model is surprised by the facts, it is no credit to that model.

It is always best to think of reality as perfectly normal. Since the
beginning, not one unusual thing has ever happened.

The goal is to become completely at home with [a world where people don't
understand the difference between PNG and JPG in 2010]. Like a native.
Because, in fact, that is where you live._ \- (paraphrased)
<http://lesswrong.com/lw/pc/quantum_explanations/>

\--

 _Calling reality "weird" keeps you inside a viewpoint already proven
erroneous. Probability theory tells us that surprise is the measure of a poor
hypothesis; if a model is consistently stupid - consistently hits on events
the model assigns tiny probabilities - then it's time to discard that model. A
good model makes reality look normal, not weird; a good model assigns high
probability to that which is actually the case. Intuition is only a model by
another name: poor intuitions are shocked by reality, good intuitions make
reality feel natural. You want to reshape your intuitions so that the universe
looks normal. You want to think like reality.

This end state cannot be forced. [..] But it will also hinder you to keep
thinking How bizarre! Spending emotional energy on incredulity wastes time you
could be using to update. It repeatedly throws you back into the frame of the
old, wrong viewpoint. It feeds your sense of righteous indignation at reality
daring to contradict you._ \- <http://lesswrong.com/lw/hs/think_like_reality/>

~~~
CamperBob
_Whenever I hear someone describe quantum physics as "weird" - whenever I hear
someone bewailing the mysterious effects of observation on the observed, or
the bizarre existence of nonlocal correlations, or the incredible
impossibility of knowing position and momentum at the same time - then I think
to myself: This person will never understand physics no matter how many books
they read._

Well, that rules out Einstein.

~~~
jodrellblank
No it doesn't; Einstein died years before that author was even born. He will
never hear Einstein bewailing anything.

~~~
CamperBob
Point being, Einstein spent a large part of his career completely unable to
deal with the sheer weirdness of quantum physics. ("God does not play dice,"
"spooky action at a distance," and so forth.)

Ultimately he was able to adapt his worldview to include the implications of
quantum theory, but until then he was most certainly not in a state where he
would "never understand physics."

It was a great essay, actually, just a terrible lede, as EY himself
acknowledged in the comments.

------
sudonim
Please, if anyone can figure out a format so my mom doesn't attach 10 MB
files that should be 750 KB, I'm all for it.

~~~
dkarl
An actually helpful suggestion (I hope): help her change the settings on her
camera. Explain that she'll be able to take more pictures without filling up
the memory.

------
sswam
The JPEG shown is of higher fidelity than the WebP image; look around the top
edge of the red quadrilateral. In the JPEG image the edge is sharp; in the
WebP image it looks a bit like the coloured sprinkles you might put on
ice cream. It would be better to show two versions of an image at the same
file size, so that we can look for any difference in quality.


~~~
sswam
I suspect the improvement is not big enough for this image format to be
adopted when JPEG is already established.

