BPG Image format (bellard.org)
843 points by mjs on Dec 5, 2014 | 328 comments



The big story here is that this introduces a new image file format without requiring you to upgrade your browser or download a plugin. Those aren't PNGs of representative images you're looking at - that's BPG decoding happening in your browser.

So we don't like HEVC due to patent worries? Fine, we can swap in another format and use the same technique.

We don't have an I-frame format for Daala that's finalized yet? Fine, we can use work-in-progress. The format does not need to be finalized. If you update it, you rebuild your images and bundle a new JavaScript decoder.

The ability to ship media along with a sandboxed, runnable decoder is awesome, and I'm surprised it hasn't caught on until now. I remember Google implementing their own "video" codec in JavaScript a while back, using JPEGs and primitive inter-frame processing, precisely because there wasn't a universal video codec format they could use.


Does the javascript decoder download the images, or does the browser do that and the javascript simply decodes them? If it downloads them itself, then any parallel fetching by browsers will stop working. And if they are decoded by javascript, they will take (very slightly) longer, and both the encoded and decoded images need to be held in memory, in both javascript and the browser. Are they decoded to .bmp, .jpg, .png? Can CSS reference them without javascript changing the styles? Can they be cached locally if downloaded directly?

If you needed this for any page that loaded lots of images, all of the above would significantly increase page-load time and memory usage. Especially on a mobile browser, it would use more resources (hence more battery) than otherwise. So personally I wouldn't like to see this be the future. Maybe just as a fallback, but detection for that might be a little hard.


>> Does the javascript decoder download the images, or does the browser do that, then the javascript simply decode it?

Right now, with a hard reload of the demo http://bellard.org/bpg/lena.html on Chromium, it would appear that the BPG images are initially loaded with the page, then the post.js file scans through the page's images to find the ones whose src ends in '.bpg', then post.js re-downloads the same files with ajax, which are then decoded into PNGs rendered in canvas tags with no identifying information.

The extra request to re-download images is unnecessary but easily removed to just process the data loaded with the initial page load.

>> Can CSS reference them without javascript changing the styles?

Not from what I can see. The images are being rendered to canvas elements without any identifying information. The decoder will need to be modified to add identifying ids or classes. This again is an easy fix.

>> Can they be cached locally if downloaded directly?

This is what I perceive to be an issue. Try downloading the image rendered for the 5 kB bpg example... The image that gets saved is a 427.5 kB PNG, and that's for a 512x512 image.

PNGs aren't lossy... so any difference in file size is going to be the product of rendering size or how much the bpg compression has simplified the source image. ( I'm guessing the file size follows something like a logarithmic curve where it jumps initially based on rendered image dimensions and then approaches a limit as the compression approaches lossless. )
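For reference on the numbers (my own back-of-the-envelope figuring, not from the demo): a 512x512 RGBA canvas is 512 × 512 × 4 ≈ 1 MB of raw pixels, so a 427.5 kB PNG is just those decoded pixels after PNG's lossless filtering and deflate. Its size reflects how compressible the decoded output happens to be, not the 5 kB the BPG took over the wire.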

Because of the rendered PNGs' large file size, I would expect that if you are rendering large images, or a lot of them, you would definitely use more resources than with comparable-quality JPGs, regardless of whether you cache the BPGs for rendering, and you would indeed experience the drawbacks you mentioned in both memory and page-load time.

That being said, this is a cool idea, an incredible proof of concept and I'm very thankful to Fabrice for putting it out there.


>> The extra request to re-download images is unnecessary but easily removed to just process the data loaded with the initial page load.

Are you sure you are not inspecting with the cache disabled? If the appropriate cache headers are applied, this shouldn't issue another request.
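(For example, serving the .bpg files with something like Cache-Control: max-age=86400 plus an ETag should let the later XHR be answered from the HTTP cache instead of going back to the network; the exact policy is up to the server, of course.)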


A quick glance makes it look like it grabs it by URL in the decoder, so I think it is doing another download. That said, it's probably not too bad since the browser should cache the image also.


If you're curious how it works, untar http://bellard.org/bpg/libbpg-0.9.tar.gz and have a look at the post.js file.


I'm a designer, can't read code, and I'm trying to see if I should evangelize this at work. It would be better if you could communicate with us non-technical folk and answer the previous guy's question so that we other designers can also understand.


it waits till the page is loaded (window.onload)

grabs all image urls of a page

checks which have .bpg extensions

replaces all these .bpg img tags with canvas tags

loads all the images with Ajax and renders them into the canvas tags
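In rough JavaScript, the approach looks something like this (a sketch of the idea, not the actual post.js; decodeBpgToCanvas is a stand-in for whatever the real decoder exposes):

    window.onload = function () {
      // Snapshot the live collection first, since nodes get replaced below.
      var imgs = Array.prototype.slice.call(document.getElementsByTagName('img'));
      imgs.forEach(function (img) {
        if (!/\.bpg$/i.test(img.src)) return;
        var canvas = document.createElement('canvas');
        canvas.width = img.width;    // assumes width/height attributes are set
        canvas.height = img.height;
        img.parentNode.replaceChild(canvas, img);
        var xhr = new XMLHttpRequest();
        xhr.open('GET', img.src, true);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
          // Hypothetical call: the real decoder is the Emscripten-compiled libbpg.
          decodeBpgToCanvas(new Uint8Array(xhr.response), canvas);
        };
        xhr.send();
      });
    };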


If you're a designer, and don't understand the issues involved, you probably should refrain from evangelizing sub-1.0 versioned products.


Sounds like a plan. I don't need to be an early adopter


Polyfills have been around for a while; this isn't the first one. e.g. http://webpjs.appspot.com/ http://badassjs.com/post/12035631618/broadway-an-h-264-decod...


If you want designers to start using this, then we need a Photoshop plugin, an Illustrator plugin, a Sketch plugin, a Pixelmator plugin. Make it easier for us to export images as .BPG. If the images are truly smaller without affecting the experience of users on mostly modern web browsers, then we're more likely to pick this up.


No, you don't. You export it as PNG or whatever lossless format you can, and your job here is pretty much done. Then the technical guys you are working with convert it, or write a script to convert it automatically on upload/download on their website, or do whatever they need to do to get a lossy image for whatever purpose.

Still, to become truly widespread it needs support in all kinds of software out there; "all you need is a JS plugin" is unreasonable optimism in my opinion. And anyway, decoding in JS isn't fast enough to be the one and only solution forever; you need browser and other software support sooner or later, and for that you need one standardized format, not "whatever your plugin supports" as the thread-starter suggests.


For the Photoshop file-format plugin, someone should contract out to Toby Thain at Telegraphics who has written a bunch of file format plugins. Or optionally use his GPL'd source code as a starting point if you want to DIY:

http://www.telegraphics.com.au/sw/


This can all be handled by Gulp or Grunt and designers won't have to worry about it if someone on the team can code.
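For example, a minimal Node build step could look something like this (a sketch; a Gulp or Grunt task would wrap the same idea, and bpgenc's exact flags are an assumption on my part):

    // Convert every PNG in img/ to BPG by shelling out to bpgenc.
    var fs = require('fs');
    var path = require('path');
    var execFileSync = require('child_process').execFileSync;

    fs.readdirSync('img').forEach(function (file) {
      if (path.extname(file) !== '.png') return;
      var src = path.join('img', file);
      var out = src.replace(/\.png$/, '.bpg');
      execFileSync('bpgenc', ['-o', out, src]);  // assumes bpgenc is on the PATH
    });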


What if I use this: http://getbootstrap.com/examples/starter-template/ and add an img tag with a src pointing to a .bpg? Will it render when I refresh the browser? I see that the BPG images are decoded in the browser with a small Javascript decoder. So a JS file is referenced in the HTML, and everything will run OK when I refresh? That's the promise?


Yes.
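Roughly something like this (a sketch; "bpgdec.js" stands in for whichever decoder script the libbpg release actually ships, served next to the page):

    <!DOCTYPE html>
    <html>
      <body>
        <img src="photo.bpg" width="512" height="512" alt="demo">
        <script src="bpgdec.js"></script>
      </body>
    </html>

On load, the script swaps the img for a canvas and paints the decoded pixels into it.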


The server could encode it. So the designer can still upload a known format.

The only problem I see for now is SEO. Of course the title and alt attributes can give some context, but the images won't get indexed.


Isn't there hardware support for JPEG decoding on most mobile devices? If so, I imagine it couldn't compete performance and battery life wise. I remember Mozilla did a software video decoder once in JavaScript.

Still, it is quite amazing.


Some devices do have hardware JPEG decoding, however all mobile browsers do software JPEG decoding, often with the SIMD-accelerated libjpeg-turbo.

This is the software video decoder, by the way: https://github.com/brion/ogv.js


That said, the JPEG decoding is still done with native code as opposed to interpreted JS.


How come all mobile browsers use software JPEG decoding? Is it because of the startup latency of the hardware decoder?


My guess is that the average mobile CPU speed makes decoding a minor time factor within the process of getting an image from the server to the client's screen.

If this is the case, then putting in a hardware decoder would be a needless expense.


I used to own the jpeg decoder at Mozilla, and this was certainly the case last time I looked at it. Even without SIMD, JPEG decoding is pretty cheap compared to most of the things the browser does. I certainly wasn't clamoring for someone to burn JPEG into silicon; I had other things keeping me up at night.


Strange: a JPG is, when compressed, an order of magnitude smaller than uncompressed, so if the graphics card would accept the compressed image, the memory bandwidth used could decrease significantly. I also believe that the graphics card could achieve more parallelisation and therefore handle much bigger JPG images, which we produce all the time with ever bigger cameras.

I believe that some browsers actually keep all the images uncompressed in RAM; that could actually be the reason the potential for optimization is less obvious.


> I also believe that the graphic card can achieve more parallelisation and therefore handle much bigger JPG images

Very large images are fairly rare on the web. In particular phones don't have giant displays, so it's usually a waste of bandwidth to send a huge picture to a phone as part of a webpage. And, speaking only for mobile Firefox at the time I worked on it, we had lots of bigger problems with large images than decoding speed. For example, Firefox would keep the whole decoded image in memory while it was onscreen, even if it was only displaying only part of the image or only a downsampled version of the image. If it's a giant image, the decoded image could use up a large fraction of the device's memory.
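For scale (my arithmetic, just to illustrate the point): an 8-megapixel photo (3264x2448) decoded to RGBA is 3264 × 2448 × 4 bytes ≈ 32 MB, which is a big chunk of the RAM on a typical phone.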

I'm also not aware of any major phones that have GPGPUs -- I'm not an expert in this stuff, but I think you'd need that to get any serious acceleration of JPEG decoding.


Hardware is real estate in an ever-compacting mobile landscape.


When transistor size goes down, accelerators which are idle most of the time become cheaper. Your power budget doesn't allow you to run all the chip's parts at the same time anyway.

https://en.wikipedia.org/wiki/Dark_silicon


Security. Malicious JPEG encodings could break hardware codec implementations.


Maybe it has something to do with the browsers displaying the images progressively as they are loaded. Maybe the hardware decoders don't support that? That's a guess.


This will definitely need hardware-supported decoding. On my phone, the decode time far exceeds the time it would take to serve up a bigger JPG file.

The demos reminded me of what browsing on dialup was like. In fact dialup was better, because at least the decoding/loading was progressive/incremental, which doesn't seem to be the case here. I assume it's just the way the decoder is implemented?


It just needs to be implemented natively. On the demo page the decoding is done in javascript.


I decided to give it a try on a low-spec Android tablet, running latest version of Firefox. Got coloured horizontal stripes instead of a proper picture for every BPG example – and not just once; I was unable to view the images on the tablet.

Not quite production-ready, I think.


Similar issue with Firefox/Android, the Lena BPG images are completely scrambled to hell.

Safari/iOS and Chrome/Android worked with no issue. Desktop Firefox also worked.


A native implementation like the JPEG decoder would be fine. The bundled JS decoder implementation is elegant but not efficient on mobile platforms (stalling here for seconds on iPad 3 / iOS 8 / Chrome while the JPEG versions are displayed instantaneously).


Also, unfortunately it seems to crash Mobile Safari something fierce when you start to zoom. :-/


Problem is that you can't link to BPG images yet. Specifically, right click -> copy link location.

If only the right-click menu could be customized by JS. Adding only a limited number of entries to the menu seems like it'd be okay.


With additional effort you can, by including the proper JS. However, many sites would find this a benefit rather than a problem, since hotlinking takes away page-views.


You wouldn't really have to change the right-click menu. You could put an invisible HTML div on top of the images and give it the correct link, instead of the raw decompressed image data. So a user would think they are right-clicking the image, when really they are right-clicking our overlay.
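A rough sketch of that overlay idea (element structure and styling are made up for illustration; here the overlay is a transparent link so "copy link location" points at the .bpg):

    // After the decoder has swapped an <img> for a <canvas>, wrap the
    // canvas and float a transparent link over it pointing at the .bpg URL.
    function addOverlayLink(canvas, bpgUrl) {
      var wrapper = document.createElement('div');
      wrapper.style.position = 'relative';
      wrapper.style.display = 'inline-block';
      canvas.parentNode.replaceChild(wrapper, canvas);
      wrapper.appendChild(canvas);

      var link = document.createElement('a');
      link.href = bpgUrl;
      link.style.position = 'absolute';
      link.style.top = '0';
      link.style.left = '0';
      link.style.width = '100%';
      link.style.height = '100%';
      wrapper.appendChild(link);
    }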


Are you referring to this?

Google Animated doodle with JPEG and opacity

http://zoompf.com/blog/2012/04/how-do-google-animated-doodle...


I was ready to pass by this post with a yawn until I saw where it was coming from: Fabrice Bellard. He's no doubt an absolute freakin' genius. And if anybody knows about image conversion, it's him.

Even the things he does just for fun are impressive. Have you ever booted up Linux inside your browser? http://bellard.org/jslinux/


>I was ready to pass by this post with a yawn until I saw where it was coming from

Would this have been less interesting or valuable if it hadn't been made by a hacker celebrity?


New file format proposals generally aren't news, unless they're already implemented in some popular product. Finding out the latest amazing thing that Fabrice Bellard has been working on, that's news. And that it's already implemented in your browser via Javascript? Priceless.


Sounds like the author lends credibility to the proposal through past accomplishments, suggesting that the project is well planned and executed and thus worth checking out.


This guy's hacks become the mainstream implementations. Ever used QEMU or FFmpeg?


He should be working on Daala next (for both video and image formats). Let's make the next media formats for the web truly open source.

> Based on a subset of the HEVC open video compression standard.

So BPG is patent-encumbered, right?


At the bottom of the linked article is the answer to your question, with links to more information.

tl;dr (obviously tl;dr): This format is very, very easy to implement despite the patent risk, simply by assuming that any device new enough to be considering including it will have already licensed the relevant patents.


The patents would only cover the implementation that comes with the hardware, not any other software implementation running on it. This means you'd have to use the hardware decoder APIs to decode images, which is less than ideal when you want fast, low latency decoding.


But suing the party doing the infringing would be much harder. They can't just go after Google/Apple/Microsoft because all they're including is a javascript interpreter. They'd have to sue thousands and thousands of sites, most of them for amounts no larger than 1000 dollars.


Not just image compression: he also did the Tiny C Compiler a while ago, and he had some other nice tech demos...


don't forget QEMU.


or ffmpeg


or his formula (Bellard's Formula) used for his pi calculation record.


and software radio stuff (sadly closed source and proprietary)


Don't forget 'How to broadcast Digital TV using only a VGA monitor'. http://bellard.org/dvbt/


A man has to eat.


No one remembers LZEXE?


I certainly do.


Thanks for pointing that out, I didn't notice. Fabrice is as great a hacker as it gets.

Having alpha channel support, 14 bits and better compression would be lovely.


100% agree with you, and in fact, as a proof, I didn't even know about Javascript Linux and I'm going to check it now :)


jslinux is cool. I tried it, running Emacs in it - https://twitter.com/jouborg/status/541252302309715968


EDIT: I didn't expect this comment to be so popular and feel like I've hijacked the thread a little - sorry. Feel free to continue at https://news.ycombinator.com/item?id=8706850

I would much rather someone revived the Fractal Image format, which is now out of patent. It's very expensive to encode, but that's nowhere near as big a problem as it used to be. It's very fast to decode, very small, and the files are resolution independent: http://en.wikipedia.org/wiki/Fractal_compression

I was blown away when I encountered it at PC Magazine in the 90s and it seems like it would be very responsive to the needs of today's web.


I'd like to see a modern implementation and some unbiased testing done before I got too excited. From what I could tell, the tech got more hype than was justified by its ability to perform, and I suspect the fact that it still isn't in much use may have more to do with the difficulty of being better at anything in particular than an existing solution, rather than a lack of people interested in it.

(And note my phrasing of "got more hype than was justified by its ability to perform"... it isn't that it has no ability, just that it got more hype than it should have. Same thing happened to "wavelets". Yes, I know about JPEG2000... again, the point isn't that they are "bad" or "can't work" but that they got more hype than their real performance justified.)


It's not the most competitive in compression or speed terms as far as I know, but the resolution independence is a Big Deal to me. I first saw this technology about 20 years ago and I am still impressed by it because nothing else has delivered that for photographic images. It's like SVG for photography.

At the time, encoding required a) an expensive license and (IIRC) b) an accelerator board if you intended to do anything other than leave your CPU churning overnight - obvious disadvantages relative to formats like jpeg.


this is 100% BS, it's an illusion.

You should read the Talk page of the Wikipedia entry, where it's obvious that some misinformed/deceptive folks took control of the page. The comp.compression FAQ is a much better resource, as is the paper by Brendt Wohlberg and Gerhard de Jager, "A review of the fractal image coding literature", IEEE Transactions on Image Processing, vol. 8, no. 12, pp. 1716--1729, Dec 1999, in particular: "purely fractal-based schemes are not competitive with the current state of the art".


I'm not talking about the Wikipedia page, which does not in any case make mild claims about the technology. I'm recounting my personal experience of using it and being impressed by the results.



Great find. I worked at PC Mag UK in London at that time. I don't remember if we did a piece on it, but I do remember the oohs and aahs in the office as we zoomed in on pictures of cute dolphins and the like.


Wow. "It took almost 4 minutes to convert a 1.3MB .TIF file to a 228K .JPG file." Those were the days.


The following paper explains many of the reasons that engineers lost interest in fractal image compression: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=660992...

Unfortunately I can't find a free PDF at the moment.



I tried something simpler than but similar to fractal compression earlier this year. Got interesting results:

http://pointersgonewild.com/2014/01/02/tile-based-image-comp...


http://www.verrando.com/pulcini/gp-uw1.html

They have a 6-minute figure on their page. I wonder what modern hardware could do with it? Anybody care to give it a whirl?


Takes about 20 seconds for a 1280x893 bitmap. But the decoded image always looks blurry no matter what settings I use. The original bitmap is 3,349 kB and the IFS file ranged from 17 kB (looking awful and blocky) to 569 kB (looking about half the resolution of the original but not too bad).


I wonder how much that figure would drop if one could optimize the process for modern CPUs...


Even without optimizations. That page is from 1997. They probably ran this on a 200MHz machine at best. Modern CPUs are 30-50x faster just in terms of instructions per second. Then once you factor in multiple cores, we have desktop machines that are easily 100x faster.

In terms of optimizations we have SIMD and GPGPUs to make it even faster. Not unrealistic to think we could bring that 6 minute figure down to one second... And then, of course, there might be algorithmic tweaks that help us even more. Get 100 people looking at that code, and for sure they'd find ways to speed it up.


Modern images are also much much higher res compared to "320x200 true-color image".

Without knowing how this scales with image size there is really no way to speculate about performance on modern hardware.


In the same magazine there is an ad for a $3795 Gateway 2000 workstation. 486DX2 66MHz, 8MB RAM, 500MB SCSI HDD, SVGA graphics with 1MB RAM, 14" CRT.

I remember looking at systems like that and drooling. Now the latest CPUs have 8MB L3 cache.


There are 8-core i7s with 20MB of L3 cache now.


This looks like a fun project!


Fractal compression is a scam. The "resolution independent" part especially.



That definitely looks like a scam ...

Figure 2 absolutely does not contain the level of detail reproduced in the--supposedly--fractal compressed versions in figure 3 and 4.


How the hell did they reverse entropy on that photo?


excuse my ignorance, but encoding would happen "once" when it was created by the author, right?


That's correct. I imagine some websites and applications create images dynamically on-the-fly but most of the time they're done by the author.


After making a few needed tweaks to get libbpg to compile on Mac OS X, I used the compiled bpgenc binary to convert a test PNG to BPG format. I also converted the PNG to a JPEG for comparison purposes. You can see the results here:

http://justinmayer.com/bpg-test/

Size of PNG before conversion: 186K

Size after conversion to JPEG: 52K

Size after conversion to BPG: 9K

I took the liberty of submitting a Homebrew formula, so hopefully this will soon be a quick "brew install libbpg" away. (^_^)

https://github.com/Homebrew/homebrew/pull/34722


I think your page has a bug - the "original png" seems to be encoded as jpg! (The usual jpg artifacts are very visible around the edges.) Thanks for posting the extra example; the bpg looks very good indeed.


I believe that's because PageSpeed is automatically converting the PNG to JPEG on-the-fly before serving the image to the browser, ostensibly for speed reasons. How ironic. ;^)


Would love to have not just the source file but also the output files posted, if you can be bothered (I'm fine with PNG screenshots if you can't be arsed with the actual JS setup).


Sure, no problem. I updated my comment above to include a link to all the relevant files. As you'll see, despite the significant differences in file size, there is very little discernible difference in image quality.


Thanks a lot - my God this format has potential on some files!


Before I finished reading your reply, I was going to say "but someone just submitted a formula for libbpg!" Thank you, sir. :)


Most welcome! Glad to help out where I can.


"After making a few needed tweaks to get libbpg to compile on Mac OS X". What have you done?

I get "bpgdec.c:35:17: warning: png.h: No such file or directory" and lots of other errors.


You can install for Mac OS X by opening the Makefile and uncommenting the line: CONFIG_APPLE=y


Looking at the Lena pictures demo, the extremely low file size comparison at the top shows just how good .bpg is in that use case. That could make for some much lighter websites when used for less important items like image thumbnails on shopping sites, for example.

When the file size gets larger at the end, it looks like there might be a little loss of detail. Ideally I'd like to compare them by switching back and forth using the original image and the .bpg as Photoshop layers...


PrintScreen should let you do it, I suppose?


I notice the container has no ICC profile support. Trivial to add as an extension tag, but it should definitely be in the first spec IMO. And if I read this correctly, extension tags are hardcoded as numbers, rather than using a tag name. I don't think that's a good idea.


The BPG spec says, "Arbitrary meta data (such as EXIF) are supported." This isn't good enough. Modern formats need to specify how ICC profile format information is to be embedded, under what circumstances, and smaller space alternatives (similar to DCF and EXIF color space tags). Otherwise this is like JFIF, where it's the ICC spec and not the JFIF spec that describes how ICC profiles are to be embedded. To replace JPEG and PNG, the BPG spec needs to define such things itself rather than deferring the work to others or making it optional. Image color space data isn't optional if high quality is to be taken seriously, and for that matter the encoding of copyright and image provenance information needs to be defined in the spec also.


Well you can think of it as a tag name limited to 4 ascii characters if you like.


Yes, that makes it even stranger. Why use an unsigned 4-byte integer when tags are mapped to a number internally? Unless they anticipate mapping more than 2^8-1 or 2^16-1 different tags.

Maybe I'm reading it wrong.

   extension_tag                                           ue7(32)
   ...
  'extension_tag' is the extension tag. The following values are defined:
       1: EXIF data.
PS. I think I understand now. It's just meant as a shortcut. Not sure why this is done when EXIF fits just as well.


Seems like EXIF data was originally to be included as part of the base spec (see dangling reference to exif_present_flag).


Since the parent comment was posted, Fabrice added an ICC profile meta data tag via the libbpg 0.9.1 release.


Is there a situation where 14-bit sRGB isn't enough? There are a lot of weird color matching issues on the web caused by inconsistent application of ICC profiles to PNG images by different browsers / platforms.


Storing HDR images, where the brightest parts are greater than 1.0. That way, when you later edit the image and adjust the levels, new detail comes into view. OpenEXR is the best we have for this now. It uses half-precision 16-bit floating point color components.


BPG supports CMYK, and to re-render CMYK correctly (on a display, or an inkjet printer) means ICC support is mandatory. Otherwise, there's no point supporting CMYK at all.

sRGB is designed for an 8 bit per channel encoding, so while it makes sense to support 10bpc to mitigate quantization effects of higher compression, it makes little sense to support more than 10bpc unless wider gamut spaces are supported. If the intent is for BPG to replace JPEG in-camera renderings, rather than converting existing JPEG and PNG to BPG, then it needs to support spaces other than sRGB. This doesn't require explicitly embedding ICC profiles but the spec needs to, well, specify at least one way of preserving this metadata. Otherwise there's no guidance for encoding or decoding the information, and then it's lost. And at that point it's a huge regression from what we have now, and thus no point in implementing it.


An image that is in Adobe RGB, for example.


No doubt Fabrice is very smart. I read about his 4G base station the other day. I'd love to be a fly on the wall while he codes and thinks out these projects.

His accomplishments are impressive: QEMU, FFMPEG, TCC, JSLinux, the list goes on


Is there anything this guy can't do? Seriously.

I have been wishing there was a JPEG equivalent with an alpha channel for like forever. That allows better compositing to arbitrary background images or patterns. Now the question is how long before browsers might support it natively.


WebP has lossless and lossy support, and an alpha channel. It's also much smaller than JPG and much better looking than JPG for small images (like this one).

It's also supported natively on all newish (4.0+) Android devices and Chrome.


Unfortunately given the relationship between Google and Apple and also between Web* and H.26* there is little chance of it going anywhere since without iOS you have nothing.


> I have been wishing there was a JPEG equivalent with an alpha channel for like forever.

JPEG 2000 works in Safari. JPEG XR in IE. A gracefully-degrading JPEG XT is in the works: http://www.jpeg.org/jpegxt/index.html

Lossy PNG with alpha works in all browsers: http://pngquant.org (not equivalent in compression ratios, but still 4 times better than normal PNG).


There was an interesting article about Fabrice: http://blog.smartbear.com/careers/fabrice-bellard-portrait-o...

Does anyone know if he has a day job or does he just lock himself away and work on these interesting projects?



Its purpose is to replace the JPEG image format when quality or file size is an issue.

Some of the HEVC algorithms may be protected by patents in some countries

There have been some patent disputes over JPEG, but I don't think replacing it with another possibly patented format is a good idea, even if it's technically superior in some ways.


"Most devices already include or will include hardware HEVC support, so we suggest to use it if patents are an issue."

I don't know if that suggestion makes much sense, but you missed it in your quote.


"Most" is a huge overestimation. I don't think Haswell supports HEVC encoding. Qualcomm will only support it next year with the high-end Snapdragon 810, and Apple only recently supported it in the latest iOS devices.

But the vast majority of people right now don't have support for HEVC encoding and won't have for 5+ years.


I can't imagine ever wanting to spin up a hardware HEVC decoder context every time I want to render an image. That would enormously slow down display of a web page with this format.

Edit: decoder, not encoder


That would be decoding, not encoding.


Unless they changed their ways since h.264, it's probably a non-commercial license, even for tools or devices that suggest otherwise - like Final Cut Pro (http://bemasc.net/wordpress/2010/02/02/no-you-cant-do-that-w...)


You can actually use HEVC for your content for free, even commercially.

Royalties only get involved when distributing encoders & decoders.

See page 7 here http://www.mpegla.com/main/programs/HEVC/Documents/HEVCweb.p...


Yes, the content - that means unlike h.264 licensing there's no separate license for streaming (page 7 of http://www.mpegla.com/main/programs/avc/Documents/avcweb.pdf)

But with h.264, _codec_ licensing is for non-commercial (and internal business) use only (page 8).

There's no clear statement that HEVC licensing is for any and all purposes, and their use of the term "End user" in MPEG-LA's (non-binding) public material across patent pools is quite inconsistent, so it may mean only non-commercial users.

Given that it's MPEG-LA that we're talking about here, and their history, I assume there will be nasty surprises unless there's hard evidence that there aren't.


Yep, and that bit of JS you are distributing _is_ a decoder. I've been bitten by this before, and it resulted in buying the MPEG-LA a few Ferraris a month once they noticed we were popular, while we worked to create an alternate solution.

Good times.


Then surely you have an issue, don't you, if you bundle the decoder in JS?


Hmm, that does make me wonder, actually - as far as I know, you can freely distribute source code for encoders & decoders without having to pay any royalties. Binaries / compiled versions are where the royalties come in. And technically, JS code is source code that gets compiled by the browser... so I'm not actually sure where it falls in all this.


I don't think source code is exempt from patents per se. I think the rule is more like some point in the supply chain has to have a license. It's common to distribute a chip or software library with no license with the assumption that the final product will get a license. But if the source code is the final product it needs a license. IANAL.


What kinds of devices have hardware HEVC acceleration? Phones, PC graphics cards, Blu-ray players?


One of Samsung's cameras, the iPhone 6 (which uses it for FaceTime), various 4K TVs.


None of them today, but all of them in a few years.


nVidia GPUs with GM2xx chips (currently GeForce 900 Series)


Why not use Daala to start from? The overlapping transform probably helps, especially for still images, and the patent situation is probably at least better.



I'm not sure that Daala has finalized its I-frame format yet.


There's no reason to wait until the I-frame format is finalized - a still image format would probably be a separate codec anyway, so compatibility isn't important.


Especially if you're shipping the decoder in JS alongside the encoded file.


This definitely looks like it compares favourably against both JPEG and PNG. The test doesn't directly compare against JPEG-2000, JPEG XR or WebP, but the results are more convincing than any examples I've seen for any other formats, and the Mozilla study showed HEVC's format did best on quality metrics.

I hope browser vendors take note. The patent issues are concerning, but if that can be worked around and a new spec designed, then we might just actually have a new image format for the web which really is better than what we've already got.


I'm not qualified to comment on BPG, but this is the first I've heard of Mr. Bellard. I enjoy reading works from minds of this caliber. He seems to be a model citizen for the programming community. Are there others I should know about like Fabrice?


So can anyone talk about the patent issues? I hear there are multiple holders of HEVC patents and they are willing to use them. So if you use this library, wouldn't you be liable?

I'd really like to know that. Because I'd really like to use this.


I've tried it with a picture of mine. The encoding process was painfully slow, but I guess that does not concern the end-user much. The file size went from 1.2M as JPEG to 164K as BPG, and the decoding was fairly fast. After turning the image back into PNG the quality seemed OK, but that's tough to assess objectively.

The coolest thing is this javascript program that can decode and display the pictures on-the-fly.

Definitely an image format that could save storage space and bandwidth, IMHO.


Wow, I didn't even know there were open-source HEVC encoders. I thought there were patent issues. Now I'm off to re-encode my bluray collection!


The patents are still there, but likely no one will give you a problem unless you try to distribute it widely without a license.


I wouldn't be in such a hurry - right now x265 (as far as I know the best HEVC encoder) still loses to x264 in terms of practical use (you only really get better results by using bitrates so low that the end product will look terrible anyway, and x265 loses hard to x264 quality-wise when encoding at similar bitrate and speed). Considering that x264 is backed by about a decade's worth of development, this isn't that surprising.


There are still patent issues, and you still need to obtain a license to use the software. You will never see this image format integrated into any other open source projects, like Firefox, for this reason.


Like how you'll never see H.264, DRM, or WebKit in Firefox?


WebKit in Firefox?


He is referring to <https://github.com/mozilla/firefox-ios>.

But you will note that none of those things are "in" Firefox. H.264 support in the <video> tag is implemented via platform-specific APIs (which, for example still don't work on OS X). DRM requires a CDM binary blob, which is again platform specific. You do not have the open-source freedom to port these features to some other platform that does not support them.


For use in iOS, I imagine.


webkit in firefox?


The x265 [1] Project is great. That said, you might want to wait a year for them to improve the encode before messing with your entire collection.

[1] http://x265.org


https://bitbucket.org/multicoreware/x265/wiki/TODO

Yeah, who am I kidding, I'll just buy more hard drives to keep the originals :)


A few questions and thoughts off the top of my head.

1. Why a subset of the HEVC Still Picture Profile? Why not just use the HEVC Still Picture Profile instead?

2. Since JS sources are readable and interpreted by a VM (free speech), patent issues should not be a problem?

3. I am assuming the quality of BPG still has lots and lots of improvement to be made, since H.265 encoders haven't had the time to be tuned (compared to x264)?


I guess the JS is just for proof of concept until support is shipped with browsers etc, but it isn't rendering properly in the stock Android browser (4.2.2).

It gets to Lena's head in the first image, then becomes brightly multi-coloured, though it looks like it gets the differences between colours right... as if there were maybe an int overflow in the browser's JS implementation?


holy crap, check this out: http://img1.buyersbestfriend.com/mkg/snackspage/images/bpg.h...

186,967 ==> 29,872 and indistinguishable, 6.25:1

(and pls tell me if it breaks on your browser - I want to push this live!!!)

adam


That's far from indistinguishable to me. It's actually quite obvious that the BPG is blurring detail at that compression ratio.

Edit: it's particularly noticeable in the wood grain patterns. Also, there's a soft grey dot to the left of the top of the lamp, 100px or so, that's entirely missing in the BPG. There are loads more examples...

Edit again: the color saturation of the rug is pretty washed-out as well.


oops, I wasn't clear... it's meant as a giant background image, and the quality competes with really nasty jaggies that start appearing below ~100KB in size. Nobody will notice wood grain, a few missing details, etc.


> a giant background image

why would you do this


Breaking in Safari on my iPad 3 running iOS 7: https://dl.dropboxusercontent.com/u/18855215/bpg.PNG


The bpg image on your link has much lower contrast. I'm not sure if that can be fixed, but the images are substantially different. I wonder if jacking up the contrast on the bpg might make it look much worse.


Worked for me on Chrome, Safari and Firefox

Your issue if anything will be older browsers. Is there a way that it can detect browser side if something has failed?


I would guess that decoding speed can become an issue for websites where many images are already in the cache but are re-displayed rather often...

I write this because I am building such an application and for now it has many PNGs ... and yes, using a format like BPG would be fine, because I use the PNGs only for transparency ... but when redisplay is done via JavaScript, I doubt that I could have the same speed. Loading is not so much a limiting factor, since after some time all relevant PNGs are already in the browser cache.

Can anybody say something to this topic?

Of course, it would be great to have this integrated into the major browsers soon ...


Don't we already have WebP?


Yes, but WebP hasn't taken off as much as some would like as a de facto standard.

BPG looks good upon cursory inspection. It seems to be more efficient than WebP and supports 42-bit color. It also has .png's features of transparency and lossless compression although I didn't see anything mentioned about animation to replace .gif.

Bonus: since it's based on h.265 hardware support will come naturally and should be just a software update for devices that already have HEVC capability.


Not to mention it has a JavaScript port, which means it can work in browsers today.


There is also a Javascript port of WebP decoding: http://webpjs.appspot.com/

As a bonus, you won't have to pay for a HEVC decoder license.


Doesn't a JavaScript implementation offset most of the performance benefits? Today we have browsers that are smart about when to cache the decoded image and when not, etc; does that have to be reimplemented in javascript?


I have tried a Dart WebP decoder. It was so slow: decoding a small image (400*300) takes 300 ms.


Not having animation is a plus to me.


h.265 is the corresponding format for animation.


I feel like if you're going to base an image format off a compression standard made for video, that animation is probably a short way off.


I think webp supports transparency and lossless.


From the article: "Mozilla did a study of various lossy compressed image formats. HEVC (hence BPG) was a clear winner by a wide margin. BPG files are actually a little smaller than raw HEVC files because the BPG header is smaller than the corresponding HEVC header." This study includes WebP.


But he conveniently excluded webp from his comparison shots :)


WebP is in some ways superior, in particular because it has a lossless option like PNG. [Edit: NOPE, I was mistaken.] I think it's kind of a shame that it hasn't caught on all that well. Maybe it's a fantasy, but I'd sure like to consolidate all web images into a single widely-supported and open source format. Ah, to dream...


I think it's nice to have separate formats for lossy and lossless compression. Makes it easy for users to tell what's going on.


Sixth bullet: Lossless compression is supported.


Missed it, my bad!


I didn't check the code though. I wonder if it's any good. JPEG also has a lossless mode, but no-one uses or even implements it.


From what the website says, BPG does support lossless compression.


This is the guy behind Tiny C Compiler, jslinux.org (running Linux in JavaScript in the browser) and Qemu. (And several other interesting projects). This alone makes me have great faith in BPG.


Impressive! Right now it requires a 75 kB .js decoder.

Assuming a fast C++ decoder instead (possibly GPU-accelerated, if the decoding algorithm is well suited for it) and not JS, what would the rendering times be?

PNGs are 4x bigger in his experiments, but interlaced PNG is more pleasant for users since it can be rendered progressively; can BPG benefit from such a thing?

edit: Interlacing is also used in JPEG, isn't it ?

It's a tradeoff: as a user, it's obviously a win when we have low bandwidth, and as a server it's obviously a win.


Progressive rendering of HEVC is not possible in the same way as it is done for JPEG or PNG. It is also not very suited to GPU rendering.

A C++ decoder with assembly optimizations could easily run four times as fast as the Javascript version. Unfortunately distributing it would not be possible due to patents.


Why would a C++ version have to worry about patents but the JS one does not?


They both have to worry, just a different person. For JS, it is the page author / publisher. For C++, it would be the browser vendor.


Four times? You can expect the C/C++ version (with some SIMD intrinsics) to run 10-30x faster than the JIT-optimized JS version.


There is also the overhead of using a canvas element as a render target instead of a normal image surface, and using the canvas API instead of just writing color values straight to a buffer.


> edit: Interlacing is also used in JPEG, isn't it ?

Yes, and contrary to PNG it actually makes files smaller.


I think we crashed the site. Coral Cache has the front page, at least: http://bellard.org.nyud.net/bpg/



bellard.org seems to be back up now..


Unfortunately, a lot of what I do with images is constrained by what can be losslessly embedded in a PDF. Even if no influential organisation is opposed to BPG it will take at least 3 years, I would guess, for BPG to become part of the PDF standard, and then another year or so for it to become reasonable to expect people to have PDF readers that can handle it. However, I wouldn't be surprised to see BPG being widely used in 2020.


How does it compare to WebP? Also, how do you measure lossy image quality? What are the metrics? I hope it's not just by looking at the result and judging.


Looking at the result and judging is actually the best and most important metric. There are numerical metrics, like SSIM, but they are always worse than human comparison trials. You're optimizing for what looks best to a human in the end, after all.


It's very hard to create a fair test.

The problem with human judgement is that it's very imprecise. You're unlikely to notice the difference between JPEG at Q=90 and Q=95, but you can't say it doesn't matter, because that can cause a 40% difference in file size. OTOH objective metrics can easily spot that.

"Looks the same to me" leaves a lot of room for error and you could be unfairly telling one format to save much more detail than the other. And almost by definition these are the details you're least likely to be able to see.

There's also a pitfall of choosing image that looks subjectively "nicer" rather than closer to the original. Codecs that are better (e.g. faithfully preserve noise) may be judged as worse than another codec that's an accidental Instagram filter.


Does anyone know what percentage of traffic is JPG/PNG? My curiosity wants to try to put a dollar-value on the potential global bandwidth savings.



Pronounced 'bee-peg'?


Kinda curious about this as well. I wonder if the alternate would be "bee-pee-gee" which feels way too long to say :)


I naturally started saying in my head "bee-pee-gee", but since the P stands for portable, maybe even "bee-porg" instead of "bee-peg"? :)


I've always pronounced it "bee-pedge".


How many "better JPEGs" have been created now, without significantly displacing JPEG's market share?


JPEG-2000, for example, is very hard to decode in a CPU. Many implementations take a second to decode an image.

There are two really hopeful things about this new project: (1) by leveraging HEVC, we get cheap (energy efficient) and fast decode on future mobile devices and (2) Since he is demonstrating pretty quick decode in JavaScript now, it clearly isn't a CPU pig like JPEG-2000.


JPEG 2000, JPEG XR, and WebP are the best-known (e.g., not toy research projects).


Indeed. Without out-of-the-box support in the major image editors like Photoshop and all the popular web browsers, I don't think we'll see JPEG overtaken any time soon.


Honestly, just last night I was looking at and editing photos of a recent trip, and I uploaded them to tumblr to share on a small photo blog I just recently started. After spending some time trying to get the colors right, I realized that tumblr's image compression is god-awfully horrendous. I felt so bad looking at my source JPEGs next to the tumblr post that I created a new flickr account right then.

I don't necessarily think JPEG is bad, or that it's even Tumblr's fault that I first thought to share photos there, but if any service/software started using BPG I think I would excitedly try it out.

[e.g. http://www.huddug.com/ vs https://www.flickr.com/photos/127341162@N03/15762469070/]


If Chrome and Firefox implement this, then mod_pagespeed, for instance, can do the conversion on the fly; services like CloudFlare can also do the conversion automatically.


I don't think this is a quest for world or market domination, but a great tool for people who want a better format. Works today.


The intra-frame format of every video codec.


This worked awesome on my MacBook and my PC, but I am getting some massive rendering errors in Safari on my iPad 3: https://dl.dropboxusercontent.com/u/18855215/bpg.PNG


I really dig it, but I'm not yet familiar with BPG. It seems that decoding it would take more processing power, and potentially be slower than JPG. Is this the case? Under the "performance" section, decoding speed was not mentioned.


Yes, it's slower and takes more RAM than JPG. I can see a delay of about a second in my browser. (Edit: I assume my browser is caching the data, but if not, some of that time could be re-downloading the data.)


If the decoder wasn't in JS, that might not be the case. It'd undoubtedly be a more efficient native implementation (possibly with hardware support) were it integrated.


[Edit] I had misunderstood the quoted Mozilla performance claims[0] to be related to processing speed, whereas they refer to performance being a measurement of quality. Thanks sp332 for the correction.

[0] http://people.mozilla.org/~josh/lossy_compressed_image_study...


Compression performance does not mean speed. It just means how well it compressed the images. None of the tests on that page even mention speed.


Looks like the JS decoder is compiled using emscripten, very nice.


Very off topic, but is lena.jpg still acceptable? I mean, the tech industry is getting better at inclusion, but come on, are we still using that crop from Playboy?


I'd say it has enough historical significance to still be acceptable. Consider this: every image compression format out there has an example of its work done on the Lena image. The source code for various compression algorithms may have been lost, but the encoded Lena results for that compression algorithm may very well still exist. This means we can compare between compression algorithms of the past quite easily. We don't have to dig up old source code and find hardware that will run it to compare output - we can simply provide the same input that they had.

Regarding the image itself, it's not even a good test image. There are far better images out there that test the robustness of image compression, but those test images haven't been run through every image processing algorithm since 1973. Should we change default test images? Yes, but Lena should still be used as well. The historical data it provides is invaluable.


> I'd say it has enough historical significance to still be acceptable.

This sounds a lot like Redskins argument.

> The source code for various compression algorithms may have been lost, but the encoded Lena results for that compression algorithm may very well still exist.

We need to compare a new algorithm to an obsolete one where we don't even have the source code and this is beneficial... how?

By the way, the image was remastered in 2013, so none of this is relevant anymore: http://en.wikipedia.org/wiki/Lenna#Remastering


Considering that Lenna herself is fine with it, I'd say so. (Unless you want to deny a woman the right to permit the use of her own body for aesthetic, erotic, or academic purposes, which is the logical if ironic outcome of some schools of feminism.)

The problem with the Lenna imagery isn't political correctness, it's the fact that it's a crappy scan from a magazine that always had crappy photography to begin with.


The "fine" thing would be to let Lena use photographs of herself in her own work.

The problem is that women in science cannot read a paper on image processing without being reminded that it's a boy's club.


The problem is that women in science cannot read a paper on image processing without being reminded that it's a boy's club.

I find this attitude far more patronizing than any conceivable choice of test imagery in a graphics research project.

Suggest reading this before posting anything else about the delicate sensibilities of "women in science": http://www.nytimes.com/2014/12/07/magazine/my-great-great-au...

Do you think any of those women would have spared a half-second's thought about this issue? Somehow, I don't think so. I think they were too busy doing science.


I'm sure they do, because I can point to them speaking up about it.

http://en.wikipedia.org/wiki/Lenna#Controversy

I'm not saying all women are crippled by it, because they have to deal with such nonsense while walking down the street every day. But can't we as scientists strive to improve the status quo?


(Shrug) I can point to outraged people "speaking up" about everything from global warming to Jar-Jar Binks. It doesn't prove much, except that some people enjoy a good bout of outrage.

The scientists in the article I linked, on the other hand, aren't speaking up about anything, because they're all dead.


You're implying that because the women you picked didn't speak up and just did their work, that somehow that is the model all women should follow (because Science, apparently). Yet you provide no evidence that they were not affected by attitudes toward women. Just because they did not speak up does not imply there were no negative effects of contemporary attitudes toward women. On the other hand, I have evidence that researchers are affected by it.


Yet you provide no evidence that they were not affected by attitudes toward women.

The New York Times Magazine article is pretty long, it'll take you a few minutes to read it. I'll wait.

Short version: they suffered real discrimination, which you've diminished by comparing it to the use of Lenna.jpg in a graphics project.


Fine, I didn't read it. What is your point? That just because something is worse for one person that we should ignore problems affecting another? That someone else has suffered in the past is not a reason to give up improving the status quo today.


> Unless you want to deny a woman the right to permit

Not doing something you are permitted to do is somehow denying their right to permit? It's nice of her to permit the use; however, her permission doesn't force us to use it. By the same logic, it would be perfectly okay to use the goatse image if the guy permitted it, right?


By the same logic, it would be perfectly okay use goatse image if the guy permitted it, right?

If the ability to represent the goatse image were a critical part of the evaluation process for a compression algorithm, then yes. However, we evolved to recognize and respond to subtle features in human faces, not human colons. So, no, goatse would not be an appropriate reference image.


If her face is the only face that is critical to the image compression analysis, yes, you are completely right.

I have a hard time understanding the knee jerk reaction to the change and the willingness to maintain the status quo. We have nothing to lose if the image is swapped with one that is less ostracizing --which is something Mozilla did in the past. Why defend it?


Technically, the Lenna scan is indeed a bad reference image for numerous reasons, such as being blurry, heavily quantized, and composed of various shades of magenta and purple. Its value is more or less entirely nostalgic.

So it wouldn't be worth defending at all, if it weren't being attacked on grounds I strongly disagree with, by people who I believe shouldn't gain any more influence over our culture than they already have.


> So it wouldn't be worth defending at all, if it weren't being attacked on grounds I strongly disagree with, by people who I believe shouldn't gain any more influence over our culture than they already have.

I'm having trouble understanding what this means, concretely. Do you think it would be a bad thing if computer science became less of a boys club? Or do you not believe that is why people dislike the picture? Who is it exactly that has too much power over "our" culture -- women? Politically correct killjoys?


Who is it exactly that has too much power over "our" culture -- women? Politically correct killjoys?

On the Internet it's hard to be sure, but I haven't seen any negative comments about lenna.jpg that I can attribute to women qua women.

So I guess that leaves door #2, huh?


Thanks for clarifying. I can relate, since I would also defend anything if it was criticized by people I didn't like.

...no wait, I wouldn't.


Please don't put words in my mouth.


> So it wouldn't be worth defending at all, if it weren't being attacked on grounds I strongly disagree with, by people who I believe shouldn't gain any more influence over our culture than they already have.

Is it putting words in someone's mouth if said words are their own, I wonder?


He's not saying "I'm disagreeing with you because I dislike you," he's saying "I dislike you because I disagree with your ridiculous PC policemanship and the fact that a lot of people actually take it seriously."


> I'm disagreeing with you because I dislike you

> I dislike you because...

Are you even aware you made my exact point?


I disagree! You have the cause and effect mixed up. It's normal to dislike certain groups of people you strongly disagree with. For example, I imagine we both dislike gay bashers pretty strongly.

You, on the other hand, were suggesting that CamperBob2 was arguing with you purely because he didn't like you, which would be pretty silly if it were true.


I suggested the sole reason of him defending the status quo was that it was challenged by people he didn't like. I still believe it's true.


People are still up in arms about the last time Mozilla made a swap to be less ostracizing.


I briefly wondered the same thing. Tradition would probably be a poor argument for continuing to use it. But being able to directly compare the results with the lena.jpg results throughout its history might be valuable?


Just one thought: If some woman (or man, or whatever, really) used a similar crop from Playgirl in a paper ...

Yeah, sure, from a historical perspective the situation is not exactly symmetrical, and if it helps with normalizing things, maybe Lenna should be banished ... but then again, if the atmosphere is otherwise right, I doubt anyone would really be bothered by the use of mildly erotic pictures where there is consent from the person in the picture?


The best coders seem to have the simplest websites.


Has anyone installed the bpg encoder bpg-0.9.2-win32 on a Windows 7 machine, and if so, how did you do it?


Looks like his site is down, too much HN traffic.


I appreciate the historical tradition of using the photo of beautiful young Lena Söderberg as a test image, but it's time to move on. It's fun for us hetero males, but like it or not, this sends a message to young women that they aren't welcome in this field. I wish Fabrice Bellard would have left them out of the demo set.

Having said that, all those demo photos do look good. I was wondering how we were going to see a demo in the browser without built-in support, but leave it to the man who put Linux in the browser to write a decoder in javascript. This is an encouraging project.


> I appreciate the historical tradition of using the photo of beautiful young Lena Söderberg as a test image, but it's time to move on. It's fun for us hetero males, but like it or not, this sends a message to young women that they aren't welcome in this field.

I find your comment a bit strange. I am a heterosexual male and I find this picture utterly uninteresting from a sexual point of view. To me, the appeal of using this picture is sheer seventies nostalgia... I'm too young to have been alive in 1973 but I'm old enough to remember gawking in awe at the wall of the computer room adorned with output from the first desktop color printer I ever heard of or the first laser printer I ever saw. The example picture used was Lena. The picture reminds me of old computer rooms... I don't even know who the model is and I don't even find her pretty.

By all means use a better picture... But when a Hello World moment is required on a printer, this one is a very strong contender.


It's actually a very good point when you take into consideration the origin of the image. From [1]: “Lena Söderberg […] appeared as a Playmate in the November 1972 issue of Playboy magazine” and it is referring to this particular photo. An article in Cyrillic (Russian?) [2] shows what the uncropped version looks like. Putting things in context is important.

[1] https://en.wikipedia.org/wiki/Lena_S%C3%B6derberg

[2] http://dimka-jd.livejournal.com/3673381.html (NSFW)


No, it's not. Because people do not use the second image, nobody sees it until you linked it, and it can't "send a message to young women that they aren't welcome in this field".

All this is incredibly unimportant. You just made me waste 60 seconds of my life on a pointless thing and I want them back.


I think people that pretend to be offended by things like this, unwittingly infantilizing the people that they are trying to support, do far more to discourage minorities from tech than pictures of women's faces.


I'm curious if you're basing this belief on actual information from the people in question, or just your intuition about what they probably think.

Dianne O'Leary, distinguished university professor emerita in UMD's CS department, objects to it: https://www.cs.umd.edu/users/oleary/faculty/node8.html

Sunny Bains, senior teaching fellow at UCL, also objects (see page 6): http://www.cs.cmu.edu/~chuck/lennapg/pcs_mirror/may_june01.p...


I was just reading another article on here about fractal image compression and stumbled across this: http://pointersgonewild.com/2014/01/02/tile-based-image-comp...

written by this woman: https://twitter.com/Love2Code

Maybe you can't generalize the opinions of two people to 50% of the human race?


I agree that you can't generalize the opinions of two people (or even three). But you certainly can't generalize from zero.


You're assuming that ANTSANTS is generalizing from zero. He has linked to one example of an individual who is mature enough to realize that the importance of that common bitmap isn't its origin but the concepts and ideas in the field of image compression that set of pixels has been used to demonstrate, and how those ideas relate to machine learning. Pointers Gone Wild is a great blog, with great technical content, written by someone who is actually accomplishing something because they are fully focused on what actually matters instead of the accidental origin of those pixels.

ANTSANTS' anecdotal evidence serves as a counterpoint to your anecdotal evidence, but you know what they say, the plural of anecdotes is not data. Instead you may want to work from a statistically significant sample size when making your point. If not, don't be surprised when people discount any conclusions you have made based on anecdotes.


> You're assuming that ANTSANTS is generalizing from zero.

No, I'm not. I'm asking: "I'm curious if you're basing this belief on actual information from the people in question, or just your intuition about what they probably think." If he's got statistically stronger evidence than my anecdotes (which are more than two, but yes, still anecdotes), I'd love to hear it! If not, I'm curious what other evidence or argument he has for the claim he made.


Not everything is about gender, sexism, feminism, or chauvinism. There are very few things in the world that are not somehow associated with male vs. female in an objectively equal way. The Lena in question is just an image with a long tradition. Anyone can find and use a male image for his/her own image compression tests if s/he wants to. It's not a statement to use Lena instead.


Just because something is a tradition does not make it a good tradition, nor does it justify continuing the tradition.


True, but you still need to balance the relative merits of continuing versus discontinuing the tradition. There is a very real loss from discontinuing the tradition in question without first augmenting references for many of the algorithms that use Lena with other standard reference images. That's a real and immediate loss.

Try a different strategy: offer a better alternative that makes it so practitioners not only don't notice the loss of Lena in common usage, but instead overwhelmingly favor alternatives. Create a better product.


Real comparisons of algorithms do not operate on a single image, but large suites of images, where the results are computed as averages. Changing the dataset by removing one image is unlikely to be a "very real loss."

Alternatives exist, and AFAIK Lena is just used as a "canonical" example to use as a nice picture (like the Zachary Karate Club network).
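
To make the "averaged over a suite" point concrete, here is a minimal sketch (not from any study linked in this thread) of how per-image quality scores could be pooled instead of judging a codec on one picture. It assumes decoded 8-bit pixel arrays (e.g. Uint8ClampedArray from a canvas), and the helper names psnr and averagePsnr are made up for illustration:

    // Minimal sketch: pool a per-image quality metric over a whole test suite
    // instead of judging a codec on a single picture. Assumes equal-length
    // arrays of decoded 8-bit samples for each reference/compressed pair.
    function psnr(reference, compressed) {
        let sumSq = 0;
        for (let i = 0; i < reference.length; i++) {
            const d = reference[i] - compressed[i];
            sumSq += d * d;
        }
        const mse = sumSq / reference.length;
        return mse === 0 ? Infinity : 10 * Math.log10((255 * 255) / mse);
    }

    // pairs: [{ reference, compressed }, ...] covering the whole suite
    function averagePsnr(pairs) {
        const scores = pairs.map(p => psnr(p.reference, p.compressed));
        return scores.reduce((a, b) => a + b, 0) / scores.length;
    }

Run over something like the Kodak set, a single image's win or loss largely washes out of the average, which is the point being made above.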


I understood the complaint to be not about male vs. female, but about the fact that the original is a pornographic picture from Playboy magazine.


> the [uncropped] original is a pornographic picture from Playboy magazine

FTFY


People are too sensitive. I'm a hetero male, and I don't find the image 'fun' at all... it's just an image. It doesn't send a message to anyone. If anything, I don't really care for it for image algorithm tests because the palette is fairly flat.

No matter how you feel about the image itself, though, there is value in continuing to use it... people are familiar with it, to the point of it becoming almost cliché. This takes the focus off of the image and subject and puts it squarely on the image algorithm.


The smallish downside of continuing with it may outweigh the very small upside. The image by itself might not be a big deal, but we're trying to disrupt the pattern of women being used only as objects, not perpetuate it.


If we used a male model as a photographic test subject here, is that not the sexist objectification of males? Should we use an unattractive person or an animal instead? Are you going to attack the very notion of beauty or the fact that women are the "fairer sex"?


Sure, it could be objectification. It's pretty cut and dried in this case since the image was scanned from a magazine almost at random. The woman had no agency in the decision at all.

Of course I'm going to attack that. Women are required to be beautiful or they are deemed worthless, or failures. So they put a lot of work into it from a fear of failure. Men can be beautiful too, especially to a woman's eye which you've never seen through. If men were made to worry about their appearance as much as women, we'd be "fair" too.


There are predominant gender roles. How can we determine if the predominance is wrong, or just natural? All of this assumes the predominance is wrong. If I go to enjoy a burlesque show (a largely female-dominated event that all consensually participate in), did I merely experience the predominance, or did I also reinforce it? If the majority like something even though a minority do not, does that mean the majority are somehow wrong?


>Women are required to be beautiful or they are deemed worthless, or failures.

I am truly sad for you if that is the case in your environment, but please, please do not generalise like that.


Fortunately, it is not the case in my environment. But it is a dominant message.


No, the upside of continuing to use it is to show the politically correct bullies that we aren't afraid, and you aren't going to get everything you want.


CS enrollment is down. I think it's because the industry is full of inconsiderate assholes.


There is nothing inconsiderate about using the cropped image "Lena". Nothing. It is not an offensive image. Only hypersensitive people who choose to be offended by everything find it offensive, and those people should be fought against, not placated.


Look, you're under no real obligation to stop being an inconsiderate asshole. I'm just pointing out that there are practical consequences.

You don't have to be offended by the image. You don't even have to understand why people are offended, that's fine. But if you don't take other people's feelings into consideration, that's the inconsiderate part.

Other people are going to have different opinions than you do. Telling other people they're not allowed to be offended is why you're an asshole.


I'll simplify this for you.

I would be an inconsiderate asshole if the picture was actually offensive, ie, if it was goatse, someone in blackface, etc. But it isn't, and unlike you, I have the courage to make a statement about the image: that it is objectively not offensive. There is no one out there who is actually offended by it. They are all people pushing an agenda.

So your argument seems to be that because some people could be offended by it, we shouldn't use it, even though the notion that anyone could be offended by it is ridiculous on its face. So what, then, about things like gay pride parades? People can be offended by those, too. According to your argument, we should end that, too. But I'm sure you'd never advocate for that. You'd say something like "it's their right to be in your face in public". And of course it is, even though it can offend people. But that same argument defends people who'd use offensive images in image processing papers, not just ones which are just a target for politically correct assholes. You only care about people being offended if they're on your side.

Gay pride parades are not bad things, and just the same, a woman posing for Playboy is not a bad thing, and using the cropped image of the face of a woman's nude picture is even less offensive than a gay pride parade. And nobody should be pressuring anyone to stop either of those things.


I'm just saying that anyone who IS offended by it is a whiny, oversensitive asshole who wants to force the world to accommodate them, and that is wrong. Standing up to them does not make me the asshole: babying them makes you the asshole.

Edit: just take another look at the cropped picture "Lena". If your argument is that that is somehow offensive, you are wrong, plain and simple.


You can't understand the meaning of a symbol by studying the symbol. You have to go look at what it means. It's like analyzing a swastika and deciding that neo-nazis aren't offensive. The problem is that only men were part of that project, they treated a woman in an exclusive manner, and now that image is a reminder of that sexist hierarchy.


> I'm a hetero male ... it's just an image.

That's the exact privilege you get to enjoy as a hetero male. You get to ignore things like this when your female colleagues (if you're lucky enough to work with any) have to look at the image and be reminded that many people continue to view them as objects.


Not sure if you're trolling, but I have a spare minute...

- If I post a picture of a cute kitten, does it mean I look at a cat as an object? That I will only under-appreciate all animals because I liked a photo of one?

- If I post a picture of my mother or father, am I objectifying them?

- What if I take a picture of a street performer who's doing some awesome thing and post it... am I reducing them to simply an object of attraction?

I just said it didn't affect me one way or another... that it is just an image. If someone thinks this image means I view a woman as an object, they are projecting THEIR OWN ISSUES onto me. That is the ONLY way that it could be offensive, in that they are choosing to be offended by something that 99% of the population think nothing about.


This isn't just about having a picture. I'm not sure how you got that impression. It's about the context where it comes from and the repeated use of the picture in a scientific context.

If you have a picture of your parents on your desk, that's great and I respect your love for your family. If you keep using a crop of your mother from her Playboy centerfold in the '70s as a test case for your algorithms, and you encourage other scientists to use the same picture, I might be a bit weirded out.


If you keep using a crop of your mother from her Playboy centerfold in the '70s as a test case for your algorithms, and you encourage other scientists to use the same picture, I might be a bit weirded out.

How culturally normative of you.


>If you keep using a crop of your mother from her Playboy centerfold in the '70s as a test case for your algorithms, and you encourage other scientists to use the same picture, I might be a bit weirded out.

And who wouldn't be! But that begs the question, is there a son or daughter of Lena Soderberg who is also a researcher or software engineer working in image compression algorithms?

http://www.ee.cityu.edu.hk/~lmpo/lenna/Lenna97.html


It does not "beg" any questions. It is still a question of context. Sexualizing women has nothing to do with image processing research.


>It does not "beg" any questions.

Yes it does. geofft wants to imply that it is somehow equivalent to using a nude of one's own mother, which is absurd.

>Sexualizing women has nothing to do with ...

Tell me why you think this one[1] is okay, as opposed to the Lena image; and what it has to do with whatever it is you do.

[1] http://jeremykun.files.wordpress.com/2014/09/gala-dali.jpg?w...


Sure. This picture was painted by Dali, inspired by a Scientific American article about the minimum number of pixels needed to recognize an image. He was making an artistic statement that has a direct connection to Fourier analysis (and if you read the context article that image came from you might know this). This is a painting whose context is "fine art" (i.e. you can have tasteful nudity). That is a different context from pornography, whose purpose is sexual.

You clearly do not understand how the context of an image matters in interpreting it.


>This picture was painted by Dali, inspired by a Scientific American article about the minimum number of pixels needed to recognize an image. He was making an artistic statement that has a direct connection to Fourier analysis

Blah blah blah blah blah blah blah blah... The Lena image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image. It is used by thousands of researchers in the field since image processing was a field.

So, your image isn't "sexualization" because Dali and "fine art" ?

>You clearly do not understand how the context of an image matters in interpreting it.

You cannot articulate a difference between Lena and the Dali that doesn't rely upon subjective opinion.


> So, your image isn't "sexualization" because Dali and "fine art" ?

Yes, there is such a thing as tasteful nudity, and pornography is generally not. This is why our society allows children into art museums but not strip clubs. What is so hard to understand about this? Context is subjective, but there are agreed-upon standards for professionalism. Just because the standard was different fifty years ago does not make it a good idea today. If you can't think of any "traditions" from fifty years ago that professional scientists unanimously agree are wrong today, then you're quite ignorant.

> Blah blah blah blah blah blah blah blah...

Clearly I am the one failing to articulate things.


>Sexualizing women has nothing to do with image processing research.

Good thing they don't. Or how are they?


>It's about the context where it comes

Why? It's the picture and not the context. If it wasn't posted here on HN, I wouldn't have seen it.

>If you have a picture of your parents on your desk, that's great and I respect your love for your family. If you keep using a crop of your mother from her Playboy centerfold in the '70s as a test case for your algorithms, and you encourage other scientists to use the same picture, I might be a bit weirded out.

Do you know in how many portrait photos of your co-workers they are wearing pants? Do you think it is important that they do? Also, if you get weirded out, that's your problem. Do you know how many people get weirded out by you because you browse HN?


Why don't you find a set of great test images (under suitable licence) then run those through the wide variety of image formats at different settings, and host it all on a website? (Put a few ads on it and you have nearly passive income, if the hosting isn't too much)


Did you read the article? There were _several_ great test images used in the Mozilla study under suitable licenses, and Bellard did test with some of those. The Lena picture here wasn't particularly scientifically useful, it was presumably just used out of a sense of tradition. (Mozilla didn't use it.)

http://people.mozilla.org/~josh/lossy_compressed_image_study...


If it matters so much to you, take that image set and do exactly as DanBC suggested.

Create a site with all the different image formats out there with a larger test set.

Be willing to take matters into your own hands and change the norms and traditions in that field of study. The best way to accomplish that is by offering something better. I would say that a wiki with many test images for many algorithms is something better. If you do a good job, people will adopt your approach and tradition will change. Complaining on HN isn't going to change tradition.

Introducing new ideas is usually a more successful long-term approach than "book burning" because you disagree with the content.


What do you know, there are other attractive women in those photosets, including at least one that appears to be wearing roughly as little as Lena. It's almost like our brains are wired for facial recognition and thus attractive faces are good test subjects for image compression algorithms or something.


Sounds like we agree, then -- there are enough other images with all the properties of the Lena image that make it useful for image-compression tests that there wasn't a scientific reason to use the Lena one in this analysis.


But none of those other images have been used as widely for as long to provide the same utility for comparison.

Getting the equivalent for all other papers involves getting the source code for all the other papers, and running those algorithms against these other images and making those images easily discoverable by someone who might come across these other papers (they are unlikely to contain links to your newly generated set).

What you're essentially advocating is reducing the available comparative evidence in an entire field of study because some people object to its origin, many of whom don't even work in this field of study.

I'm sorry, but when you can't even admit that eliminating this image from use might have negative consequences from a utility point of view, it makes it very hard for others to accept your position.

Losing a common point of comparison in a field of study working on optimization problems is a step back. If you advocate for "diluting" the image by getting other reference images in common use by practitioners, then I can back that up. That approach actually balances the concerns of both sides and generally enriches the field (more common reference images can only serve to aid in objective and subjective comparison)


Are any of the pictures cropped segments of pornographic images?


I dunno, it's kinda hard to tell with this one:

http://r0k.us/graphics/kodak/kodim04.html

The fact that you call the uncropped image "pornographic" (which, yes, it technically is) when Playboy centerfolds are rather tame by modern standards kinda hints at the real problem here: the boring old American prudishness that has been masquerading as feminism as of late. I sincerely could not care less if Bellard had used Billy Herrington's toned ass as a test subject. The sooner that our society can get past this ridiculous fear of sexuality and the human body, the better.

Perhaps not coincidentally, Bellard is from France, a culture much more accepting of nudity and sexuality than ours. I would not be surprised if he had simply not considered the possibility of prudes across the pond getting outraged over such an innocuous thing.


Now show me a picture of a man from that dataset that could possibly be construed as sexual.

This is not a matter of prudishness. I would be totally fine (see my other comment in this thread) if researchers used both male and female crops, but they don't. So to make your point you need to show me all of the male French researchers who are accepting of male nudity in their image processing research. I would be very surprised if you could find me a single one.


>Now show me a picture of a man from that dataset that could possibly be construed as sexual.

>This is not a matter of prudishness.

There seems to be a contradiction here.

>I would be very surprised if you could find me a single one.

I agree, not because I think male French researchers would give a rat's ass about the cropped head of a naked guy appearing as a test subject, but because I doubt any would be foolish enough to poke the proverbial hornet's nest and incur the wrath of self-righteous west coast white guys tweeting on their iPhones.


You're missing the point, which is that it's not the specific image, but that testbeds use suggestive photos of attractive women and not men. You want people to feel free to use nudity or suggestiveness. Fine. I want them to be equal about which suggestive photos they use. As in not all women.

And now you're saying that French researchers don't care, but they do because they don't want to anger Americans? (And somehow they're all white, live on the west coast, and have iPhones?) What a crazy web you're weaving.


>This is not a matter of prudishness. I would be totally fine (see my other comment in this thread)

You mean this one? "Sexualizing women has nothing to do with image processing research." How positively libertine of you.

>You're missing the point

And you're missing my point, which is that if you really weren't a prude, you wouldn't care either way. Straight men like pretty women, straight women like handsome men, gay men like handsome men, gay women like pretty women, some people like some combination of the above, some guys 40 years ago liked a pretty woman, who gives a fuck? Stop being offended by extremely tame displays of sexuality.

>And now you're saying that French researchers don't care, but they do because they don't want to anger Americans?

You city slickers don't seem very good with jokes, do you?

I'd tell you to "lighten up" but I forgot that's code for "long live the patriarchy."


I open a men's magazine and see photographs of women.

I open a women's magazine and see photographs of... women.

These magazines exist because people are buying them. WOMEN are buying them. I don't see many men on the editorial boards of these womens' magazines, pushing for more attractive women in the magazine in ever-more-scantily-clad garb.

What do I conclude, other than that a lot more attention seems to be being "naturally" paid to women in general, especially in a superficial context?

What do I make of the fact that more than double the number of women are bisexual, than males are? And thus, a significantly higher percentage of people of all sexes find the female form interesting?

Women: The cause of, and solution to, all the world's sexism


My comment is not a response to the article, but is a response to loudmax.


> photo of beautiful young Lena Söderberg

You are either subtly trolling, or have an incredible amount of cognitive dissonance going on here. You both objectify her by using those adjectives ("beautiful, young") AND THEN go on to state that use of a female model is sexist. Is use of a well-dressed male model sexist? If not, then your entire PREMISE is, in fact, sexist! Not to mention this photo is ENTIRELY asexual. Are you suggesting we avoid use of human models entirely to dodge sexism? Good luck with that.


It's disconcerting how the PC movement in the West converges with the Taliban movement in the Middle East.


90% of all photos posted on the internet contain a human face. Having a human face as a test picture makes perfect sense, and this one has been used forever, so it's a good benchmark that people can compare and relate to. It's not like she's totally naked or anything like that either.


Clearly we should replace it with a more pornographic test image, since that's the most common use case.


What about introducing a new male equivalent image of Lena instead? This approach would at least balance things out while maintaining an image that has become a standard to compare things to for a long time.

If I were someone working on image compression techniques I would probably have seen many many algorithms and their output using that image so when I see a new algorithm and a new Lena, I might look at the results and think "Hey, this is similar to this other approach I am familiar with"

So many photos taken on any given day are photos of people, so it's simply not practical to eliminate a human subject from test images. If we're going to have an image of a human subject, that person is going to have to be somewhere on the gender spectrum. More data (images of two people) is preferable to no data (no images of people to avoid the risk of offending anyone)


There's a difference between using a picture of a human face because we've determined it's useful for the test at hand, and using a 1970s Playboy scan because that was the most convenient thing for early computer-graphics researchers in the 1970s.

In particular, the (appropriately-licensed) Kodak image set used in the Mozilla study includes multiple pictures of women, none of which are Playboy scans. The complaint is not that there's a picture of a woman, it is that _this_ particular picture keeps being used despite no particular scientific reason why it's optimal for comparing graphics formats.

http://r0k.us/graphics/kodak/


So the only real contention people have here is the source publication of the original image, right? Because AFAICT, the cropped image (as it's always been used in practice) is no more or less offensive than kodim04.png from the Kodak set.

Furthermore, why does the licensing matter? Near as I can tell every use of Lena, except for by a for-profit company, constitutes fair use. The image, every time I've seen it, has been used for non-profit educational work. It's never reproduced in its entirety. The use of the image for this image compression purposes has no impact on the original market for which the image was originally created. Does this not meet the requirements of fair use?

That all said, I agree that if there are better images available for this purpose, those should be used as well, but that is an entirely separate issue from whether or not Lena should or should not be used. The happenstance provenance of the image is really the only controversial detail, and if we go back to the history of why it was used, it was mere chance:

    Alexander Sawchuk estimates that it was in June or July of 
    1973 when he, then an assistant professor of electrical 
    engineering at the University of Southern California Signal 
    and Image Processing Institute (SIPI), along with a graduate 
    student and the SIPI lab manager, was hurriedly searching the 
    lab for a good image to scan for a colleague's conference 
    paper. They got tired of their stock of usual test images, 
    dull stuff dating back to television standards work in the 
    early 1960s. They wanted something glossy to ensure good 
    output dynamic range, and they wanted a human face. Just 
    then, somebody happened to walk in with a recent issue of 
    Playboy.[0]
Put yourself back in 1973. Imagine all the different sources of high quality glossy images of a human face with high dynamic range you might have easy access to 41 years ago. I have a hard time thinking of content that would have been available at that time that would have rivaled Playboy. I have no qualms with people judging things from today in today's terms, but presentism [1] tends to rub me the wrong way. If you want to judge something, don't do so anachronistically.

Anyways, I want to re-iterate my main point, which you did not address:

    If I were someone working on image compression techniques I 
    would probably have seen many many algorithms and their output 
    using that image so when I see a new algorithm and a new Lena, 
    I might look at the results and think "Hey, this is similar to 
    this other approach I am familiar with"
There is value in consistency/continuity. Starting to use the image of Fabio Lanzoni like Deanna Needell and Rachel Ward did is a "lossless" approach to dealing with the controversial provenance of Lena.

[0]: http://en.wikipedia.org/wiki/Lenna#History

[1]: http://en.wikipedia.org/wiki/Presentism_%28literary_and_hist...


I find the argument that there would be a lack of high-quality images in 1973 utterly unconvincing. Life magazine or National Geographic could have been used. It was just a historical accident, and contrary to what you claim, I do not believe there is a pressing reason to keep using the exact same image; it's merely a relatively unquestioned tradition.


Almost none of the images from Life magazine are studio images. The quality of the images varies greatly depending on the equipment used by the photo-journalist (often viewfinder Leicas, not large-format Hasselblads).

National Geographic has been printed in a format that has approximately half the surface area of Playboy. From a cursory search of NG covers from 1973, the images aren't the same quality as those achievable with studio photography equipment available at the time.

> It was just a historical accident, and contrary to what you claim, I do not believe there is a pressing reason to keep using the exact same image, it's merely a relatively unquestioned tradition.

Many people (including myself) do not believe there is a pressing reason to not keep using the exact same image. I do think there is good reason to publish research with additional test images including the Kodak set linked to above. "Dilution" of the image would achieve the same goal some are advocating here.

Lastly, I find the following heuristic valuable:

    function hasValidOpinion(person) {
        return person.hasPublishedContentInField();
    }
Near as I can tell from the profiles of people participating in this bikeshed, that function produces false for you, geofft, loudmax and myself. The truth is that none of our opinions are relevant here since this isn't our bikeshed to paint.

The easiest way to break from this tradition is to produce novel research in the area of image compression and publish papers without using this image as a reference. Feel free to publish a paper without it if this matters so much. In the meantime, it's not really fair to be out there criticizing those who are from the comfort of your armchair.


I'm not convinced that other magazines couldn't have provided an image of similar quality, your arguments are rather hand-wavy. The notion that only a person from the same field can have a valid opinion on this is frankly ridiculous. It's plainly ad hominem / argument from authority. I don't care to convince you that we should change the practice of using this picture, but I'm curious to know why you would think we should NOT change it, given that there happen to be people who believe that we should.


Find one. I've been through a fair amount of old periodicals from the 60s and 70s and I'm at a loss to think of something comparable. A copy of Vogue (est 1892) or W magazine (est. 1972) would probably have comparable images, but would probably not have been readily available at the time to the demographic doing this kind of research.

We should not change it for the reasons I made in several sibling comments: we lose a common point of comparison.

You can dilute its relevance by providing many alternative reference images for many algorithms and popularizing those alternative reference images. No one in the field is going to complain about having more common reference images, but they sure as heck are going to see you as a "book burner" if you try to eliminate the one common image without first providing alternatives. Merely stating there are other lossless images is not sufficient. You need to provide those same images after having been processed with every relevant algorithm someone might need to know.

How about you start off by doing this work for Bellard. Take the Mozilla set, run them through BPG and send the images to Bellard for inclusion on his website. Enrich us. Don't make us poorer.

> The notion that only a person from the same field can have a valid opinion on this is frankly ridiculous. It's plainly ad hominem / argument from authority.

The notion that someone uninvolved can have an opinion and expect others to shoulder the burden of conforming to that opinion is even more ridiculous.


>A 2012 paper on compressed sensing by Deanna Needell and Rachel Ward used a photo of the model Fabio Lanzoni as a test image to draw attention to this issue

http://en.wikipedia.org/wiki/Lenna#Controversy


Well, now that they have done it, straight males can "try on" what the oppression of Lena must feel like for women:

So everyone, please respond: Now that there's an "attractive" man in use for image testing, do you feel repulsed or discouraged that they would sexualize/objectify a man like that?

Do you feel pushed away from this industry?

Personally, this exercise is helping me see the light of the "are we treating women as delicate flowers who can't handle it" viewpoint. That image of Fabio is not threatening to me. Anyone who tries to tell me it is threatening is, I think, being overly sensitive.

I'm utterly comfortable with people expressing this level of sexuality in the academic/public area. It's more akin to a "schoolkid crush" than to strip tease.


I think we should also retire the mandrill image since it strikes me as speciesist.


I just stopped reading after I got to that image. Why, with all the photos you could possibly choose, use a sexually suggestive one? And one whose context is the kind of boys-club software engineering we should be doing everything to get away from. Sigh.

Also, looking at the other comments, the fact that you happen to be fine with that image is neither here nor there. It certainly doesn't cancel out the fact that I'm not.


I think everyone is forgetting the fact that the uncropped photograph is pornographic. So the rule should be "If you use Lena, use a similarly cropped picture of a nude male model."


People aren't forgetting. It's a disgusting "in" joke. The entire point of including it is because it's pornographic.

Disappointing, but unsurprising, to see HN defending the use of literal pornography in a professional setting. I thought we moved past the Ruby on Rails jerks defending porn in a professional setting as harmless fun.


Bellard is a genius.


Fabrice Bellard strikes again!


> Supported by most Web browsers with a small Javascript decoder

Well, so much for that. Executing code in order to view an image is just begging to be exploited…


The results only show marginal improvement.

What's needed is a compression method that doesn't introduce artifacts on hard edges, as JPEG does, but is otherwise no worse at compression than JPEG. Then we wouldn't need to do some things in JPEG and others in PNG, and we'd be spared the pain of JPEG screenshots. Much better results on the Tecnick image set (which is mostly hard edges) would indicate one had been found. The results only indicate modest improvement in that area.


The results only show marginal improvement.

The improvement is far from marginal. In particular:

... I just tried to link you to a BPG image, and discovered that I can't.

Well, I'm going to ignore that little flaw for the moment, because that's just a browser feature. If BPG catches on, that's sure to change.

Anyway, the improvement is far from marginal: http://a.pomf.se/cdywsc.png

In particular, look around her face, eyes, and the background. The JPG is not just worse, but in fact much worse.

A more serious flaw is that it doesn't support animation. It doesn't need to be a video format. It just needs to be able to play a sequence of frames in succession. This is as easy as including a header that specifies how many frames are in the animation and the duration of each frame, followed by the image data itself. The fact that PNG doesn't have this has plagued the format since the internet became popular.

That may seem like "a video format," but it's not. Video decoders optimize for inter-frame compression, not intra-frame compression, so it's a different problem altogether. BPG doesn't need to do everything, but it should probably have basic animation.
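
For what it's worth, a hedged sketch of the kind of header the parent describes (a frame count plus per-frame durations, followed by the image payloads) might look like the following; the layout and the parseAnimationHeader name are invented for illustration and have nothing to do with BPG's actual bitstream:

    // Illustrative only: parse a hypothetical animation header consisting of
    // a 16-bit frame count, then one 16-bit duration (in ms) per frame, with
    // the per-frame image data starting right after the duration table.
    function parseAnimationHeader(buffer) {
        const view = new DataView(buffer);
        const frameCount = view.getUint16(0);
        const durations = [];
        for (let i = 0; i < frameCount; i++) {
            durations.push(view.getUint16(2 + i * 2));
        }
        const dataOffset = 2 + frameCount * 2; // frame payloads follow here
        return { frameCount, durations, dataOffset };
    }

A JavaScript decoder could then decode each frame payload as an independent still and repaint its canvas once per duration, without needing any inter-frame machinery.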


That depends on what you mean by "worse" and "very". I think for most use-cases, those differences are not something likely to be noticed by the majority of users.


The purpose of a test like this is to measure visual performance, and the JPG result is performing very badly.

Open these two images in a separate browser tab, then switch back and forth between them:

http://a.pomf.se/fsnfxz.png

http://a.pomf.se/lfzyrh.png

Those are the upper images of the test. Also try these, the lower images:

http://a.pomf.se/bgqcag.png

http://a.pomf.se/hxlwcm.png

Again, JPG is not merely worse, but much worse. In fact, JPG makes it look like she's wearing a hat made of crosshatch material at the top, when in fact the top is composed of rings of fiber, not crosshatch.


Animation is frequently overused and unnecessary. If you're going to do animation, you should have to go through an animation-focused format.


a compression method that doesn't introduce artifacts on hard edges, as JPEG does, but is otherwise no worse at compression than JPEG

JPEG 2000? JPEG XR? Wavelets tend to produce softness instead of ringing or noise.


JPEG 2000 was encumbered by patents the last time I looked into it. Has the situation changed much?



