Look at the JPEG settings recorded in the details of the images themselves.
Subsampling is turned off for some and on for others, which leaves far fewer bytes to work with at a given target size.
This is a common problem with Photoshop users: they use the highest quality settings, which turn off subsampling, but then reduce the file-size allotment, which gives the encoder less room to work with. If you have a target file size, you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.
This entire test has to be redone.
Use SUBSAMPLING OFF and PROGRESSIVE ON for all (jpeg) images for the web.
(and do not use default photoshop settings ever for web images)
ps. Every time you save a file or image in Adobe products, it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install. Not only does that add extra file size, it means every image you post can be traced around the web. Use jpegtran or jpegoptim to strip it.
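For the curious, the metadata stripping those tools perform is conceptually simple: walk the JPEG's segment list and drop the APPn/COM segments. Here's a rough stdlib-Python sketch of the idea (my own illustration, not the tools' actual code, and no substitute for them):

```python
# Simplified sketch of what `jpegtran -copy none` / `jpegoptim --strip-all`
# do: walk the JPEG segment list and drop APPn/COM metadata segments.
# Assumes a well-formed file; ignores edge cases like fill bytes.

def strip_jpeg_metadata(data: bytes) -> bytes:
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        assert data[i] == 0xFF, "expected a marker"
        marker = data[i + 1]
        if marker == 0xDA:          # SOS: entropy-coded data follows, copy the rest
            out += data[i:]
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + length]
        # 0xE0-0xEF are APP0..APP15 (JFIF, EXIF, ICC, Photoshop IRB, Adobe, ...);
        # 0xFE is a COM comment segment. Everything else is structural.
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):
            out += segment          # keep DQT, SOF, DHT, etc.
        i += 2 + length
    return bytes(out)
```

The real tools do more than this (jpegtran's `-optimize` also rebuilds the Huffman tables for a further size win), but the metadata removal itself is just segment surgery like the above.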
I would be very happy to see a few examples of a photograph compressed with the OP's method and with your method.
It should say SUBSAMPLING ON and PROGRESSIVE ON
Not subsampling off. Off is the incorrect setting and makes much larger images (or reduces the available space when restricting file size).
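A back-of-envelope calculation shows why off costs so much: with 4:2:0 subsampling the two chroma channels are stored at half resolution in both axes, so the encoder has only half as many samples to transform and entropy-code before quality even enters into it. A small illustrative Python sketch (numbers only, no actual encoding):

```python
# Why 4:2:0 subsampling shrinks JPEGs: chroma is stored at reduced
# resolution, so there is simply less data to compress.

def samples_per_pixel(h_factor: int, v_factor: int) -> float:
    """Average stored samples per pixel: one full-resolution luma plane
    plus two chroma planes downsampled by h_factor x v_factor."""
    return 1 + 2 * (1 / (h_factor * v_factor))

no_subsampling = samples_per_pixel(1, 1)   # 4:4:4 -> 3.0 samples/pixel
subsampled = samples_per_pixel(2, 2)       # 4:2:0 -> 1.5 samples/pixel
print(subsampled / no_subsampling)         # 0.5: half the data to compress
```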
What you're describing sounds to me like someone recommending Text Edit, Apple Script, and Automator to do Unix commands because they didn't know Terminal was in the Utilities folder.
All the extra stuff you don't like is used by design studios. The metadata keeps track of the color profile, thumbnails, comments, and other fields commonly used for managing large libraries of images.
When doing "Save for Web", Photoshop disables subsampling for Maximum and High (the default) and also does not enable progressive by default. It also adds metadata.
How much of your critique still applies to "Save for Web"?
Nobody should be using the normal save dialog for web images, so I'm not sure how much of what you say remains valid.
> Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...
But the real reason to use progressive is that it also produces smaller file sizes.
What we did for the non-tech people was simply tell them to always use setting #6 in Photoshop and enable the progressive setting. Two steps seemed like the most they could handle.
http://i.imgur.com/vct3D.png (best one-shot photoshop settings for web jpegs)
I had to go into Photoshop, save the same image repeatedly under all the different settings, and then examine the resulting JPEGs under different tools to see exactly what it was doing.
It also doesn't help that Photoshop bloats JPEGs by adding hidden Adobe metadata to every file (beyond, and different from, EXIF).
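For anyone who wants to do that kind of inspection without round-tripping through several tools, the relevant facts sit in the file's SOF segment. Here's a rough stdlib-Python sketch (my own illustrative code; it assumes a well-formed file and only reads the luma component's sampling factors):

```python
# Walk a JPEG's segments and report whether it is progressive (SOF2) or
# baseline (SOF0/SOF1), and what the luma sampling factors imply about
# chroma subsampling. Usage: inspect_jpeg(open("photo.jpg", "rb").read())

def inspect_jpeg(data: bytes) -> dict:
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    info = {"progressive": False, "subsampling": None}
    i = 2
    while i < len(data) - 1:
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:                      # SOS: image data follows, stop
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker in (0xC0, 0xC1, 0xC2):        # SOF0/1 baseline, SOF2 progressive
            info["progressive"] = marker == 0xC2
            # SOF payload: precision(1) height(2) width(2) ncomponents(1),
            # then per component: id(1) sampling(1) quant-table(1).
            sampling = data[i + 4 + 7]          # first (luma) component's sampling byte
            h, v = sampling >> 4, sampling & 0x0F
            info["subsampling"] = {(1, 1): "4:4:4 (off)",
                                   (2, 1): "4:2:2",
                                   (2, 2): "4:2:0"}.get((h, v), f"{h}x{v}")
        i += 2 + length
    return info
```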
Here is a technical analysis someone did on the photoshop settings:
I believe if you use "Save for Web", it will strip out most metadata and EXIF from the image before saving. Here's a result of using "Save for Web" (1.jpg), passing through JPEGTRAN with `-copy none -optimize` (2.jpg) and `JFIFREMOVE` (3.jpg):
It can also do a lossless conversion to progressive format.
Most people do not know about it though. JPEGOPTIM is another one.
You can examine what's embedded in the image here: http://regex.info/exif.cgi (but there are better offline tools).
*It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.*
That's a fair point, but if you've got a site that's getting hundreds of thousands or millions of views, or a large number of thumbnails, the one-time effort to shrink image size might be worth it.
Web designers don't "Save" web images, they use "Save for Web" which strips out the extras.
Photoshop won't do 8-bit opacity (as opposed to hard transparency), whereas Fireworks will.
That is simply impossible for two reasons:
1) Retina display is a trademarked Apple phrase and no other company will ever have retina displays.
2) Retina is not a technology.
I think the word technology is being overused these days.
The author was simply talking about high-PPI displays when he said "this new Retina technology", which other companies have already "implemented" in their smartphone displays. Unless one is talking only about Apple products (which is not the case here) the term retina display should not be used.
edit: deleted first sentence, was more off-topic than useful.
Sharp is now producing 5-inch 1920x1080 443 PPI displays. Obviously this is much higher than anything we have ever seen, so it is not a "retina display". What should we call it? Instead of "cornea display" or "very high PPI", I prefer the term "443 PPI".
Useless pedantic "correction" of the day award.
If you want to be fully pedantic though, you're wrong on both counts:
1) Retina might be trademarked by Apple, but nothing stops another company from signing a deal with Apple to use the name for their displays. So, "impossible"? Hardly.
2) Retina is very much a technology. Or rather, what is the definition of technology? Something that requires specific, identifiable construction qualifies as a "technology". In this case, Retina is a high-DPI screen, where "high" means it is impossible or extremely difficult for a person with 20/20 vision to separate pixels when looking from the average viewing distance for that particular class of device. That it's also the name of a SPECIFIC implementation of such things by a PARTICULAR company doesn't matter much; people are not lawyers.
We use a brand name as a substitute for the technology it represents all the time in other fields too. Even PC was once "IBM PC".
So, this raises the question: should this become standard practice from now on? If not, why not?
Poor headline though.
An outdated computer/browser sucks at resizing JPEGs. A two year old high end Android smartphone too.
Minimizing HTTP requests, avoiding FOUC, using only one version of jQuery (or, even better, none), using CSS sprites... there are countless optimisations which are more important and seldom used.
Retina is a buzzword, and a buzzword resolution. If you target the 0.1% rich hipsters, yeah, it's important. If you have real users browsing at 1024x768 on a four year old laptop with 2Mbps broadband, it's not.
When Amazon starts using retina images, that's when it'll be standard practice.
I have a hard time believing a 2x image resize would be a significant issue, especially if you provide exact image dimensions in your img tag so the renderer doesn't have to wait for the image to download before laying out the page. In any case, I think someone should run some benchmarks to see what kind of an issue it is in practice.
Either way, it's not a bad thing to be forward-thinking with design. The number of 'retina' devices in the world is growing exponentially. At the moment it's largely an Apple problem, but with high-DPI panels now on the market it'll soon be industry-wide.
Old people with an iPad 3 won't see the difference between a non-retina and a retina website. They'll see which is the fastest and the easiest to use (because usually the pixel-perfect guys are not too smart about UI).
Retina-ready is a scam. We simply don't have the tools (SVG support is too weak, there's no <picture> element, no good navigator.connection...) to provide a meaningful retina experience while respecting other users.
Speed trumps "beauty". Most often.
Anyone, old or young, who cares about browsing speed will be running the latest browser, with good upscaling...
Retina isn't a scam; putting an SEO spin on the need to even consider preparing for the future, however, is.
IME the best bandwidth optimization is an adblocker :)
Most of us don't care about those anymore. Anything older than IE8 is out in modern web design.
>A two year old high end Android smartphone too.
Those matter even less. Two years old is near end-of-life, since people get new contracts. And Android, despite its higher market share, sees less web usage (on the 20-80 scale), probably because more of those devices are sold to less tech-savvy users.
In the 60-90 range, differences are always minimal, especially when applied to images lacking detail 'coverage' to begin with (like the test set on the site).
Bottom line: I do think the blogger is onto something.
Even if the "less than original size" thing doesn't always pan out due to inefficiencies in his compression process, it makes sense that in-browser sharpening would allow reduced file sizes for images displayed at lower than native resolution.
PS: Significant savings in JPEG size with barely any perceptible loss of detail can be achieved by anyone with JPEGMini (which, unlike JPEG 2000, is 100% compatible with all browsers these days).
Firstly, the 8x8 blocks become smaller relative to the image. But beyond that, I think it's just in the nature of compression algorithms in general, and lossy compression in particular, to produce better results when there's more source material to work with.
However, I didn't expect it to improve enough to let one cut the Gordian knot that Retina displays have forced upon us by using 2x-resolution images across the board.
I would imagine that not all source images respond equally well.
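On the first point: JPEG's DCT blocks are always 8x8 pixels, so doubling the resolution makes each block cover a quarter of the image's relative area, which is why blocking artifacts shrink on screen. A quick illustration (the image sizes below are made up):

```python
# JPEG always transforms fixed 8x8-pixel blocks, so at 2x resolution each
# block covers a quarter as much of the picture.

def block_fraction(width: int, height: int, block: int = 8) -> float:
    """Fraction of the total image area covered by one DCT block."""
    return (block * block) / (width * height)

one_x = block_fraction(400, 300)    # one block's share of the 1x image
two_x = block_fraction(800, 600)    # at 2x, a quarter of that share
print(one_x / two_x)                # 4.0
```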
Also, using 2x images for all devices will surely quadruple RAM requirements, which might cause performance issues.
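The RAM arithmetic, for the record: a decoded image is held as raw pixels regardless of how small the JPEG file was, so doubling both dimensions quadruples memory. (A sketch assuming the typical 4 bytes per decoded RGBA pixel; the example dimensions are mine.)

```python
# Decoded-image memory cost: width x height x bytes-per-pixel, independent
# of the compressed file size on disk.

def decoded_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    return width * height * bytes_per_pixel

normal = decoded_bytes(800, 600)     # 1,920,000 bytes (~1.9 MB)
retina = decoded_bytes(1600, 1200)   # 7,680,000 bytes (~7.7 MB)
print(retina / normal)               # 4.0
```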
I found that a quality-30 JPEG at retina size generally looks better than a scaled quality-80 and is smaller.
Plus you can zoom in on any Mac, not just an iPad (something I do all the time).
See this gist: https://gist.github.com/3848834
And an OLPC-style device might have different ratios if you're browsing using the e-ink display rather than the lcd.
You want to reduce the size of an image file from X kB down to Y kB. Which method will give better-looking results?
1. Dumb, across-the-board by-two resolution reduction?
2. Smart, perceptually-tuned jpeg compression?
We probably should have been using this all along. That we can also benefit from the extra resolution thanks to touch interfaces and high-dpi displays is icing.
1) He didn't sharpen the small images.
2) He only displayed one type of image - very bright with no shadow detail.
This theory breaks completely if you actually compare apples to apples. Like these two, both 80KB generated from a high quality image:
(No offense to lynx users.)
Snark aside, it seems like high dpi will inevitably become standard in the next few years.