Look at the details of the JPEG settings recorded in the image itself.
Subsampling is turned off for some images and on for others; with subsampling off there is far more chroma data to fit into the same target size, leaving fewer bytes for everything else.
This is a common problem with Photoshop users: they use the highest settings, which turn off subsampling, but then reduce the file-size allotment, which gives the encoder less room to work with. If you have a target file size, you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.
This entire test has to be redone.
Use SUBSAMPLING OFF and PROGRESSIVE ON for all (jpeg) images for the web.
(and do not use default photoshop settings ever for web images)
P.S. Every time you save a file or image in Adobe products, it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install, so not only does it add extra file size, every image you post can be traced on the web. Use jpegtran or jpegoptim to strip it.
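If you want to do that stripping from the command line, here is a rough sketch (the filenames are placeholders; jpegtran writes to stdout, jpegoptim modifies the file in place):

    # lossless: drop all extra markers (EXIF, Adobe APP segments, etc.)
    # and rebuild optimal Huffman tables
    jpegtran -copy none -optimize -progressive photo.jpg > photo-stripped.jpg

    # or strip in place with jpegoptim
    jpegoptim --strip-all photo.jpg

Neither step touches the actual image data.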
Forgive me for asking, but are these settings really relevant? You've said you don't use the Save for Web feature, which is Photoshop 101 for optimizing web images.
What you're describing sounds to me like someone recommending TextEdit, AppleScript, and Automator to run Unix commands because they didn't know Terminal was in the Utilities folder.
All the extra stuff you don't like is used by design studios. The metadata keeps track of the color profile, thumbnails, comments, and other information commonly used for managing large libraries of images.
"Save For Web" has the exact same problem and makes no difference unless the user is proactive and adjusts settings, the defaults are just as bad as the regular "save as".
When doing "Save For Web" Photoshop disables subsampling for Maximum and High (the default) and also does not enable progressive by default. It also adds meta.
> Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...
Progressive JPEGs are not smaller. They contain the same data, just rearranged. You can losslessly transform a progressive JPEG into a baseline JPEG and vice versa, without recompressing it. The jpegtran tool, included with the standard JPEG library, can do this.
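For example, with placeholder filenames, the round trip looks like this and is lossless in both directions:

    # rearrange the scans into progressive order, keeping all markers, no recompression
    jpegtran -copy all -progressive baseline.jpg > progressive.jpg

    # and back again (jpegtran emits a non-progressive JPEG unless -progressive is given)
    jpegtran -copy all progressive.jpg > baseline-again.jpg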
Not a fan of progressive myself, but it should be noted that iOS (at least iOS 5) will downsample JPEGs larger than 2 MP; saving as progressive apparently circumvents this 'feature.'
I suppose this feature is there so that the thing takes less RAM and is less taxing on the GPU. So strike a balance and choose wisely, as you don't want four or so images killing all unfocused Safari tabs.
I assume you've learned this from somewhere in the bowels of the photoshop documentation or years of domain knowledge? I ask because as a photoshop user of 5 years I've always wanted to learn a LOT more about the nitty gritty details (I am on hackernews :P) of what I was exporting but I didn't find a solid end-all be-all resource. Any ideas where I can find one?
I rarely use Photoshop myself - I learned it from having to support Photoshop users and figure out why the images they uploaded to the CMS were so massive, e.g. 400 KB for a 600x600 photo.
What we did for the non-tech people was simply tell them to always use setting #6 on photoshop and use the progressive setting. Two steps seemed the most they could handle.
I had to go into photoshop and save the same image repeatedly under all the different settings and then examine the resulting jpeg under different tools to see exactly what it was doing.
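If anyone wants to repeat that kind of inspection without Photoshop, ImageMagick's identify reports most of it; this is just a sketch with a placeholder filename:

    # chroma subsampling factors: 1x1,1x1,1x1 means 4:4:4 (off), 2x2,1x1,1x1 means 4:2:0
    identify -format "%[jpeg:sampling-factor]\n" photo.jpg

    # full dump: look for the Interlace line (progressive vs. none) and embedded profiles
    identify -verbose photo.jpg | grep -Ei "interlace|sampling|profile"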
It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).
Here is a technical analysis someone did on the photoshop settings:
>It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).
I believe if you use "Save for Web", it will strip out most metadata and EXIF from the image before saving. Here's a result of using "Save for Web" (1.jpg), passing through JPEGTRAN with `-copy none -optimize` (2.jpg) and `JFIFREMOVE` (3.jpg):
Because it strips any meta information contained in the file, which reduces file size. It can also losslessly optimize the image. See here for details (I wrote about it a short while ago): http://hancic.info/optimize-jpg-or-jpeg-images-automatically...
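If you want to automate it the way the linked post suggests, a one-liner over a directory is enough; this is only a sketch with a placeholder path:

    # losslessly optimize the Huffman tables and strip all metadata, in place
    find /path/to/images -name '*.jpg' -exec jpegoptim --strip-all {} +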
*It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.*
That's a fair point, but if you've got a site that's getting hundreds of thousands or millions of views, or a large number of thumbnails, the one-time effort to shrink image size might be worth it.
How does this translate into something like ImageMagick? I mean, if I have users uploading large (>10 MB) images that I want to transform into web-optimized images, is there a server-side pipeline that would give the most meaningful results?
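One possible sketch with ImageMagick's convert, following the subsampling-off / progressive-on advice upthread (the size, quality, and filenames are placeholders you would tune yourself):

    # shrink (never enlarge), strip metadata, progressive scan, no chroma subsampling
    convert upload.jpg -resize '1600x1600>' -strip \
        -interlace Plane -sampling-factor 1x1 -quality 80 web.jpg

    # optional extra lossless pass to squeeze out the remaining bytes
    jpegtran -copy none -optimize -progressive web.jpg > web-final.jpg

Swap -sampling-factor 1x1 for 2x2 if you would rather have 4:2:0 subsampling.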
Even so, the higher-resolution image would need awful artifacting to look worse than the lower-resolution image on a high-res display. Scaling up a low-res image in the browser will always make it look fuzzy; you'd need to apply filters to sharpen it up, and you'd still lose the detail that was discarded in the original shrink.
Great point. I use Fireworks instead, especially for PNGs. Fireworks lets you use PNG8, and lets you set the way the transparency is rendered (alpha or indexed). Large PNG files produced by Photoshop's Save for Web dialog are two to three times the size of the PNGs I generate with Fireworks.