
This test was done COMPLETELY WRONG.

Look at the details of the JPEG settings in the image itself.

Subsampling is turned off for some images and on for others; with subsampling off, a given target file size has far fewer bytes to work with.

This is a common problem with Photoshop users: they use the highest quality settings, which turn off subsampling, but then reduce the file-size allotment, which gives the encoder less room to work with. If you have a target file size, you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.

This entire test has to be redone.

Use SUBSAMPLING OFF and PROGRESSIVE ON for all (jpeg) images for the web.

(and do not use default photoshop settings ever for web images)

ps. every time you save a file or image in Adobe products, it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install. Not only does that add extra file size, it means every image you post can be traced on the web. Use jpegtran or jpegoptim to strip it.
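
A minimal sketch of the stripping step with jpegtran (assuming it is installed; the file names are placeholders):

    # drop all metadata markers and losslessly re-optimize the Huffman tables
    jpegtran -copy none -optimize photo.jpg > photo-stripped.jpg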




Can you write a blog article with this information, please?

I would be very happy to see a few examples of photographs compressed with the OP's method and with your method.


Correction to my suggestions for settings, since I cannot edit the original anymore.

It should say SUBSAMPLING ON and PROGRESSIVE ON

Not subsampling off. Off is the incorrect setting and makes much larger images (or reduces the available space when restricting file size).
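
For anyone working outside Photoshop, the corrected settings look roughly like this with libjpeg's cjpeg (a sketch; the quality value and file names are placeholders, and note that cjpeg reads PPM/BMP input rather than JPEG):

    # 2x2 chroma subsampling (4:2:0), progressive scan, optimized Huffman tables
    cjpeg -quality 60 -sample 2x2 -progressive -optimize -outfile photo.jpg photo.ppm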


Forgive me for asking, but are these settings really relevant? You've said you don't use the Save for Web feature, which is Photoshop 101 for optimizing web images.

What you're describing sounds to me like someone recommending TextEdit, AppleScript, and Automator to run Unix commands because they didn't know Terminal was in the Utilities folder.

All the extra stuff you don't like is used by design studios. The metadata keeps track of the color profile, thumbnails, comments, and other information commonly used for managing large libraries of images.


"Save For Web" has the exact same problem and makes no difference unless the user is proactive and adjusts settings, the defaults are just as bad as the regular "save as".

When doing "Save For Web" Photoshop disables subsampling for Maximum and High (the default) and also does not enable progressive by default. It also adds meta.


As someone else pointed out from looking at your screenshot, you're not using 'Save for Web'.

How much of your critique still applies to 'Save for Web'?

Nobody should be using the normal save dialog for web images, so I'm not sure how much of what you say remains valid.


Counterpoint on progressive JPEG [1]:

> Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...

[1]: http://philip.greenspun.com/panda/images


With web-sized progressive JPEGs, the loading is over quickly, and the benefit is an immediate placeholder on even slower connections instead of blank space.

But the real reason to use them is that they also produce smaller file sizes.


Progressive JPEGs are not smaller. They contain the same data, just rearranged. You can losslessly transform a progressive JPEG into a baseline JPEG and vice versa, without recompressing it. The jpegtran tool, included with the standard JPEG library, can do this.
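
For example (a sketch, assuming libjpeg's jpegtran; the file names are placeholders):

    # baseline -> progressive: same coefficients, scans rearranged
    jpegtran -copy all -progressive photo.jpg > photo-prog.jpg
    # progressive -> baseline: jpegtran emits baseline output unless -progressive is given
    jpegtran -copy all photo-prog.jpg > photo-base.jpg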


Larger images saved as progressive JPEGs tend to be slightly smaller than their baseline equivalents.

http://www.yuiblog.com/blog/2008/12/05/imageopt-4/


If you actually try it using the tool you yourself mentioned, you'll see that progressive JPEGs are in fact smaller.
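
A quick way to check, assuming jpegtran is installed (the file names are placeholders):

    jpegtran -optimize -copy none photo.jpg > baseline.jpg
    jpegtran -optimize -progressive -copy none photo.jpg > progressive.jpg
    # for larger photos the progressive output is usually a bit smaller
    ls -l baseline.jpg progressive.jpg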


Stream compression benefits from locality of data, so the ability to merely rearrange data can be quite beneficial.


Not a fan of progressive myself, but it should be noted that iOS (at least iOS 5) will downsample JPEGs larger than 2 MP; saving as progressive apparently circumvents this 'feature.'


I suppose this feature is there so that the image takes less RAM and is less taxing on the GPU. So strike a balance and choose wisely; you don't want four or so images killing all the unfocused Safari tabs.


I assume you've learned this from somewhere in the bowels of the Photoshop documentation, or from years of domain knowledge? I ask because, as a Photoshop user of five years, I've always wanted to learn a LOT more about the nitty-gritty details of what I was exporting (I am on Hacker News :P), but I never found a solid end-all, be-all resource. Any ideas where I can find one?


I rarely use Photoshop myself. I learned this from having to support Photoshop users and figure out why the images they uploaded to the CMS were so massive, e.g. 400 KB for a 600x600 photo.

What we did for the non-tech people was simply tell them to always use setting #6 in Photoshop and turn on the progressive setting. Two steps seemed like the most they could handle.

http://i.imgur.com/vct3D.png (best one-shot photoshop settings for web jpegs)

I had to go into Photoshop, save the same image repeatedly under all the different settings, and then examine the resulting JPEGs with different tools to see exactly what it was doing.

It also doesn't help that Photoshop bloats JPEGs by adding hidden Adobe metadata to every file (beyond and different from EXIF).

Here is a technical analysis someone did of the Photoshop settings:

http://www.impulseadventure.com/photo/jpeg-quantization.html...


>It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).

I believe if you use "Save for Web", it will strip out most metadata and EXIF from the image before saving. Here are the results of using "Save for Web" (1.jpg), then passing it through jpegtran with `-copy none -optimize` (2.jpg) and JFIFREMOVE (3.jpg):

    330090  1.jpg
    329916  2.jpg
    329898  3.jpg
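
For reference, the jpegtran step above corresponds to something like this (a sketch; JFIFREMOVE is a separate small utility whose exact invocation depends on how it was built):

    jpegtran -copy none -optimize 1.jpg > 2.jpg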


Yup, jpegtran is a must for post-processing Adobe images.

It can also do a lossless conversion to progressive format.

Most people do not know about it though. JPEGOPTIM is another one.

You can examine what's embedded in an image at http://regex.info/exif.cgi, but there are better offline tools.
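
One such offline tool is exiftool (a sketch; the file name is a placeholder):

    # list every embedded tag with its group (EXIF, XMP, Photoshop, ICC profile, ...)
    exiftool -a -G1 photo.jpg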


I don't get it. Why is jpegtran a must?


Because it strips any meta information contained in the file, which reduces file size. It can also losslessly optimize the image. See here for details (I wrote about it a short while ago): http://hancic.info/optimize-jpg-or-jpeg-images-automatically...
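
As a sketch of how that looks in practice with jpegoptim (assuming it is installed and the files are already web-sized):

    # strip all metadata, convert to progressive, and losslessly optimize every JPEG in the folder
    jpegoptim --strip-all --all-progressive *.jpg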


It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.


Not if you have many images on the same page. Then it adds up.


If you place 1800 images on a page, you'll save enough bandwidth to now have 1801.


In response to Radley, who said:

*It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.*

That's a fair point, but if you've got a site that's getting hundreds of thousands or millions of views, or a large number of thumbnails, the one-time effort to shrink image size might be worth it.


How does this translate into something like ImageMagick? I mean, if I have users uploading large (>10 MB) images which I want to transform into web-optimized images, is there a server-side pipeline that would give the most meaningful results?
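
One possible server-side sketch, assuming ImageMagick and jpegtran are available (the dimensions and quality value below are placeholders, not recommendations from this thread):

    # shrink only if larger, strip metadata, use 2x2 (4:2:0) chroma subsampling and progressive scan
    convert upload.jpg -resize '1600x1600>' -strip -sampling-factor 2x2 -interlace Plane -quality 60 web.jpg
    # optional lossless cleanup pass
    jpegtran -copy none -optimize web.jpg > web-final.jpg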


The extra weight is for library management features (color, compatibility, previews, Bridge).

Web designers don't "Save" web images; they use "Save for Web", which strips out the extras.


Aha, the web page talks about "chroma subsampling"; it makes sense now. So it means saving less chroma (color) than luma (intensity) information.


Even that being the case, the higher resolution image would need to have awful artifacting to be worse than the lower res image on a high res display. Scaling up a low-res image in the browser will always make it look fuzzy; you'd need to apply filters to sharpen it up, and you still couldn't recover the detail lost in the original shrink.


You mean like on the teeth and in the eyes in the third image set?


Great point. I use Fireworks instead, especially for PNGs. Fireworks lets you use PNG8, and lets you set the way the transparency is rendered (alpha or index). Large PNG files produced by Photoshop's Save for Web dialog are two to three times the size of the PNGs I generate with Fireworks.


You're doing it wrong then. PNG8 is the same in both products. PNG24 isn't optimized in Adobe products. You need to run it through a PNG optimizer.
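
For example, optipng is one such optimizer (a sketch; the file name is a placeholder):

    # losslessly recompress the PNG in place at a fairly aggressive optimization level
    optipng -o5 image.png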


You're right that PNG optimizers will yield better results, but PNG8 isn't the same in both products.

Photoshop won't do 8-bit opacity (as opposed to hard transparency), whereas Fireworks will.


Is the conclusion wrong?



