Do bigger images mean improved conversion rates? Three case studies (econsultancy.com)
84 points by scholia on Apr 1, 2013 | 40 comments

1. Bigger images usually improve click-through rates. Why? There are a number of reasons. An image is used to tell part of the sales message. People relate ideas to images and understand the message you are trying to send better. Also, a lot of people love reading what is under images. As a result, your copy gets read more often. A bigger image amplifies this by turning the landing page into a sort of short story.

2. Big buttons work. They are the call to action, and big calls to action get a lot of attention.

3. When picking images, be mindful of the market you are selling to. The race of the people in the images is very important. Not because of racism, but because people relate to those who look like themselves. A self-taken photograph usually nets better results than stock photos, even one from a mobile phone.

4. If you are selling a service, include a photo of the dashboard/main area on the main page. People want to see what it looks like without signing up.

There is an old adage, "a picture is worth a thousand words," for exactly that reason. It helps people scroll less and get the message without having to read too much text.

All one needs to do is consult the artist within and choose the right image for better conversion. That's exactly point no. 3, like you said, kudos. IMHO, images and their messaging are probably one of the places where art and math can converge.

"There is an old adage A picture is worth a thousand words for exactly that reason. It helps people scroll less and get the message without having to read too much text."

I may be combining your first and second sentences together in a way that you did not intend, but I think the adage is older than scrolling ...

Yes, that was unintentional. My apologies, English is not my mother tongue.

Yes, they do converge. I have a lot of data from using images to increase conversion. Such as using simple pie graphs to demonstrate price/value over competitors, or Venn diagrams to help people choose which option is best for them.

The cases presented don't give much evidence for the impact of bigger images on conversion rates. Only the first case seems to be a direct comparison of images of different sizes. In the two others, the copy and design in general have changed significantly as well. In the second case, the image on the page is actually the same size; it's the button that has been resized.

In general, though, I do completely agree with the article that the only way to find out what works is to test, test, test.

Examples 2 and 3 as presented in the article couldn't be more horrendous examples of invalid inferences from test results if the author had tried.

It's the cargo cult approach to testing: "let's change everything and if the revamp increases conversions we'll guess what factor had the most effect".

Only testing specific examples is not enough. Without testing against a theory, you gain little knowledge from these experiments. I would like to see reports of people testing their beliefs about how web design decisions affect user behaviour, instead of just finding out which design works better for a given site. Of course this is more time-consuming, but it will also yield more valuable insight.

Also, I think it is important to include feedback from actual users in your test results. How did they feel using the different sites? What are the users' feelings about your own concerns (invisible content below the fold, ridiculously big buttons, etc.)?

Without these considerations you are just poking around the mud with a stick.

> Responsive web designers know that on many sites, as few as 20% of visitors will bother to scroll down.

Is this really an "understood" truth? In our own study for our own site, we found nearly everyone scrolled the page all the time, and that seems to be the case with every colleague I've ever talked to as well. Who goes to a page and says, "fuck it, that scroll wheel is just way too damned difficult"?

No idea where they got that number, or what a "responsive designer" is (answers phone calls quickly?). This myth has been debunked long ago.

A responsive designer in this context is somebody who designs webpages to scale to screen width. So the same codebase services smartphones and 1900x1200 browser windows.

The thing that is puzzling to me is the whole thing about people scrolling & responsive design. How do you design something to avoid scrolling when your layout is designed to accommodate any screen size? It seems like those concepts are in conflict.

Link to the source (http://www.nngroup.com/articles/scrolling-and-attention/) is in the comments.

However, as the author admits in the comments (after being corrected by a reader), he has interpreted the results of the source incorrectly. The cited report actually only says that "Web users spend 80% of their time looking at information above the page fold. Although users do scroll, they allocate only 20% of their attention below the fold."

That could be because we tend to put the most important stuff above the fold. They analyzed real websites that (we would think) would tend to do that.

My guess is: if you capture their curiosity, they will scroll. Otherwise, they probably won't. Simple, I know, but I'm going with it.

I have been surprised to observe the number of older people who still click on the up and down arrows in order to scroll. Moving the cursor onto the scroll arrows takes precision and for someone with poor mouse skills is even more difficult.

Using the keyboard shortcuts is just faster and more convenient if you don't have a scroll wheel.

In the case of an online auction, it seems pretty obvious to me that more detailed images of the item being sold will increase interest (I've been scrolling past lots of potentially good deals on graphics cards on ebay recently, because there wasn't a detailed photo of their output sockets...)

In the other cases, it seems more like "a change happened" and "conversion rates went up" - far too many changes at once to imply one specific change caused it :-/

The better the product content you present to the customer (bigger pictures, different product shots, video, reviews, comparisons, product alternatives, better copy, detailed descriptions, line-art with dimensions, etc.), the better the conversion rate.

It's a big differentiator when the same (or equivalent) product is sold by multiple merchants. Customers in my experience are even willing to pay a premium to the merchant with the better content, presumably because they seem more legitimate and less-risky.

How can you call something a baseline control with completely different messaging and layout than the variation? The last two examples can't even come close to attributing the increase in conversion to larger images. This is just an advertisement for WhichTestWon. Coincidentally, WhichTestWon is the author's employer...

I would love to see more case studies on mobile app design. For example, how big should images within a mobile app be to increase user engagement? Especially inside a list view, bigger images can mean fewer items, but might still increase conversion because it looks more appealing.

I think it looking more appealing is the key. Sure you can fit less, but if people don't want to use it, it doesn't really matter does it? People are usually more willing to make subconscious tradeoffs when it looks, feels, and works awesomely.

I understand all your points. But that is why I would like to see some mobile-specific case studies, just to help and give some basic rules of thumb. Otherwise all you have is opinions. In the end you obviously have to test your different designs/approaches. But some basic, well-tested rules would help a lot when starting the design.

If you find this mythical mobile case study please post it here.

Massively improved conversion rates... on what sample size? Can we trust these results?

So try it for yourself. Track results. And make sure you do the statistics right to know if you're looking at signal or noise. (Early noise can be very, very impressive.)
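For the "do the statistics right" part, here is a minimal sketch of testing whether a measured lift is signal or noise, using a standard two-proportion z-test (stdlib only; the conversion counts below are invented for illustration, not from the article):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.
    Uses the pooled-proportion normal approximation; a sketch, not a
    replacement for a proper stats library."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF via erfc
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# A "50% lift" on tiny numbers: 2/10 vs 3/10 visitors converting.
z, p = two_proportion_z(2, 10, 3, 10)
print(round(p, 3))  # nowhere near significant

# The same rates at 1000 visitors per variant: 200/1000 vs 300/1000.
z, p = two_proportion_z(200, 1000, 300, 1000)
print(p)
```

The point: the identical 20% vs 30% split is meaningless at n=10 per variant and overwhelming at n=1000, which is why raw conversion counts matter more than the headline percentage lift.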

It would be nice to see the actual numbers to judge whether the results are statistically significant.

At first blush all but the last Dell treatment are absolutely horrendous to my eye, so I just don't know how to even judge these.

How would you calculate what would constitute statistically significant results for each of these sites? How about any other site?

It doesn't really have anything to do with the site... it has to do with what the difference is vs. the total size of the sample. If you see a 50% increase but only had 3 people in your sample, that is not statistically significant because the probability of that happening by chance is so high. To make this concrete, if the actual proportion was 4:1 against that figure (as in, you "should" have measured a whopping decrease to 25% the original conversion), you'd still measure the totally incorrect 50% increase in one out of every ten attempts at that measurement.

I failed to make the point of my question clear enough. Nickpotier critiqued the results without offering up any numbers. I wanted to see what he thought the numbers should have been for each example given.

Roughly, a good sample size is around -log(d/2)/(2 e a^2), where d is the significance you want and a is the difference in conversion rates you hope to measure. See http://www.win-vector.com/blog/2013/03/a-bit-more-on-sample-...
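Taking the quoted rule of thumb at face value (my reading assumes the natural log and e as Euler's constant; check the linked post for the derivation), a quick calculator:

```python
import math

def sample_size(d, a):
    """Rough per-variant sample size from the rule quoted above:
    -log(d/2) / (2 * e * a^2), with d = desired significance and
    a = difference in conversion rates you hope to detect."""
    return math.ceil(-math.log(d / 2) / (2 * math.e * a ** 2))

# To detect a 1-percentage-point difference (a = 0.01)
# at a 5% significance level (d = 0.05):
print(sample_size(0.05, 0.01))  # → 6786
```

Note how the required sample grows with the inverse square of the effect: halving the detectable difference quadruples the visitors you need, which is why tiny lifts take so long to confirm.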

I read that and I have no idea what those values are. Is there an example of this with actual real-life data?

He gave you a reference. Google "statistical sample size". There are several calculators in the results that you can play around with.

Everybody should take a college level course in statistics, even poets and painters.

At a certain point, bigger won't be better, right? What's the maximum effective size? When an image is so big it just takes over your screen?

At some point, it's less about the size than about the quality of the picture. This reminds me of OkCupid (the dating site). They did an analysis of picture quality (and camera used) to measure the effects on click rates and messages. See http://blog.okcupid.com/index.php/dont-be-ugly-by-accident/

Looking at how some really great sites are doing, there are sites with massive pictures, but if they're not high quality, the site ends up looking cheap.

I'd point out that the call to action is also much larger and visible in that Dell case study. The control for that one is pretty poor.

Has anyone found a source for any publicly released raw data on this sort of thing? Something one could re-sift and discuss?

When I saw this I thought for sure it was an april fools joke, but then I saw that it was posted on March 21st.

Does anyone know if there's a daily/weekly newsletter email with conversion tips?

Don't know if it's regular, but this looks like it might have some good stuff: http://www.conversion-rate-experts.com/learning-zone/

Is that site for real?

There's only one sentence on that page and, unless there's a new term with which I'm not familiar, it has one heck of a credibility-destroying typo:

"Get our converison [sic] tips newsletter starting now:"

Does HN filter out April Fools' pranks?
