
The effect of typefaces on credibility - gamzer
http://opinionator.blogs.nytimes.com/2012/08/08/hear-all-ye-people-hearken-o-earth/
======
aw3c2
Those column graphs were very misleading to me. Here is an adjusted image
showing the full axes, for an easier grasp of the real proportions:
<http://i.imgur.com/QS8PA.jpg> (I am not 100% sure my math is correct, but a
quick calculation in my head says the dimensions seem right.)

~~~
asolove
Thank you for this. There's a reason why any formal discussion of graphics
mentions that the numerical axis of bar graphs must start at 0.

[0] <http://had.co.nz/ggplot2/geom_bar.html> [1] [http://www.b-eye-
network.com/view/index.php?cid=2468&fc=...](http://www.b-eye-
network.com/view/index.php?cid=2468&fc=0&frss=1&ua)
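
To illustrate why the zero baseline matters, here's a quick sketch (pure
Python, with made-up counts) of how a truncated axis inflates the apparent
difference between two bars:

```python
# How a truncated axis exaggerates a difference: the height of a bar as
# drawn is (value - axis_start), so the apparent ratio between two bars
# depends entirely on where the axis starts. Counts here are made up.
def apparent_ratio(a, b, axis_start=0):
    """Ratio of drawn bar heights when the y-axis starts at axis_start."""
    return (a - axis_start) / (b - axis_start)

print(apparent_ratio(7600, 7500))        # axis at 0: bars nearly equal
print(apparent_ratio(7600, 7500, 7400))  # axis at 7400: one bar looks 2x taller
```

Same data, wildly different visual impression.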

------
waratuman
I'm not that great of a writer. My papers were usually in the B range and I
almost never got an A. Then I started using LaTeX and now my papers are used
as examples in classes.

~~~
cpeterso
+1

I'm not sure why people like the Computer Modern typeface so much. I think it
looks terrible. :\

~~~
chipotle_coyote
I used to dislike CM when it was printed on laser printers, but discovered
that when it was printed on a Linotype--the _Art of Computer Programming_
books, for instance--it seemed to take on an entirely different character. It
looks clean and pretty, rather than spindly.

Clearly that's subjective, and you might hate it no matter the output device.
:)

------
anusinha
An empirical anecdote: when I was in high school, all of my lab reports for
chemistry and physics were typeset in LaTeX while most of my friends either
handwrote the mathematics or used MS Word's Equation Editor. There were
multiple occasions where a friend and I made the same mistake (we usually
worked together; yes, we cited each other) and the deduction on my report was
less than the deduction on his. It wasn't huge, usually -1 point vs -2, but
there was consistently a difference.

~~~
jedberg
I've been using LaTeX for my resume[1] for years, and I always get compliments
on how "professional" it looks.

You can see the resume and source code here if anyone is interested:
[1]<http://www.jedberg.net/hire_jeremy_edberg.html>

~~~
tikhonj
Not only does LaTeX make your resume look better, it also makes it easier to
manage. I have a whole bunch of sections for things like different projects,
experience, awards, education and so on. I can easily switch any of them out
in any particular resume I want to print because they're all in separate
files. The actual resume itself is just a bunch of includes, which means I can
easily have multiple different permutations that are kept in sync whenever I
update the appropriate sections.

Of course, this hasn't helped me at all because I haven't updated my resume in
a year and a half. This might not seem too bad, but I was actually in the
middle of freshman year last time I touched it :P.

~~~
pjscott
Oddly enough, your resume's source code is probably a more effective resume
than any output it could possibly generate.

~~~
tikhonj
Haha, you assume my code follows good TeX style and conventions. If the only
code sample I ever saw by myself was that, even I wouldn't hire myself. (That
was a confusing sentence to write :P.)

If you can imagine a site with IE-specific code and a layout that's half
specified with absolute positioning and half with &nbsp;, you wouldn't be that
far off from what my resume code looks like. It's essentially held together
with metaphorical duct tape and \vspace{}.

It's like sausage--tastes great, but never visit a sausage factory :).

------
JoelSutherland
When the survey initially came out I was randomly given Computer Modern. The
day before, I had painstakingly converted Computer Modern to a webfont for
a friend's blog (<http://www.krisjordan.com>) and I was shocked to see it on
nytimes.com.

I quickly went over to a different (Windows) machine to try it out because I
couldn't believe my eyes. That one was given Georgia, so I mistakenly assumed
that Errol Morris was such a type hipster that he included Computer Modern in
his type stack if it was installed locally. It was pretty funny to see this
today.

One thing I will say is that the Computer Modern webfont they used is a
disaster. It has tons of aliasing issues. I wonder how they sourced it, since
natively it isn't in a normal font format. (Knuth!) That would certainly skew
the results.

~~~
kragen
So if you want to use cmr, cmi, and friends, on your web site, what's the best
way to do it? I see that you seem to have put them in a few different formats
in /fonts/ and even stuck a base64 version into the stylesheet itself. I'd be
delighted to hear more about the pains you had to take and why!

~~~
JoelSutherland
I started with the OTF files here:

<http://canopus.iacp.dvo.ru/~panov/cm-unicode/>

Then I messed around a bunch with the advanced options on the Font Squirrel
@font-face generator:

<http://www.fontsquirrel.com/fontface/generator>

I don't remember the exact settings I used, but it needed a decent amount of
tweaking to end up looking acceptable.

The base64 encoding doesn't contribute to anything other than website
performance. It saves an additional request in modern browsers. The other
linked fonts are for IE and older browsers.
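
If anyone wants to do the same, generating the base64 data URI is simple; a
rough Python sketch (the font filename and family name are just examples):

```python
import base64

def font_face_css(path, family):
    """Emit a CSS @font-face rule with the font file inlined as a base64
    data URI, saving the browser one extra request for the font."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return (
        "@font-face {\n"
        f"  font-family: '{family}';\n"
        f"  src: url(data:font/woff;base64,{b64}) format('woff');\n"
        "}\n"
    )

# e.g. css = font_face_css("cmunrm.woff", "Computer Modern")
```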

~~~
kragen
Cool, thanks! What kind of aesthetic problems did you have? Poor hinting,
lousy kerning?

------
pierrefar
How did they control for whether the fonts are actually installed on the
participants' computers or not?

Also, did they control for desktops vs smartphones vs tablets? It's reasonable
to hypothesize that the device's screen (and zoom level on mobiles) affects
typeface rendering and its perception.

All in all, interesting and worthy of more work, but I want more evidence
before I believe the result.

~~~
dazbradbury
> How did they control for whether the fonts are actually installed on the
> participants' computers or not?

They could have used @font-face and distributed the fonts with the page, but
to be safer, they could have just created six different images and distributed
the passages as PNGs.

I have no idea whether this was the approach, but if they do it in the future,
it's certainly something to think about. Simply swapping the order of the
font-family CSS line isn't going to cut it.

I look forward to this being researched more. Who thinks Google/Facebook have
done this A/B test already? Maybe they could release the results!

~~~
CrazedGeek
> They could have used @font-face and distributed the fonts with the page,

Bingo. The quiz includes this CSS[0] and JS[1].

[0] <http://dl.dropbox.com/u/2891540/Fonts/fonts.css> [1]
[http://graphics8.nytimes.com/packages/js/multimedia/bundles/...](http://graphics8.nytimes.com/packages/js/multimedia/bundles/20111122_errollfonts.js)

------
thebigshane
I really hate those "Weighted Agreement/Disagreement" charts.

For Weighted Agreement, the chart makes it look like Comic Sans had a far
lower agreement rate (around 60% lower), but Comic Sans's agreement rate was
only 4.5% lower than Baskerville's, _including_ their weighting system.

For Weighted Disagreement, Georgia had only a 7.7% higher disagreement rate
than Baskerville, whereas the chart makes it look like more than double.

Still interesting, but not _nearly_ as substantial as they make it out to be.
Is there a term for this type of manipulation of charts (whether intentional
or not)?

EDIT: Indeed, the term for this is "Truncated graph"
[http://en.wikipedia.org/wiki/Misleading_graph#Truncated_grap...](http://en.wikipedia.org/wiki/Misleading_graph#Truncated_graph)

And as a bonus (thanks wikipedia!), according to Edward Tufte's "Lie
Factor"[0] (where 1 is considered accurate), the Weighted Agreement chart has
a lie factor of ~15 and the Weighted Disagreement chart has a lie factor of
~17.

[0]:
[http://thedoublethink.com/2009/08/tufte%E2%80%99s-principles...](http://thedoublethink.com/2009/08/tufte%E2%80%99s-principles-
for-visualizing-quantitative-information/)
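
The Lie Factor is easy to compute yourself; a quick sketch in Python, using my
own approximate readings of the charts (so it lands in the ballpark of, rather
than exactly on, the ~15 figure above):

```python
# Tufte's Lie Factor: (size of effect shown in graphic) / (size of effect
# in the data). A factor of 1 is an honest chart; much above 1 exaggerates.
def lie_factor(graphic_effect_pct, data_effect_pct):
    return graphic_effect_pct / data_effect_pct

# Rough readings from the Weighted Agreement chart: Comic Sans's bar looks
# about 60% shorter than Baskerville's, while the data gap is only ~4.5%.
print(round(lie_factor(60, 4.5), 1))  # 13.3 -- same ballpark as the ~15 above
```

The exact value depends on how precisely you measure the bars, but it's far
above 1 either way.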

------
jere
>Georgia is enough like Times to retain its academic feel, and is different
enough to be something of a relief for the grader.

I've thought for years Georgia was a great choice on a resume/paper.

a) You want to stand out

b) You also don't want to appear too "starchy"

~~~
kadavy
Actually, while Georgia is a good font overall, its design is optimized for
the screen. Because of this, it has some peculiarities, such as a very large
x-height (so the pixels in the counterforms can be read).

Here's a little more about what I'm talking about:
[http://www.kadavy.net/blog/posts/design-for-hackers-why-
you-...](http://www.kadavy.net/blog/posts/design-for-hackers-why-you-dont-use-
garamond-on-the-web/)

~~~
jere
Good points. Verdana is one of my favorites, but I believe it suffers from the
same issues, because it was also designed to be legible on screen.

<http://en.wikipedia.org/wiki/Verdana#Usage>

~~~
kadavy
Yeah, Verdana is fantastic on screen (this was especially so before anti-
aliasing was standard), but looks awful in print.

EDIT: also, didn't know about IKEA's Verdanagate (from your link above).
Fascinating!

------
rubergly
> Baskerville seems to be the king of fonts. What I did is I pushed and pulled
> at the data and threw nasty criteria at it. But it is clear in the data that
> Baskerville is different from the other fonts in terms of the response it is
> soliciting.

No amount of 'pushing' and 'pulling' at data can compensate for a poorly
designed experiment. Georgia can't be used as both the control and a measure
of how effective Georgia is—clearly fonts that stood out from the rest of the
page would have a different effect than the one that looks exactly like the
rest of the page. To give any of this credence, the sample should have stood
alone, or the typeface of the surrounding page should have been randomized as
well. What we're looking at here is "Are there certain typefaces that compel a
belief that the sentences they are written in are true when contrasted with
Georgia?".

------
sgoranson
Generally I detest font geeks, but I'm going to defend them here. First off:
yes, it's true that many intellectuals, designers, and hipsters have a genuine
prejudice against Comic Sans. Facts need to work a little harder to prove
themselves when written in that font. But in a world where we're deluged with
typed information from the second we glance at our alarm clocks, I think it's
okay to have a little prejudice, because we need to filter out at least some
of the noise. Like most stereotypes, the Comic Sans prejudice is based on a
grain (beach?) of truth. Can anyone really claim that the percentage of
trustworthy Comic Sans webpages they've seen in their life equals the
percentage of trustworthy Georgia pages? Sorry. Geocities happened, people,
and I, for one, will never forget it.

------
lubujackson
This misses the most obvious difference to users, which is that the font
changes in the middle of the article. Taken in conjunction with all the other
fonts on the page, the harmony of the specific font with the other fonts in
the article and on the page is probably the most important factor here.

------
Danieru
I would like to ask Patio11 if he has ever done font A/B tests. I'm working on
my sales website and the results would be very welcome. In my case I lack the
traffic to do any proper testing.

~~~
patio11
I've never done an A/B test on typeface, specifically, but I've made a few
over the years on e.g. font sizes. No major results to report. My intuition is
that I'm extraordinarily skeptical that this would matter and largely think
the only people capable of perceiving strong differences in the character of a
typeface are people who do that professionally plus Thomas. I'm prepared to be
wrong about that, though, as six years ago I would have told you that colors
could not possibly meaningfully impact conversions and that's just
catastrophically wrong.

------
brendano
Following up on blahedo's comment and the questions about what the heck their
p-values mean --

This is a nice example that you can get statistical significance for small
effects, if your sample is big enough. Their p-values are explained very
badly, so I did my own analysis by transcribing their data from those plots.
Let's take their weighting scheme for granted. I agree with some other
commenters that the sums and counts are misleading, and instead took average
scores per font, and computed confidence intervals for those means. The means
are indeed a little different, and for some pairs, statistically significantly
so.

[http://brenocon.com/Screen%20shot%202012-08-10%20at%202.03.0...](http://brenocon.com/Screen%20shot%202012-08-10%20at%202.03.07%20AM.png)

But does it matter much? Take the pair with the largest gap, Baskerville vs.
Comic Sans: 0.95 versus 0.79, a difference of 0.16. This is on a 10-point
scale (ranging from -5 to +5).

In fact, the standard deviation for the entire dataset is 3.6 -- so just 0.05
standard deviations worth of difference.

Or here's another way to think about it. If a person got the Comic Sans
example rather than the Baskerville example, how often would they have scored
higher? (This ignores the weightings; it's a purely ordinal comparison. I
think this is related to the Wilcoxon-Mann-Whitney test statistic or
something, I forget.) So with independence assumptions (which proper
randomization should hopefully give us), just independently sample from the
distributions many times and compare pairs of simulated outcomes: 22% of the
time it's a tie, 40.3% of the time Baskerville scores higher, and 37.8% of the
time Comic Sans scores higher. I guess then it sounds like the difference is
better than nothing.
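
The sampling procedure looks roughly like this in Python (the score lists
below are placeholder values for illustration, not the transcribed data):

```python
import random

def compare(scores_a, scores_b, n=100_000, seed=42):
    """Estimate P(A > B), P(tie), P(B > A) by repeatedly sampling one
    score from each font's distribution and comparing the pair."""
    rng = random.Random(seed)
    a_wins = ties = 0
    for _ in range(n):
        a, b = rng.choice(scores_a), rng.choice(scores_b)
        if a > b:
            a_wins += 1
        elif a == b:
            ties += 1
    b_wins = n - a_wins - ties
    return a_wins / n, ties / n, b_wins / n

# Placeholder agreement scores on the -5..+5 scale, NOT the real data:
baskerville = [5, 3, 3, 1, 0, -1, -3]
comic_sans = [5, 3, 1, 0, 0, -1, -5]
# compare(baskerville, comic_sans) -> (P(Baskerville higher), P(tie), P(CS higher))
```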

Not sure what's a good and fair way to think about the substantive size of the
effect. I wanted to take the quantile positions of the means, but realized you
can't exactly do that with ordinal data like this (zillions of values share
the same quantile value).

I probably missed something, so here's the transcribed data and R/Python code
probably with errors: <https://gist.github.com/3311340>

Now that I'm thinking about it more, averaging the agreement scores seems
weird. Maybe it's clearer to use the simple binary agree/disagree outcome.

------
ryanricard
I don't know about typefaces, but taking a screenshot of rendered black-on-
white text and saving as .jpg sure has an effect on credibility.

------
eliasmacpherson
I would like to see it controlled for age group; I remember liking Comic Sans
as a child. Would Comic Sans have the same effect on children that it seems to
have on adults?

------
wisty
I'll take the other side here:

There are two axes: engagement and authority. Baskerville is not engaging, but
it looks authoritative. So you tend to agree, even if you don't know what it
says (like a boring professor or politician). Comic Sans is like a boring
person in a clown suit - you can't follow what it's saying, and you tend to
disagree just because it looks a little stupid.

The more respectable sans serifs are engaging, but not authoritative; Times is
both engaging and authoritative.

If you read something in Baskerville, you agree because it looks so boring
that you can't be bothered reading it. Georgia, on the other hand, encourages
both strong agreement and strong disagreement - people take it seriously, but
actually pay attention. No-one takes Comic Sans seriously, because it's hard
to read _and_ looks stupid.

------
SaulOfTheJungle
For those who don't have Baskerville: <http://klepas.org/openbaskerville/> or
<http://openfontlibrary.org/font/open-baskerville/>

------
wwweston
"It's going to work! I'm using a very convincing font; it's bold, and has a
lot of serifs."

<http://www.youtube.com/watch?v=APcuJjCZTMU#t=4m07s>

------
tolos
I find it odd that Comic Sans and Georgia change places in the weighted
totals.

Now I'm going to petition Randall Munroe to gather more data (thinking of the
color survey).

------
blahedo
There are a lot of problems here.

The bar charts used to illustrate that article are terrible. They present raw
counts for each font, but each font was not presented to the same number of
people---they varied from 7,477 (CM) to 7,699 (Helvetica), which is a pretty
big swing given the other numbers they're displaying. In fact, when you run
the percentages, CM has a higher percentage of agreement than Baskerville
(62.6% to 62.4%)!

When we turn to the "weighted" scores, which don't follow any clear
statistical methodology that I'm aware of, the bar chart is again presented
with counts rather than proportions, and this time with an egregiously
misleading scale that makes it seem like CS gets half the score of gravitas-y
fonts like CM and Baskerville, when in fact its score is only about 5% lower.

Finally we get to the "p-value for each font". That's... not how p-values
work. The author admits that his next statement is "grossly oversimplified",
but there's a difference between simplification and nonsense. He says that
"the p-value for Baskerville is 0.0068." What does that mean? What test was
being performed there? Can we have a little hint as to what the null and
alternative hypotheses were?

~~~
nhebb
The biggest problem I see is that Baskerville isn't standard on Windows. They
may have specified a Baskerville font-family, but that's not necessarily what
the reader saw. The original test article displayed the asteroid passage as
text, not as an image, so unless they accounted for the rendering differences
among OS's, the entire test seems questionable.

I'm on a Vista system at the moment, and it does have a Baskerville Old Face
variant, but "Gold has an atomic number of 79" does not look like the text
shown in the article.

~~~
colanderman
Computer Modern is not standard on any system either. Presumably they used web
fonts.

~~~
nhebb
You're probably right about the web fonts, but there is still the issue that
the same font will look different on Mac and Windows. Heck, on my system
there's a noticeable difference between browsers.

~~~
atworkatwork
I know Firefox renders web fonts (at least Ubuntu from the Google Web Fonts
API) in an awful manner. WebKit does it perfectly.

------
jakeonthemove
Interesting read.

I certainly agree that Comic Sans nudges me towards disbelief (and I'd never
read a full article written in this horrible font :-)), while Georgia seems
more 'professional' and believable.

Baskerville in my mind is instantly associated with all the books I read -
most of those on scientific topics had this or a very similar font. Don't know
whether it affects my judgement of what's written compared to any other normal
fonts.

Typewriter-style fonts do make texts seem older and therefore more believable
(since they've been around for so long, there must be some truth to them - the
standard logical reasoning).

------
vorg
So font has an effect on how seriously readers take what's written in it. The
names of the fonts alone (i.e. Helvetica, Georgia, and Comic Sans) also give
off the same vibes.

I wonder... Do the names of programming languages have an effect on how
seriously people want to read what's written in them? If given 3 names (e.g.
Python, Ruby, Groovy), do people subconsciously rank their seriousness?

------
deadmike
I took a class that was very writing-intensive. I was one of the only people
to ever actually change the font in my essays from Calibri to Times New Roman,
and always wondered if this contributed to the fact that I did substantially
better than most other people with very comparable essays.

------
christofd
Regarding Comic Sans use at CERN... I've often found that scientists like
hideous designs. It's a way of saying that we work on serious stuff.

A lot of sites at MIT and CMU have that mark... the more prestigious, the
uglier.

Of course, it has to be a certain style of ugly.

~~~
captaintacos
I'd say not just scientists at CERN. This applies to most (nah... ALL)
scientists: researchers, PhDs, professors, etc. They use very hideous design,
or more accurately, no design at all, on their research webpages. And I am
talking about a Geocities-esque type of "design" here. Sometimes I even wonder
if there is an implicit rule along the lines of "real researchers must PRETEND
they just learned HTML".

------
mmcnickle
Our lecturer gave us notes on particle physics in Comic Sans. I found them
impossible to study from.

Edit: incidentally, she works at CERN on the ATLAS experiment too.

------
cmancini
The ironic thing about this article is that it encodes the text images as
JPEG. I wonder about image encoding's effect on credibility.

~~~
pjscott
This article does, but the experimental article itself embedded the fonts so
they would render properly.

------
sirtophat
Honestly, the font had nothing to do with my decision - I just trust NASA.

------
scoith
As a scientist, I wouldn't worry about the lowered opinion of someone who
judges a scientific text by its font.

