The effect of typefaces on credibility (opinionator.blogs.nytimes.com)
252 points by gamzer on Aug 9, 2012 | 93 comments



Those column graphs were very misleading to me. Here is an adjusted image showing the full scale for an easier grasp of the proportions: http://i.imgur.com/QS8PA.jpg (I am not 100% sure my math is correct, but a quick calculation in my head says the dimensions seem right.)


The followup to this article will be about how unlabeled graphs convince people of dubious findings.


Thank you for this. There's a reason why any formal discussion of graphics mentions that the numerical axis of bar graphs must start at 0.

[0] http://had.co.nz/ggplot2/geom_bar.html [1] http://www.b-eye-network.com/view/index.php?cid=2468&fc=...
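
If anyone wants to see the truncation effect for themselves, here's a minimal matplotlib sketch with made-up numbers (nothing to do with the article's data): the same three bars plotted once with a truncated axis and once with the baseline at zero.

    # Minimal sketch: how a truncated y-axis exaggerates small differences.
    # The values below are hypothetical, not from the article.
    import matplotlib.pyplot as plt

    fonts = ["Baskerville", "Georgia", "Comic Sans"]
    scores = [10500, 10100, 9900]  # made-up "weighted agreement" totals

    fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

    ax_trunc.bar(fonts, scores)
    ax_trunc.set_ylim(9800, 10600)   # truncated axis: differences look huge
    ax_trunc.set_title("Truncated axis")

    ax_zero.bar(fonts, scores)
    ax_zero.set_ylim(0, 11000)       # baseline at 0: differences look modest
    ax_zero.set_title("Axis starting at 0")

    plt.tight_layout()
    plt.show()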


I'm not that great of a writer. My papers were usually in the B range and I almost never got an A. Then I started using LaTeX and now my papers are used as examples in classes.


When I was in Uni I had a similar experience. I never collected enough data to really show a conclusive relationship, and obviously it was possible my writing improved around the same time that I switched... but it was still something that I noticed.

One of my theories was that professors expected rushed papers to have poor or missing formatting, but my rushed papers wouldn't. The formatting was therefore sending a signal to the professor saying "He didn't write this in the morning, so give him the benefit of the doubt." I never considered that the typeface that I was using could have been involved.


In the case of (La)TeX I'd actually also be curious whether, e.g., the rules governing hyphenation and justification make any difference as well, not just the typeface.


I'm pretty sure they do. Justified text looks more "professional" than ragged-right text. The problem is that it can create rivers without proper hyphenation. TeX is pretty good at this stuff, and that's one reason why text set with TeX looks so good. I wish browsers would catch up on this.


I'm pretty sure the grader for my algorithms class just assumed that anything typeset in LaTeX must be correct, as I routinely got full credit for completely incorrect things, while classmates not using LaTeX got dinged for every minor mistake.


I was a TA for some computer classes at Stanford a little while ago. Part of that meant that we would spend many hours grading exams at the end of the term. Every time I wondered to myself how much student handwriting affected their grades. How could a student write almost entirely illegibly and expect to get a good grade on the exam? Sometimes I couldn't find the correct answer even if it was there on the page.

Other students had very clear handwriting, while still others had crisp and confident handwriting. I'm sure there are numerous subtle cues in handwriting that give it an air of professionalism or confidence.

Handwriting differences could easily account for a few points on an exam, which might translate into a few crucial percentage points.

Soon we'll just have to force everyone to type their answers in a standard format and then the graders can view the results in the editor of their choice.


I wonder if that has ever affected my grades. I have truly horrendous, mostly irregular handwriting but it is readable to most people. I can, however, write in calligraphic styles but it isn't automatic.

Does handwriting say anything more about a person than that they spent a thousand hours or so in their early schooling repeating symbols on paper?


Perhaps people who give attention to readable handwriting also give attention to making themselves more presentable in person.


LaTeX commands so much authority over people who have never used it; things typeset with it carry incredible gravitas! I just now realized that must be why I was drawn to it. You can say the same thing and sound/look smarter :)


+1

I'm not sure why people like the Computer Modern typeface so much. I think it looks terrible. :\


I used to dislike CM when it was printed on laser printers, but discovered that when it was printed on a Linotype--the Art of Computer Programming books, for instance--it seemed to take on an entirely different character. It looks clean and pretty, rather than spindly.

Clearly that's subjective, and you might hate it no matter the output device. :)


There are nicer typefaces that come with most LaTeX distributions - I particularly like Charter BT [1].

Also, with XeTeX/XeLaTeX, you can use OpenType typefaces, and it still typesets them far better than a word processor.

1. http://www.tug.dk/FontCatalogue/charter/


I'm a Scottish teenager. Perhaps the only possible reason I managed to scrape a 2 (a 2 or 1 is high enough to take the Higher-level course the year after) in my Standard Grade English was because all my work was in Georgia.


An empirical anecdote: when I was in high school, all of my lab reports for chemistry and physics were typeset in LaTeX while most of my friends either handwrote the mathematics or used MS Word's Equation Editor. There were multiple occasions where a friend and I made the same mistake (we usually worked together; yes, we cited each other) and the deduction on my report was less than the deduction on his. It wasn't huge, usually -1 point vs -2, but there was consistently a difference.


I've been using LaTeX for my resume[1] for years, and I always get compliments on how "professional" it looks.

You can see the resume and source code here if anyone is interested: [1]http://www.jedberg.net/hire_jeremy_edberg.html


Not only does LaTeX make your resume look better, it also makes it easier to manage. I have a whole bunch of sections for things like different projects, experience, awards, education and so on. I can easily switch any of them out in any particular resume I want to print because they're all in separate files. The actual resume itself is just a bunch of includes, which means I can easily have multiple different permutations that are kept in sync whenever I update the appropriate sections.

Of course, this hasn't helped me at all because I haven't updated my resume in a year and a half. This might not seem too bad, but I was actually in the middle of freshman year last time I touched it :P.


Yeah, that is another advantage. I use a combo of commenting things out and storing all the changes in git, so I can easily pull back old stuff if I need to for some reason.

Git and LaTeX play nicely together.


Heh, when I TeX-ified my resume, I used your code as a template. Just thought you might like the kudos :)


That's why I published the source. Because it took me hours to find all the docs for all the different parts and put it all together. I figured I could help others out. I'm glad I was right!


Oddly enough, your resume's source code is probably a more effective resume than any output it could possibly generate.


Haha, you assume my code follows good TeX style and conventions. If the only code sample I ever saw by myself was that, even I wouldn't hire myself. (That was a confusing sentence to write :P.)

If you can imagine a site with IE-specific code and a layout that's half specified with absolute positioning and half with &nbsp;, you wouldn't be that far off from what my resume code looks like. It's essentially held together with metaphorical duct tape and \vspace{}.

It's like sausage--tastes great, but never visit a sausage factory :).


Quite possibly. Certainly doesn't hurt!


I think it would be nice if you expanded the look of your resume to your webpage. Your resume looks impressive, your landing page, much less so.

I like the look of consistency.


Thanks for the feedback. I've been meaning to update the website, but it's pretty low on the priority list.


Maybe it's my browser on this ancient laptop, or maybe it's my ancient eyes, but the two 2011s in the start/end dates in your reddit position don't look identical to me. It looks like there's more horizontal space between the 1s in the second line than in the first, and in both cases, it looks like the first 1 is closer to the zero than the second 1 is to the first 1.

I wouldn't make a hire/no-hire decision based on something as insignificant as that, but in the context of you holding it out as an example of a well-formatted resume, it kind of grabbed my attention right away and wouldn't let go.

It is a great resume, though, and I might steal a few of your layout ideas for mine, next time I update it, if you don't mind.


Maybe those teachers have preferential treatment towards LaTeX users? Heh. Someone should test this out by sending resumes to companies and seeing which ones net initial interviews.

And what about the type of parchment one uses? There are stationery conventions for that sort of jazz: http://www.nationalstationeryshow.com/


American Psycho - Business Card scene:

(http://youtu.be/-U3SXzbYLOA)


When the survey initially came out I was randomly given Computer Modern. The day before, I had painstakingly converted Computer Modern to a webfont for a friend's blog (http://www.krisjordan.com), and I was shocked to see it on nytimes.com.

I quickly went over to a different (Windows) machine to try it out because I couldn't believe my eyes. That one was given Georgia, so I mistakenly assumed that Errol Morris was such a type hipster that he included Computer Modern in his type stack if it was installed locally. It was pretty funny to see this today.

One thing I will say is that the Computer Modern webfont they used is a disaster. It had tons of aliasing issues. I wonder how they sourced it, since natively it isn't in a normal font format. (Knuth!) That could certainly skew the results.


So if you want to use cmr, cmi, and friends, on your web site, what's the best way to do it? I see that you seem to have put them in a few different formats in /fonts/ and even stuck a base64 version into the stylesheet itself. I'd be delighted to hear more about the pains you had to take and why!


I started with the OTF files here:

http://canopus.iacp.dvo.ru/~panov/cm-unicode/

Then I messed around a bunch with the advanced options on the Font Squirrel @font-face generator:

http://www.fontsquirrel.com/fontface/generator

I don't remember the exact settings I used, but it needed a decent amount of tweaking to end up looking acceptable.

The base64 encoding doesn't contribute to anything other than website performance. It saves an additional request in modern browsers. The other linked fonts are for IE/Older browsers.


Cool, thanks! What kind of aesthetic problems did you have? Poor hinting, lousy kerning?


I don't know if you're aware of this, but just in case: http://www.gust.org.pl/projects/e-foundry/latin-modern/


> normal font format

METAFONT predates TrueType by more than a decade.


'Normal', not 'old'. The relevant stat for 'normal' is install base, or frequency of use.


How did they control for whether the fonts are actually installed on the participants' computers or not?

Also, did they control for desktops vs smartphones vs tablets? It's reasonable to hypothesize that the device's screen (and zoom level on mobiles) affects typeface rendering and its perception.

All in all, interesting and worthy of more work, but I'd want more evidence before believing the result.


> How did they control for whether the fonts are actually installed on the participants' computers or not?

They could have used @font-face and distributed the fonts with the page, but to be safer, they could have just created 6 different images and distributed the passages as PNGs.

I have no idea whether this was the approach - but if they do it in the future, this is certainly something to think about. Simply swapping the order of the font-family css line isn't going to cut it.

I look forward to this being researched more - Who thinks Google/Facebook have done this A/B test already? Maybe they could release the results!


> They could have used @font-face and distributed the fonts with the page,

Bingo. These CSS[0] and JS[1] files are in the quiz.

[0] http://dl.dropbox.com/u/2891540/Fonts/fonts.css [1] http://graphics8.nytimes.com/packages/js/multimedia/bundles/...


Presumably the variation in devices would be observed across all the fonts and therefore average out. I.e. there is no reason why people who got font A would disproportionately be using device X.

You make a good point about the possibility that people did not actually see the passage in the intended font.


Something like Typekit would have handled font delivery on the vast majority of participants' computers.


I really hate those "Weighted Agreement/Disagreement" charts.

For Weighted Agreement, the chart makes it look like Comic Sans had a far lower agreement rate (roughly 60% lower), but Comic Sans had only a 4.5% lower agreement rate than Baskerville, even with their weighting system.

For Weighted Disagreement, Georgia had only a 7.7% increase in disagreement over Baskerville, whereas the chart makes it look like more than double.

Still interesting, but not nearly as substantial as they make it out to be. Is there a term for this type of manipulation of charts (whether intentional or not)?

EDIT: Indeed, the term for this is "Truncated graph" http://en.wikipedia.org/wiki/Misleading_graph#Truncated_grap...

And as a bonus (thanks wikipedia!), according to Edward Tufte's "Lie Factor"[0] (where 1 is considered accurate), the Weighted Agreement chart has a lie factor of ~15 and the Weighted Disagreement chart has a lie factor of ~17.

[0]: http://thedoublethink.com/2009/08/tufte%E2%80%99s-principles...
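
For the curious, Tufte's lie factor is just (size of effect shown in the graphic) divided by (size of effect in the data). A back-of-the-envelope check using the rough percentages I eyeballed above (these are not exact measurements of the chart, so it won't match the ~15 and ~17 figures precisely):

    # Rough lie-factor check using the eyeballed percentages above,
    # not exact measurements of the chart.
    def lie_factor(shown_change_pct, actual_change_pct):
        """Tufte: effect size shown in graphic / effect size in data."""
        return shown_change_pct / actual_change_pct

    # Weighted Agreement: Comic Sans looks ~60% lower than Baskerville,
    # but is only ~4.5% lower.
    print(lie_factor(60, 4.5))   # ~13

    # Weighted Disagreement: Georgia looks more than double Baskerville
    # (~100% apparent increase) vs an actual 7.7% increase.
    print(lie_factor(100, 7.7))  # ~13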


>Georgia is enough like Times to retain its academic feel, and is different enough to be something of a relief for the grader.

I've thought for years Georgia was a great choice on a resume/paper.

a) You want to stand out

b) You also don't want to appear too "starchy"


Actually, while Georgia is a good font overall, the design of Georgia is optimized for the screen. Because of this, it has some peculiarities, such as a very large x-height (so the pixels in the counterforms can be read).

Here's a little more about what I'm talking about: http://www.kadavy.net/blog/posts/design-for-hackers-why-you-...


Good points. Verdana is one of my favorites, but I believe it suffers from the same issues because it was also designed for being legible on screen.

http://en.wikipedia.org/wiki/Verdana#Usage


Yeah, Verdana is fantastic on screen (this was especially so before anti-aliasing was standard), but looks awful in print.

EDIT: also, didn't know about IKEA's Verdanagate (from your link above). Fascinating!


Georgia is a great choice for almost anything except lists of numerals.


Yea because the numerals are offset randomly. I happen to like that for a resume.


It's not random; they're basically numerals that fit with lowercase characters. The idea is to integrate them into the text flow. Numerals that are all at the height of upper case would LOOK LIKE THIS IN TEXT ;-)

Not sure about Georgia, but most larger font families have various versions of numerals: for tables (where you want them all the same height and a monospaced width) and for text (as in Georgia). In OpenType print fonts you can select those types of numerals by turning certain 'features' of the font on or off (the glyphs are exchanged without changing the text itself). That should be possible soon in webfonts too; I think Firefox already supports it, and IE 10 will follow.


Jeez, I knew someone was going to call me on "random." Each numeral is vertically offset. The offset of each seems to have no rhyme or reason... thus "random" (3,4,5,7,9 are lowered; 6,8 are raised). Apparently, however, it is a common pattern: http://en.wikipedia.org/wiki/Text_figures

>they're basically numerals fitting lowercase characters

Nope. I think you may be forgetting what Georgia looks like: http://www.identifont.com/samples/microsoft/Georgia.gif

>Numerals that are all at the height of upper case would LOOK LIKE THIS IN TEXT ;-)

I disagree. Lots of fonts have numerals that are the same size/location as the rest of the text. http://desktoppub.about.com/od/glossary/g/Lining-Figures.htm


> Lots of fonts have numerals that are the same size/location as the rest of the text.

Actually there isn't a 'rest of the text' that is homogeneous, unless it's completely uppercase. In normal body copy it's a rhythm of ascending and descending letters, and that is what those numerals are meant to fit.

The meaning of 'lining figures' is numerals that are spaced proportionally (the opposite of their tabular version). They are a good option in technical texts (such as those typically set with LaTeX) where you want to integrate numerals but still have them stand out. They are also the default numerals in many digital fonts, even though the 3 other options exist as well (it's a 2x2 system of proportional vs monospaced, and text numerals vs lining numerals).


> The offset of each seems to have no rhyme or reason

As others point out, those are lowercase numbers, adjusted to look better alongside lowercase text. The rhyme and reason is in how the digits fit within x-height of the font. If you look here: http://en.wikipedia.org/wiki/File:Mediaevalziffern.svg, notice how the dominant "circular" part of each digit fits within the x-height. The digits 3, 5, 7 and 9 don't have a dominant circular part, so their lowest arc is considered a descender, similar to how a 'g' or a 'j' is rendered.


It's more than a common pattern: they are lowercase numerals, even if they don't get used much. Although I see what you meant by random, in the sense that lowercase 'g' "randomly" descends while lowercase 'l' "randomly" ascends.


I saw Microsoft's OpenType options demo for IE 10. I was rather impressed. (I believe it's on IE TestDrive somewhere)



I agree that the numeral offset is beautiful and interesting and perfectly appropriate for the numbers you'll see in a resume.


> Baskerville seems to be the king of fonts. What I did is I pushed and pulled at the data and threw nasty criteria at it. But it is clear in the data that Baskerville is different from the other fonts in terms of the response it is soliciting.

No amount of 'pushing' and 'pulling' at data can compensate for a poorly designed experiment. Georgia can't be used as both the control and a measure of how effective Georgia is—clearly fonts that stood out from the rest of the page would have a different effect than the one that looks exactly like the rest of the page. To give any of this credence, the sample should have stood alone, or the typeface of the surrounding page should have been randomized as well. What we're looking at here is "Are there certain typefaces that compel a belief that the sentences they are written in are true when contrasted with Georgia?".


Generally I detest font geeks, but I'm going to defend them here. First off: yes, it's true that many intellectuals, designers, and hipsters have a genuine prejudice against Comic Sans. Facts need to work a little harder to prove themselves when written in that font. But in a world where we're deluged with typed information from the second we glimpse at our alarm clocks, I think it's okay to have a little prejudice, because we need to filter out at least some of the noise. Like most stereotypes, the Comic Sans prejudice is based on a grain (beach?) of truth. Can anyone really claim that the percentage of trustworthy Comic Sans webpages they've seen in their life is equal to the percentage of trustworthy Georgia pages? Sorry. Geocities happened, people, and I, for one, will never forget it.


This misses the most obvious difference to users, which is that the font changes in the middle of the article. Taken in conjunction with all the other fonts on the page, the harmony of the specific font to all other fonts in the article and on the page is probably the most important factor here.


I would like to ask Patio11 if he has ever done font A/B tests. I'm working on my sales website and the results would be very welcome. In my case I lack the traffic to do any proper testing.


I've never done an A/B test on typeface, specifically, but I've made a few over the years on e.g. font sizes. No major results to report. My intuition is that I'm extraordinarily skeptical that this would matter and largely think the only people capable of perceiving strong differences in the character of a typeface are people who do that professionally plus Thomas. I'm prepared to be wrong about that, though, as six years ago I would have told you that colors could not possibly meaningfully impact conversions and that's just catastrophically wrong.


http://www.bingocardcreator.com/abingo/results

Looks like he's tested a size change and letting users pick their own font, and neither helped.


Following up on blahedo's comment and the questions about what the heck their p-values mean --

This is a nice example of how you can get statistical significance for small effects if your sample is big enough. Their p-values are explained very badly, so I did my own analysis by transcribing their data from those plots. Let's take their weighting scheme for granted. I agree with some other commenters that the sums and counts are misleading, so I instead took average scores per font and computed confidence intervals for those means. The means are indeed a little different, and for some pairs, statistically significantly so.

http://brenocon.com/Screen%20shot%202012-08-10%20at%202.03.0...

But does it matter much? Take the pair with the largest gap, Baskerville vs. Comic Sans, at 0.95 versus 0.79: a difference of 0.16. This is on a scale spanning 10 points (from -5 to +5).

In fact, the standard deviation for the entire dataset is 3.6 -- so just 0.05 standard deviations worth of difference.

Or here's another way to think about it. If a person saw the Comic Sans example, versus the Baskerville example they could have seen instead, how often would they have scored it higher? (This ignores the weightings; it's a purely ordinal comparison. I think this is related to the Wilcoxon-Mann-Whitney test statistic or something, I forget.) So, with independence assumptions (hopefully proper randomization takes care of this), just independently sample from the two distributions many times and compare pairs of simulated outcomes: 22% of the time it's a tie, 40.3% of the time Baskerville scores higher, and 37.8% of the time Comic Sans scores higher. I guess then it sounds like the difference is better than nothing.

Not sure what's a good and fair way to think about the substantive size of the effect. I wanted to take the quantile positions of the means, but realized you can't exactly do that with ordinal data like this (zillions of values share the same quantile value).

I probably missed something, so here's the transcribed data and R/Python code probably with errors: https://gist.github.com/3311340

Now that I'm thinking about it more, averaging the agreement scores seems weird. Maybe it's clearer to use the simple binary agree/disagree outcome.
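
In case anyone wants to replay that pairwise simulation, here's a rough sketch of the mechanics. The score distributions below are placeholders (the real transcribed counts are in the gist above), so the exact percentages will differ:

    # Sketch of the pairwise ordinal comparison described above.
    # The score distributions are placeholders; see the linked gist
    # for the real transcribed data.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = np.arange(-5, 6)  # weighted agree/disagree scale, -5..+5

    # Hypothetical probabilities over the 11 score values for each font.
    p_baskerville = np.array([4, 2, 2, 2, 2, 3, 4, 6, 10, 8, 12], dtype=float)
    p_comic_sans  = np.array([6, 3, 2, 2, 2, 3, 4, 6,  9, 7, 11], dtype=float)
    p_baskerville /= p_baskerville.sum()
    p_comic_sans  /= p_comic_sans.sum()

    n = 1_000_000
    b = rng.choice(scores, size=n, p=p_baskerville)
    c = rng.choice(scores, size=n, p=p_comic_sans)

    print("tie:        ", np.mean(b == c))
    print("Baskerville:", np.mean(b > c))
    print("Comic Sans: ", np.mean(b < c))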


I don't know about typefaces, but taking a screenshot of rendered black-on-white text and saving as .jpg sure has an effect on credibility.


I would like to see it controlled for age group, I remember liking Comic Sans as a child. Would Comic Sans have an effect on children, the same way it seems to for adults?


I'll take the other side here:

There are two axes: engagement and authority. Baskerville is not engaging, but it looks authoritative, so you tend to agree even if you don't know what it says (like a boring professor or politician). Comic Sans is like a boring person in a clown suit: you can't follow what it's saying, and you tend to disagree just because it looks a little stupid.

The more respectable sans serifs are engaging, but not authoritative; Times is both engaging and authoritative.

If you read something in Baskerville, you agree because it looks so boring that you can't be bothered reading it. Georgia, on the other hand, encourages both strong agreement and strong disagreement - people take it seriously, but actually pay attention. No-one takes Comic Sans seriously, because it's hard to read and looks stupid.



"It's going to work! I'm using a very convincing font; it's bold, and has a lot of serifs."

http://www.youtube.com/watch?v=APcuJjCZTMU#t=4m07s


I find it odd that Comic Sans and Georgia change places in the weighted totals.

Now I'm going to petition Randall Munroe to gather more data (thinking of the color survey).


There are a lot of problems here.

The bar charts used to illustrate that article are terrible. They present raw counts for each font, but each font was not presented to the same number of people---they varied from 7,477 (CM) to 7,699 (Helvetica), which is a pretty big swing given the other numbers they're displaying. In fact, when you run the percentages, CM has a higher percentage of agreement than Baskerville (62.6% to 62.4%)!

When we turn to the "weighted" scores, which don't follow any clear statistical methodology that I'm aware of, the bar chart is again presented with counts rather than proportions, and this time with an egregiously misleading scale that makes it seem like CS gets half the score of gravitas-y fonts like CM and Baskerville, when in fact its score is only about 5% lower.

Finally we get to the "p-value for each font". That's... not how p-values work. The author admits that his next statement is "grossly oversimplified", but there's a difference between simplification and nonsense. He says that "the p-value for Baskerville is 0.0068." What does that mean? What test was being performed there? Can we have a little hint as to what the null and alternative hypotheses were?


The biggest problem I see is that Baskerville isn't standard on Windows. They may have specified a Baskerville font-family, but that's not necessarily what the reader saw. The original test article displayed the asteroid passage as text, not as an image, so unless they accounted for the rendering differences among OS's, the entire test seems questionable.

I'm on a Vista system at the moment, and it does have a Baskerville Old Face variant, but "Gold has an atomic number of 79" does not look like the text shown in the article.


Computer Modern is not standard on any system either. Presumably they used web fonts.


You're probably right about the web fonts, but there is still the issue that the same font will look different on Mac and Windows. Heck, on my system there's a noticeable difference between browsers.


I know Firefox renders web fonts (at least Ubuntu from the Google Web Fonts API) in an awful manner. WebKit does it perfectly.


I think you might be on to something, but I am not sure.

One thing that seemed odd to me was that Comic Sans has a negative association that is heavily biased by cultural factors rather than any real, intrinsic human determinant. I would suspect that all the typefaces are going to be more heavily influenced by social norms than anything else. But that's just a hunch.


I'm also confused about the p-value. Also, I'd love for someone who really knows statistics to explain: how are tests like confidence interval and p-value meaningful when (1) your only two choices are Agree & Disagree? (2) when your weighted agree/disagree curve is the opposite of a normal curve (low in the middle and high on the edges)?


This part was glossed over a lot, I had no idea what they were doing either.

On your #1, you could do a comparison of (% Agree) for one font versus another. For #2, the weights on the scale don't matter, you can just compare the means between the groups.


Thanks for answering! I'm sure it's just my lack of elementary statistics, but I still don't understand re #2. I get that the weights aren't important, but the curve shape seems to invalidate the notion of a p-value, because each font had more "strongly X"s than "weakly X"s. When your sample results are clustered at the extremes, what do you do to apply these statistical techniques?

Here is something I tried: arbitrarily choose a font as the "control," in my case Georgia. Then to measure each other font, say Baskerville, randomly pair a Georgia data point with a Baskerville data point and measure how much Baskerville improved agreement. (Comparing different shuffles, each font's mean improvement is pretty stable.) That gives what looks like a normal curve, or at least it is big in the middle and small on the edges. So now I can find a p-value, and my null hypothesis is that changing the font has no effect. I ran a t-test with R, and I got a p-value of 0.2069. That's much less impressive than the article claims. But I assume my approach is wildly invalid, so what is the right approach?


I thought re-analyzing Morris's data would be a fun "homework" assignment to give myself as I try to learn statistics. It looks like the simplest approach is nothing like what I proposed above, but a "two-sample t-test." I performed that analysis and wrote it up here, if anyone is interested:

http://illuminatedcomputing.com/posts/2012/08/font-credibili...
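
If anyone wants to try the same thing without R, here's a minimal sketch of a two-sample (Welch's) t-test in Python. The score vectors below are placeholders, so the resulting p-value is meaningless; the real transcribed data is in the post above:

    # Minimal two-sample t-test sketch with placeholder score vectors
    # (uniform random scores, NOT the real data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    georgia     = rng.choice(np.arange(-5, 6), size=7600)  # placeholder scores
    baskerville = rng.choice(np.arange(-5, 6), size=7500)  # placeholder scores

    # Welch's t-test (does not assume equal variances).
    t, p = stats.ttest_ind(baskerville, georgia, equal_var=False)
    print(t, p)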


About the percentage of agreement, you swapped the values. CM has 62.4% and Baskerville 62.6%. But I agree with the other criticisms.


Check again: 4680+2797 is 7477 and 4680/7477 is 62.591%, for CM, and 4703+2833 is 7536 and 4703/7536 is 62.407%, for Baskerville. The raw number of Baskerville agreements is higher than the raw number of CM, so its bar is higher, but that disparity is just what I'm complaining about (well, among other things).
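
Or, spelled out as a quick sanity check on those numbers:

    # Recomputing the agreement percentages from the raw counts quoted above.
    cm_agree, cm_disagree = 4680, 2797
    bask_agree, bask_disagree = 4703, 2833
    print(cm_agree / (cm_agree + cm_disagree))        # 0.6259... -> 62.6% for CM
    print(bask_agree / (bask_agree + bask_disagree))  # 0.6241... -> 62.4% for Baskerville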


This is how geeks get such a bad reputation. The bar charts do appear directly below stacked bar charts, though I agree the weighted charts without axis markings are content-free. The rest is a pedantic quibble by someone who happens to understand statistical jargon. This is a newspaper article not a paper in a peer-reviewed journal.


Interesting read.

I certainly agree that Comic Sans nudges me towards disbelief (and I'd never read a full article written in this horrible font :-)), while Georgia seems more 'professional' and believable.

Baskerville in my mind is instantly associated with all the books I read - most of those on scientific topics had this or a very similar font. Don't know whether it affects my judgement of what's written compared to any other normal fonts.

Typewriter-style fonts do make texts seem older and therefore, more believable (since they've been around for so long, there must be some truth to them - the standard logical reasoning).


So the font has an effect on how seriously readers take what's written in it. The names of the fonts alone (e.g. Helvetica, Georgia, and Comic Sans) also give off the same vibes.

I wonder... Do the names of programming languages have an effect on how seriously people want to read what's written in them? If given 3 names (e.g. Python, Ruby, Groovy), do people subconsciously rank their seriousness???


I took a class that was very writing-intensive. I was one of the only people to ever actually change the font in my essays from Calibri to Times New Roman, and always wondered if this contributed to the fact that I did substantially better than most other people with very comparable essays.


Regarding Comic Sans use at CERN... I've often found that scientists like hideous designs. It's a way of saying that we work on serious stuff.

A lot of sites at MIT and CMU have that mark... the more prestigious, the uglier.

Of course, it has to be a certain style of ugly.


I'd say not just scientists at CERN. This applies to most (nah... ALL) scientists: researchers, PhDs, professors, etc. They use very hideous design, or more accurately, nothing at all, on their research webpages. And I am talking about a Geocities-esque type of "design" here. Sometimes I even wonder if there is an implicit rule going around along the lines of "real researchers must PRETEND they just learned HTML".


Our lecturer gave us notes on particle physics in Comic Sans. I found them impossible to study from.

Edit: incidentally, she works at CERN on the ATLAS experiment too.


The ironic thing about this article is that it encodes the text images as JPEG. I wonder about image encoding's effect on credibility.


This article does, but the experimental article itself embedded the fonts so they would render properly.


Honestly, the font had nothing to do with my decision - I just trust NASA.


As a scientist, I wouldn't worry about the lowered opinion of someone who judges a scientific text by its font.



