B612 is a highly legible open source font to be used on aircraft cockpit screens (b612-font.com)
429 points by bobowzki 88 days ago | 154 comments



Their own legibility study shows it's about the same as Verdana: https://i.imgur.com/M0C3TkY.png (excerpt from the PDF in the download .zip; the vertical axis is misidentifications). Their design goal was beating their current font (CDS), which seems a bit of a sandbagged goal if you look at the sample in the report. I wonder if we've reached a plateau of legibility for technical fonts after identifying the easy wins in typeface design for screens.


Not disagreeing, but note that Verdana requires a license for certain kinds of commercial use (I think). https://www.fonts.com/font/microsoft-corporation/verdana/lic...


You have mistaken the development version (V0.2) for the final version. A subsequent graph on page 13 shows that there was indeed a big improvement over CDS.


You misread the post, as you are both saying the same thing.


I guess I'm not good at reading nuances ;-) As you can see from the originally linked plot, B612 V0.2 is not universally better than Verdana (say, t is much worse). The original paper also states (p. 13) that the plot was used to guide the development, so they clearly knew that V0.2 was not yet optimal. Hence I assumed that the OP thinks B612 tried to optimize the easier bits (explicit) and did it badly (implied). I should have been more overtly sarcastic...


For those not aware, Verdana happens to be the font Hacker News uses.


If you have Verdana. Otherwise the fallbacks are Geneva and then sans-serif.


For me, Hacker News uses whatever I have set as my sans-serif font, which at the moment happens to be B612.


But is Verdana provided as a webfont?

On mobile, almost no one can see Verdana.


> On mobile, almost no one can see Verdana.

But it's included in iOS (http://iosfonts.com)?


I’m surprised there’s a need for this kind of thing or even much of a difference between most sans serif fonts. It seems there are dozens if not hundreds of open source programming fonts which are all quite legible.


> I’m surprised there’s a need for this kind of thing or even much of a difference between most sans serif fonts.

I mean, look how asymmetrical the characters in this typeface are compared to almost every other sans serif typeface. The way the body of the characters flares out opposite the ink traps is pretty wild. It almost looks like it was designed using deep learning or something, because how the hell could any human possibly come up with those forms.


Huh? I wouldn't even have noticed it's a special font if not for the fact that I clicked through to their website because it was on HN. The edges of the characters look slightly different, but otherwise it just looks like most other sans-serif fonts to me.

Maybe I need to get my eyes checked. (Although last time I went, the doctor said I had better-than-perfect vision in both eyes.)


Nope. There's nothing wrong with your eyes. Some people are just better at noticing certain things, compared to others.


Assuming you're in your 30s, statistics says your hearing is about the same as mine, but my ability to distinguish chords is better. (Meaning: if I had to bet either way, I'd say this - I could lose the bet.)


Take a look at some of the "dyslexic" fonts. The asymmetry is very useful in dynamic situations: low/high/variable G environment, shaking, smoke in cockpit, O2 mask on, loud noises, lost eyewear, multiple distractions (alarms), etc.


Indeed, and none as legible as a serviceable serif font.

Legibility was a solved problem literally centuries ago.


> Legibility was a solved problem literally centuries ago.

For print, maybe? But digital displays are a whole different kettle of fish which didn't exist centuries ago.


How much of sans-serif's digital advantage was predicated on low-resolution displays compared to print?

(Not a rhetorical question; I'm genuinely curious. What was true for NTSC-interlaced isn't necessarily true for 8K.)


> I wonder if we've reached a plateau of legibility for technical fonts after identifying the easy wins in typeface design for screens.

I mean most likely the real design goal was to create a cool futuristic looking typeface that would make the planes themselves feel more technologically advanced. I have no doubt legibility was one of the main requirements, but let's not kid ourselves on why this was really commissioned.

They did a great job though. I'm not a typography expert or even a designer, but to my untrained eye this looks like one of the most original and visually impressive typefaces I've seen in a while.


I'm surprised they didn't cross or dot the zero. Surely differentiating O from 0 is important when eye fatigue is a problem?


https://en.wikipedia.org/wiki/Slashed_zero#Similar_symbols:

”The slashed zero causes problems in all languages because it can be mistaken for 8, especially when lighting is dim, imaging or eyes are out of focus, or printing is small compared to the dot size.”

I think that can be a valid reason to use regular zeroes on all kinds of numeric indicators.


In Danish and Norwegian it can look like ø Ø, which really ruins readability.

Edit: mostly a problem if you are from said countries, though


https://en.wikipedia.org/wiki/Slashed_zero#Similar_symbols:

”The slashed zero causes problems in all languages because it can be mistaken for 8, especially when lighting is dim, imaging or eyes are out of focus, or printing is small compared to the dot size.”

I think that can be a valid reason to use regular zeroes on all kinds of numeric indicators in airplanes, where context makes it clear that O’s cannot be present.


The page header on the site shows a sample avionics display, some of the 0's are slashed, some aren't.

http://b612-font.com/images/photo-header.jpg


Looks like there's a variant for digits only (no slash) and a variant for mixing in with the regular alphabet, where it gets a slash. I'm not sure what to make of it, but great observation!


"FL380" on the far right got no slash.


The picture in the header does include slashed 0's though.

Edit: the PDF included in the download has both variants of 0, the slashed one at Unicode point E007. So it seems it's up to the user to pick one.


I imagine that a slashed zero would be used in places where both letters and numbers are possible, and a non-slashed one where only numbers are contextually expected. This seems to match the header picture: "5240N" uses a slashed zero, but all numbers-only indicators to the left are non-slashed.
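That context rule is easy to sketch. The helper below is purely hypothetical (B612 itself just ships both glyphs, with the slashed zero at U+E007 in the Private Use Area); a display layer could choose between them roughly like this:

```python
# Hypothetical sketch of the context rule described above (not part of B612):
# use the slashed-zero glyph only in fields that can mix letters and digits,
# and the plain zero in numbers-only fields.
SLASHED_ZERO = "\ue007"  # B612's slashed zero in the Private Use Area

def render_zeros(field: str) -> str:
    """Swap in the slashed-zero glyph when the field mixes letters and digits."""
    has_letters = any(c.isalpha() for c in field)
    has_digits = any(c.isdigit() for c in field)
    if has_letters and has_digits:
        return field.replace("0", SLASHED_ZERO)
    return field  # numbers-only (or letters-only) fields keep the plain zero

# "5240N" mixes letters and digits, so its zeros get slashed;
# a numbers-only readout like "380" keeps plain zeros.
```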


I would assume this is true. In French (where Airbus comes from) 'West' is called 'Ouest'. "5240O" would be a pain.


On the other hand, in Danish and Norwegian, East is "Øst", making slashed zeros awkward together with compass directions. Dotted zeros would be better.


English is the universal language of aviation, so I'd expect the displays would be in English only.


I can’t think of any particular instances where an O or 0 would mix in the cockpit. Switches and panels are always labeled with letters in the form of words or abbreviations. Gauges use exclusively numbers for measuring some quantity, for example EGT. In cases where numbers and letters might mix, there are specific rules about not using O or 0, or even 1 and L.


Airport identifiers. Both O and 0 are valid.


Yes but I believe they’re not permitted to both be used in an individual airport identifier. KORD or 07G are valid but 07O would not be. Just the same as they’re not both permitted in the same aircraft registration.
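The rule described here reduces to a simple predicate. This is purely illustrative (the actual conventions for identifiers and registrations vary by country and registry):

```python
# Illustrative check of the claimed rule: an identifier may use the letter O
# or the digit 0, but never both. Hypothetical helper, not an official spec.
def mixes_O_and_zero(ident: str) -> bool:
    return "O" in ident and "0" in ident

# "KORD" (letters only) and "07G" (digit zero only) would pass;
# "07O" mixes the digit 0 with the letter O and would be disallowed.
```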


A lot of things that are permitted are things people do not actually do. And this specific rule (how to use numbers in airport codes) is completely country-dependent; there is no standard.


Interestingly the picture in the background of the header seems to have some 0s with a cross through and some without.


The font provides a slashed zero in the Private Use Area at E007.


How often do you see 0/O ambiguity in a non-programming context?


Serial numbers for warranty claims, though commonly look-alike characters aren't in the allowed character set... But if slashed zeroes were used, it'd add another character to the character set.


Last night my wife was trying to enter a Tesco (supermarket) voucher code in a website and got 0 and O wrong. The code she was emailed did not use a dotted 0. She was reading the code on her phone and entering it into a web page.


Daily: serial numbers for regulated cannabis lots contain I, L, 1, 0 and O, and humans reference them hundreds of times a day.


Passports. The German passport has an alphanumeric number but no O, to avoid 0/O ambiguity.


Yes. Is that only the case for German passports, or does it apply to all passports generally?


It is a rule for German passports. US passport numbers are digits only. AFAIK there is no general rule for passports.

Also, some passport numbers stay with you for life (China); others change with every passport.


You usually know whether it's a number or text. It's more of a problem in product codes etc. But yes, I agree, it would be better with crossed zeros. I even cross zeros in my handwriting.


What if uppercase O were mapped to lowercase o? Then only o/0 are available. No chance of confusion, but it could be distracting or annoying to read... end random thought...


If both uppercase and lowercase "o" look the same, doesn't that ensure confusion?


For comparison, Boeing uses Futura in their cockpits:

https://en.wikipedia.org/wiki/Futura_(typeface)


The 'p' and 'q' are mirror images here, which could be a problem for people with dyslexia. The new B612 font has clearly distinct shapes even after mirroring.


This sounds like a fair critique, but I'm not a dyslexia expert. Some honest questions:

Doesn't dyslexia involve transposing the positions of characters, not mirroring individual characters? Or do some/most dyslexics also sometimes misread letters as their mirror-image counterparts?

Considering dyslexics have difficulties with already very distinct shapes (i.e., most letters are pretty unique), would a relatively subtle difference between p and mirrored q actually reduce the error rate? Or is that "premature optimization" in that the drop in accuracy due to the p-q mirror images is dwarfed by the gain in accuracy of using Futura?


I remember reading about dyslexics liking Comic Sans because the letters have distinct shapes even after transformation. This then inspired the creation of specialist fonts that are supposedly better, but since I last read about it there's been some research suggesting it doesn't actually help, so I retract my previous post.

"Dyslexie font does not benefit reading in children with or without dyslexia":

https://link.springer.com/article/10.1007%2Fs11881-017-0154-...


The ‘i’ and ‘j’ seem like they could easily be mistaken.


Looks like it's all caps, so it doesn't matter: https://blog.epicpxls.com/the-font-that-landed-on-the-moon-a...


To this day, I marvel during turbulence at how much easier it was to read pointer gauges than modern tape/numerical EFIS displays. Don't even get me started on the touch screen trend. Thanks for trying to make the EFIS more readable, Airbus.


What does EFIS stand for? And, could I get you started on the touch screen trend?


Touch screens installed in the instrument panel are very hard to use in even light-to-moderate turbulence. Have someone shake your phone around while you try to dial a number and you’ll have some idea of the difficulty. Knobs, and to a lesser extent physical buttons, give you something to hold onto. It may not sound like much of a difference but in my experience it’s much, much less error-prone to have hardware inputs when the aircraft is getting bounced around.


Most touch-screen panel-mount avionics provide some kind of anchor for the static part of your hand while you operate. (Rest your thumb here, or the pinky side here, and operate with the index finger.)

I was worried about the issue, but in light turbulence it’s no issue IME, and in moderate it’s maybe less certain than knobs. But I bet that even there I could still enter an accurate flight plan update (whether a new full route or just a direct-to shortcut) faster on the touch GTN 750 than on an old (multi-knob) GNS 530. (That’s not just touch; it’s partially the discoverability and access to “more than the -D-> key” functions.)

I am glad the critical autopilot inputs (HSI heading bug) and other flight controls are not touch, though.


I like the anchors - but I use them to anchor me for pushing the hard buttons and twisting knobs instead. Of course, I personally avoided putting in (mostly) button free avionics like the mentioned Garmin GTN series in my plane. I fly them in other planes, and they are well featured, but I just personally prefer hard buttons.

Also, knobs in turbulence are fun. "Turn left 15 degrees" can become a random amount on your heading bug with an unfortunately timed thump!

Anyway, bottom line is that I find it easier to see the relative angles and trends of the pointers in old analog gauges in turbulence than to read the tapes. The angles could be interpreted at a glance as expected or unexpected, even if the absolute value took another moment to read.

Imagine if our AI (attitude indicator) were two tapes, like the EFIS horizontal and vertical deviation indicators!


Touch screens in cars are also a huge step backwards in usability from the buttons and knobs they replaced. I'm not sure why the switch happened.


For cars, my guess would be that a) it allows you to entirely disconnect the UI work from the work on the actual car and run them in parallel, and b) it looks cool and futuristic in ads. I don't see how either would apply to planes, though. There, I suspect it's an attempt to manage the amount of data a pilot needs to see at any given moment, and to enable equipment changes without rebuilding the cockpit.


The worst part of this trend is that now that the touch UX is handled entirely in software, the remaining physical controls often are as well. In my 2018 Subaru Outback, for example, the physical volume knob doesn't work until the system has "booted" completely, even though it might already be blasting the music or radio resumed from the last time it was shut down. So there are those 2-3 seconds where there's no way to reduce the volume, mute, anything.


> Touch screens in cars are also a huge step backwards in usability from the buttons and knobs they replaced. I'm not sure why the switch happened.

Because 'ooh, shiny!' That's my best guess, anyway.

Although there is a benefit to being able to deploy a new control configuration with a software change, I think it's far outweighed by having something physical to hold when operating the controls.

… and anyway, one wouldn't want the controls one is used to to change.


Specifically, you can't use a touchscreen without looking at it. You can locate and press a button or turn a knob without needing to look at it.


First, more stuff to configure; then, cost.

Switching to modal screen-based interfaces happens when there isn't enough room on the panel in question to supply a separate physical control per thing that needs controlling.

Once you've had to do that, though, every remaining knob is a cost that could be removed by adding a few lines of code to the interface software.

A single touchscreen for everything is the natural endpoint of this process.


I can already notice this just from driving a car with a touchscreen-driven center console v. one with buttons and knobs. I'm hesitant to buy a new car for a myriad of reasons, that being among them.

Plus, fixed touchscreens on cars, planes, equipment, etc. tend to be resistive ones, and those can be very annoying to use even when they're perfectly still.


Electronic Flight Instrument System: the flight-deck information displays, but digital rather than mechanical.


Two other sans-serif fonts designed for legibility and widely tested:

"Transport" font used in UK road signs: https://en.wikipedia.org/wiki/Transport_(typeface) http://www.roads.org.uk/fonts

"Mandatory" font used in UK number plates: https://en.wikipedia.org/wiki/Mandatory_(typeface) http://www.k-type.com/fonts/mandatory/



Highway Gothic for the U.S.- https://en.wikipedia.org/wiki/Highway_Gothic


What's the deal with the license? This page mentions only the Eclipse Public License, while the actual Polarsys page and the GitHub repo mention/include three different licenses: the EPL 1.0 (!!!), Eclipse Distribution License 1.0 (BSD-like), and SIL Open Font License 1.1. Should I take that to mean it's multi-licensed and I can use this font under whichever of those licenses I choose? If so, what's the point of making EPL and SIL licensing options when the EDL has far fewer strings attached (and is more obviously compatible with other licenses, notably the GPL)?

Relevantly, the EPL 1.0 is incompatible with any version of the GNU GPL (as is the EPL 2.0 by default).


B612 is the name of the asteroid from Le Petit Prince (The Little Prince). The narrator is a pilot.


So was the author, Antoine de Saint-Exupéry.


I had a chuckle at the background image using ALLCAPS, which is infamous for its lack of legibility - e.g. https://en.wikipedia.org/wiki/All_caps#Readability


That link claims that ALLCAPS tend to be read letter by letter.

I find that claim to be completely false. I am learning to read text written in the Cyrillic alphabet, and there I actually do read words letter by letter; I'm even surprised when I finish reading a word.

In contrast, I can read text in all caps at about the same speed as lowercase text. If there's any slowdown, I'd say I read all caps at 95% of the speed I read lowercase.

Later the same page says this: "all-capital text was read 11.8 percent slower than lower case, or approximately 38 words per minute slower".

That later claim completely contradicts the former claim of letter-by-letter reading, which is IMO about five times slower, not just a few percentage points slower.
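For what it's worth, the two Wikipedia figures are at least internally consistent with each other: a 38 wpm slowdown that amounts to 11.8 percent implies a lowercase baseline of roughly 322 wpm. A quick back-of-the-envelope check, using only the numbers from the quote above:

```python
# Back-of-the-envelope check of the Wikipedia figures quoted above:
# if a 38 wpm slowdown equals an 11.8% drop, the lowercase baseline follows.
slowdown_wpm = 38
slowdown_fraction = 0.118

baseline_wpm = slowdown_wpm / slowdown_fraction
print(round(baseline_wpm))  # about 322 wpm lowercase baseline

# An 11.8% penalty is nowhere near the ~80% penalty that genuine
# letter-by-letter reading (roughly five times slower) would imply.
```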

In any case, ALL CAPS text is perfectly legible, and it is actually more legible than using something like cursive, including lowercase cursive.


Hmm, I could imagine that the legibility of sentences and paragraphs, where you are pretty much reading a word at a time, is different from the legibility of things like codes where you are reading a character at a time. And for short words like table column headings, I suspect there isn't much difference either way.


The association of all caps with readability likely comes from handwriting versus block printing.

I know that for myself, when writing for others I block print rather than use cursive so that they can read it more easily.

This has likely carried over to type as tradition rather than actual usability.


Only the parentheses look like brackets. Everything else looks quite nice. But for lisp programming, those bracket-looking parentheses are ugly...


Lisp is the canary in the mind shaft.

All programming languages have a bracket problem; Lisp just has a more intense bracket problem. Until the recent advent of 4K and "retina" displays, it wasn't practical to have significant weight variations in screen fonts for programming. It's now practical, and it's easy in font editors such as Glyphs Mini to swap in or modify brackets to be a lighter weight.

I've made many Lisp coding experiments: with preprocessors that eliminate most parentheses, with alternate symbols, and with various weights and sizes of parentheses. There's something to be said for the theory that parentheses should be lower case, just like identifiers. (I'm reminded of the "GET OFF MY LAWN" effect in Common Lisp; one can shut off the upper case defaults but most people don't stay around long enough to find out.) However, my favorite approach is to make parentheses light, but not too light. There's a sweet spot where you actually love the lightweight brackets, in Lisp code and everywhere else. You're conducting an orchestra, not engaging in a knee-jerk debate; if the trumpets play too loud, teach them to play at the right volume.

It took me ten minutes to swap in ()[]{} from SourceCodePro-ExtraLight and copy a period into the center of the zero. The result is a fine Lisp font, but not my favorite. I like Input Mono, or Courier Prime Code (both similarly modified).

http://input.fontbureau.com/

The Input download can be customized to look nearly identical to B612, only better, and without the O0 and () problems.


> Until the recent advent of 4K and "retina" displays, it wasn't practical to have significant weight variations in screen fonts for programming.

Really? Bolded keywords and other tweakery was common on 512×342 OG Mac screens.


Did you like how that looked? I didn't. The resolution wasn't ready. The resolution certainly didn't support going lighter.

I'm trying to understand why lightweight Lisp parentheses haven't been a thing for 50 years. It's so obviously right once one tries it, with sufficient resolution. My guess is that "sufficient resolution" is both key, and recent.


I'd guess that most people who value typographic nuance and clarity find Lisp's lack of concessions to syntax makes reading code an extremely clunky experience. I think Lisp's (lack of) syntax adds up to a set of tradeoffs that actively repels visual thinkers/typography folks, compared to Miranda-style syntax.

NB Haskell folks are not any more likely than Lispers to think visually. A lot of Haskell code is difficult to read. But it does feel like the Miranda-style syntax (particularly pattern matching) has made the first few steps towards eliminating unnecessary structure-related cognitive load on the programmer.

Lisp partisans will probably reject this opinion, but IMO the presence of strings of matching parens in Lisp isn't something that needs to be restyled with a better font, it's something that needs to be designed out of the language.

Edit: I know this ground has been well-covered in the past. Usually the advice is that by using a proper editor and relying on indentation, parens just stop being a noticeable problem. Whether just restyling parens is cosmetic or not, it seems less powerful even than that advice.


Scheme is a stunningly beautiful language with inferred parentheses:

    define | edge? g e
      let
        $ es | edges g
          e2 | reverse e
        or (member e es) (member e2 es)
I used to rely on a preprocessor that translates back and forth to standard Scheme:

    (define (edge? g e)
      (let
        ( (es (edges g))
          (e2 (reverse e)))
        (or (member e es) (member e2 es))))
One gives up being able to use standard tools. For Emacs or Vim, one can write one's own tools, but it's nice to be able to try out everyone else's work first.

The central question for any programming language isn't how comfortably it welcomes casual newcomers; it's how effectively it creates a human:machine interface for the committed. Most criticisms of how Lisp looks can be likened to criticisms of a frozen screen from a crashed video game. One needs to play the video game. Lisp acquires its meaning in short time scales from the experience of syntax-aware editing, a live experience. Lisp acquires its meaning in long time scales from the experience of using macros to rewrite the language to exactly suit the problem at hand. Haskell is safer, faster, more densely expressive, but not as plastic. Lisp appeals to woodworkers; other languages appeal to people who like to buy nice furniture, sit in it and get to work. I can see both sides, and I've worked hard to experience both sides.


> beautiful language with inferred [...] a preprocessor that translates back and forth to standard

Most focus is on transforming code for machine consumption (compilation). But underappreciated, I think, is that code can also be transformed for humans: reversibly (editing), and not (analysis). Colorization and outlining are common. Less common are syntax transforms like yours. Fortress-like mathification. Reverse templating and macros. Intensional programming. The idea of a collaborative-compilation environment with an editor-like UI.

So much potential fun. But so little R&D funding. And a discoordinated profession. So decades slide by.


> But underappreciated I think, is that code can also be transformed for humans.

It's called pretty-printing. The name is underwhelming, but there are some advanced techniques for it.


> One gives up being able to use standard tools.

Including rudimentary, POSIX vi.

E.g. in the normal Scheme code, I can put the cursor on this line:

        ( (es (edges g))
          (e2 (reverse e)))
then type J to join¹ the following line to it:

        ( (es (edges g)) (e2 (reverse e)))
The result is still the same program. In the inferred notation, the program is buggered if we do the same thing.

1. http://pubs.opengroup.org/onlinepubs/9699919799/utilities/vi...


I don't have much experience with Lisp, so I'm definitely trying to convey the impression of a typography enthusiast rather than a coder. I'm familiar with the kind of claims you are making for Lisp, but I'm also conscious of the bad PR that Lisp enthusiasts get sometimes. That might sound like a superficial thing to be worried about, but it absolutely is not, for those inside and outside the Lisp world, because every newbie has to choose whether or not to invest time in a particular programming culture.

In terms of Lisp vs the ML family of languages (including Haskell), "absence of syntax" as a guiding principle of Lisp seems to be in contrast to a language community, like Haskell's, that spends more time considering how it will write its own compiler than anything else.

With that in mind, I think it's a bit tendentious to dismiss Haskell users as passive consumers who don't consider the nuts and bolts of their programming language, and Lisp programmers as woodworkers. It's actually very difficult to get anywhere significant in Haskell without deeply committing to learning the Haskell way of doing things (currying, evaluation strategies, monoids and monads, type classes, even continuations etc.) I'd say, from what I've seen, that Haskell has more of a particular, distinct flavour than Lisp, which goes along with coming from that "MetaLanguage" tradition, concerned with codifying (in the language) the routine ways to manipulate structures. It's a culture, and one that happens to be a little bit elitist and not very good at transmitting itself to outsiders. But maybe the only languages that have no fixed cultural elements (abstractions/tropes) to speak of are rudimentary Turing tarpits.

Metaphors about craft vs consumerism aside, a good Lisp programmer and a good Haskell programmer will both know the techniques they are applying. It's not like Haskell is a high-performance car with the hood bolted shut. It can appear that way because they're always trying to polish the abstract principles of the thing and make it more elegantly expressive—it's a research project in that sense. But by the same token, to get the real benefits you have to buy into that academic culture and do the same kind of conceptual thinking you would in Lisp.


Though, in which mainstream non-Lisp language can you feasibly work without a good editor and without relying on indentation for grokking the program structure?


> I think Lisp's (lack of) syntax adds up to a set of tradeoffs that actively repels visual thinkers/typography folks

It's more a matter of being aware that Lisp syntax works differently from what most people are used to. The current Lisp syntax was originally an accident, but it survived because it has some interesting and unique properties.

Lisp code is relatively unique because it is written on top (!) of a data syntax.

Thus it has a two-stage syntax: at the bottom there is a syntax for s-expressions, which is purely a lightweight syntax for data, not a programming language.

On top of that there is Lisp syntax, which is defined in terms of structured data. This enables code manipulation via programs over hierarchical, list-like data. It also enables user-extensible syntax via Lisp macros. Thus a program in Lisp syntax can be written as text as serialized data, but it can also be computed in memory.

Thus we see code which visually has a layout of hierarchically nested lists, and programs have been written which can produce such a layout from code as data.

Lisp has an aspect of direct manipulation of code as data. You may want to design it out of the language, but few actual Lisp users would want that; additionally, it has been tried multiple times and has often failed. For example, there was a large effort to design Lisp 2 (a 'second generation Lisp') with a non-s-expression-based syntax; a large effort which failed and which in the end had very little influence on how Lisp programs look. There are versions of Lisp like RLISP which had a syntax not based on s-expressions and which is still used in a computer algebra system. Dylan was another attempt at a Lisp-like language without s-expression-based syntax.

> Usually the advice is that by using a proper editor and relying on indentation, parens just stop being a noticeable problem.

Another piece of advice is to use an interactive programming environment for Lisp, to actually make practical use of the unique features of Lisp syntax.

To a Lisp programmer it is more useful to support this dynamic aspect of code as data AND its visual representation than to lose these capabilities in favor of the idea of code as static text. Tools which support this were, for example, the Interlisp-D structure editor for Lisp code and the Lisp listener for Symbolics Genera. The latter keeps the connection between the external textual representation of Lisp data and code and their underlying data representation.

Otherwise we would be using ML (nowadays SML or OCaml), which was initially a functional language on top of Lisp, written without s-expressions. Languages with more conventional syntax already exist; languages which combine syntax with a data representation are not so common...


> All programming languages have a bracket problem

Haskell doesn't. It's possible to fix the bracket problem, but it's not C-like so people will be forever complaining and refusing to learn your stupid crazy language.


Lighter??? You mean thin? And lowercase as in what, short like the 'x'?

That is exactly the opposite of a good programming font. These characters should be extra-tall. They should go at least as low as any other ASCII character, and at least as high as any other ASCII character. An extra pixel (or several on a 4K display) upward would be even better, up there with the accented uppercase letters like A-with-ring.


I'm using the standard terminology in the font industry. The font I borrow brackets from is called SourceCodePro-ExtraLight, not SourceCodePro-ExtraThin. And yes, lowercase as in the same height as lowercase letters. That was an interesting experiment that I rejected. I prefer light (thin) brackets that enclose everything, as you say.

Lisp is in decline in part because of the community tendency to circle the wagons in the face of any criticism. My single point is that Lisp parentheses are always too heavy, and readability improves with lighter parentheses.

I've also long used Unicode replacements for => and other combined symbols in Haskell. Then I discovered Hasklig, solving this problem instead through ligatures. Any problem one can solve in the font itself is transparent to the rest of one's tool chain. I'm a convert.


Instead of using thinner glyph shapes, just change your syntax highlighting to reduce contrast on the brackets.


> lowercase as in the same height as lowercase letters

Lowercase letters are just as tall as capitals...


AaCcEeGgMmNnOoPpQqRrSsUuVvWwXxYyZz.


You're missing some letters.

And even among the ones you listed, g, p, q, and y are fully as tall as every capital except Q.


This might be a weird question, but it's on topic.

What makes fonts convey other information? Why is comic sans so derided? (It obviously is.) What makes the emotion of a font come out?


I always thought it was because it was used by people who thought it made their poorly written crap seem more personal and human, when in reality all they were doing was one exceptionally low-effort action: pick the one wacky font that shipped by default with Windows. But instead of making their writing actually seem more personal or human, it just made it look cheap and unprofessional because it was equally associated with the sort of output you'd expect from any child playing with a 1990s-era Windows computer.

(The right answer is instead to embed personality and humanity into the words themselves, which is comparatively much higher effort and indeed beyond the skill of many people.)

Over time we've grown accustomed to working within pre-packaged emotional bounds (e.g. the limited set of emojis) and the meme-status of the font has long since passed so it would probably not be quite the same faux pas today as it was two decades ago.



This is a great question and I’d love to see some links / wild-ass guesses


It probably has a lot to do with things like how much it looks like actual handwriting. Whether you think it's bunk or not, there are people who think they can tell a lot about your personality through handwriting analysis and there are neurological conditions that impact handwriting, so it's not just outright crazy talk to think handwriting reflects something important about the author.

So my assumption would be that to whatever degree humans associate handwriting with things like personality or personal disability, we will tend to associate those concepts with fonts that touch on similar patterns, if that makes sense. (Maybe not explained very clearly.)


Possibly. And I don't think it's bunk... but there's something about emotion tied up with fonts and handwriting. As to what it is, I have no good idea how to discern it.

But again, I cannot refute the fact that Comic Sans is so derided and laughed at. Nary a week goes by without someone guffawing at that typeface on Reddit or social media.

Any ideas how to measure this? Seems like emotion is the key here.


When you look at neurological studies, emotion is tied to the ability to make snap judgements. People with low affect can't make snap judgements. They lack that gut feeling that says either "I have a good feeling about this!" or "Nope!"

Emotion is basically a summary of past experience. Relying on "your gut reaction" is a bit more error prone than actually analysing it, in part because it is rooted in personal prejudice, in essence. But in a survival situation, being able to quickly go "Nope! I'm out of here!" can be the difference between life and death.

So emotion based judgements tend to be fairly strong ones. They tend to be of the "Don't confuse me with the facts, my mind is made up" variety.


Connotation. The glyphs themselves are mere shapes. We just come to associate certain styles with certain contexts through personal experience.


How does it compare to OCR-A? OCR-A font was made to be recognizable by 60s OCR tech. https://en.wikipedia.org/wiki/OCR-A


OCR-A is my favorite monospace font. Sadly, Unicode support is rather limited, if present at all.


From your link:

> The design is simple so that it can be easily read by a machine, but it is more difficult for the human eye to read

So, unless your plane is piloted by a robot from the 60s, quite favorably.


I made my company logo with that font (c2000) - LOVE IT



The period is really close to the word it follows; it looks very strange in the monospaced version for programming.


Personally, I still haven’t found a better font for reading and writing code than source code pro: https://fonts.adobe.com/fonts/source-code-pro

B612 seems much more like a Helvetica derivative.


I wish I understood why you like Source Code Pro so much. For example, in Emacs, I just switched to Source Code Pro (on "normal" or "medium" weight). With the frame maximized vertically, I could see 60 lines of text, and the letters were small and hard to read, with lots of space between lines. https://i.imgur.com/M1BFDty.png

I then switched back to Hack, at the same size. With the same frame size, I could then see 64 lines, and the letters were all larger and much, much easier to read. https://i.imgur.com/99Hi6kh.png

If I were to set Source Code Pro to a size at which the letters were the same vertical height as Hack, I'd see something like 50-55 lines of text, instead of the 64 I see with Hack.

What am I missing? Am I doing something wrong that makes Source Code Pro look bad?


You're not doing anything wrong. That's just how SCP looks. I agree it is not the best programming font and find it a bit odd to hear people saying it's the best, but that's subjectivity for you. Hack is a good one, though I still find it too vertically compressed in terms of line spacing. Input Mono Narrow is what I prefer personally. I get fewer lines per screen as a result, but hey, that's what scrolling is for. And if a chunk of code is too big, that's a cheap signal that it might benefit from refactoring.


Source Code Pro was a step down from Inconsolata that preceded it.

It's not hard to see why people want to make more infinitesimally-different fonts with their own name and a funny ampersand or (in this case) parentheses. What is never explained is why anybody else should care.


> ”Source Code Pro was a step down from Inconsolata that preceded it.”

I’m not aware of any shared history between Inconsolata and Source Code Pro. Do you know of any? On the other hand, Inconsolata is heavily influenced by Lucas de Groot’s Consolas.


As I noted, Inconsolata existed before they began work on SCP. It is an objective improvement on Consolas, and thus worthy of attention and change.

Normally, when putting in the effort to make something new, one tries to make it better than what came before, on some axis. Introducing a new thing less good than what came before is a waste of our time and a distraction from the better things. I don't doubt that designing SCP or B612 was educational for its designers, but that is not a good enough reason to adopt it, or even to study whether to adopt it.

The new thing should be better. Why is that hard to process?


> "Introducing a new thing less good than what came before is a waste of our time and a distraction from the better things.... The new thing should be better. Why is that hard to process?"

Using "good" and "better" implies that there's some objective measurement against a common set of criteria, which is not necessarily the case from the standpoint of the designers, nor for those who choose to use the fonts and for what purpose. Better according to you? Sure, but I'd frame that as an opinion rather than objective fact.


Why should B612 be better for programming than Inconsolata, Consolas, or Source Code Pro?

For that matter, I don't think you can say that any one of those fonts is better than any other, for anyone but yourself. I find Inconsolata too narrow and cramped on the screen, compared to the wider SCP. And the cute lowercase "t" in Inconsolata is by comparison lost on me after 8 hours of coding, and I don't notice its absence in SCP.


In fact, aside from its weird glyph choices for some code points, SCP is very much like Consolas or Inconsolata one point size up, but with a scan line removed -- e.g., SCP 9 is pixel-for-pixel a squashed Consolas 10, with trivial alterations. So it might be called "Consolas Squat with some weird glyphs".


I appreciate the education on typeface design, but that's not what I asked.


I wouldn't call Inconsolata an improvement on Consolas. They're certainly similar, but personally I find Consolas to be far neater. Of course, this is subjective - but that's the point; it's not an objective improvement.


Source Code Pro is a very good font. My only real problem with it is that it's a touch too wide. One of the things I really like about it is the wide range of weight and the true italic.


Note that raphlinus is … the creator of Inconsolata (https://levien.com/type/myfonts/inconsolata.html)!

FWIW, I really like Source Code Pro — it's my default font everywhere. Inconsolata is nice, too, but I like the little hook on the lowercase 'l' in SCP. Also, SCP abbreviates to the name of a Unix command, which is cool.


I use, to this day, Screen from SGI. Someone converted it to ttf.


Iosevka is better


This is one of those areas where personal choice will come into play, but, since you brought it up, I haven’t used it, but have looked through it:

The 0 can easily be confused with 8. I personally don’t like the serifs on the bottom of a few letters, like ’i’. And then some others like ‘v’ have an awkwardness to them.

Do you have specific reasons why you prefer it?


You can download variants with an open zero or no serifs from the GitHub page. Also, I think the way it is spaced is above average. I can fit more text on screen with it and it is still readable, due to the letter height.


Why didn’t they put horizontal lines on the top & bottom of the uppercase I? I appreciate that I/i/l/L all look different, but if I was reading a word in that font for the first time and it had an I but not an L it could be easily confused.


I thought that was quite strange as well, particularly with the stated objectives of the font.

Would be interesting to see if there was any discussion about it.


On a similar note has anyone ever come up with a way to differentiate k/K more in regards to "not all characters are on screen to compare"-ability?


If the font works, I suggest we use it on airports flights information displays too. Those are not always very legible... (with the slashed 0, of course)


On Windows, it has quite an odd look in the font viewer. It seems like it is not being anti-aliased for some reason. The unusual cutouts also seem very prominent on a 4k screen. I'll probably still try it at work, but I am guessing this is not going to displace my current font of choice. Thank you for sharing!


I can't seem to get the font to show in vscode. I've installed the ttf fonts to ~/.fonts and refreshed my font cache, but "B612" as a font name doesn't seem to match anything, neither does "B612 Mono" or "Regular"


If you are using Linux, put the (ttf) fonts in ~/.local/share/fonts/TTF/ instead.


The fonts are still accessible if they are in home; I found `fc-list`, which gives the name for each of the font files.
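For reference, here is a sketch of the usual sequence on a modern Linux distribution (assuming the B612 release archive has been extracted to the current directory; exact file names and paths may vary):

```shell
# Create the per-user font directory if it does not exist yet
mkdir -p ~/.local/share/fonts

# Copy the TTF files from the extracted release
cp B612-*.ttf B612Mono-*.ttf ~/.local/share/fonts/

# Rebuild the fontconfig cache
fc-cache -f

# List the family names fontconfig reports; these are the names
# to use in editor settings (e.g. "B612", "B612 Mono")
fc-list | grep -i b612
```

The family name printed by `fc-list` is the string vscode expects in `editor.fontFamily`, which is how you can catch mismatches like "B612 Mono" vs. "B612Mono".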


On a vaguely related theme, are there any open-source frameworks or designs for building things like embedded industrial dashboards or mission-critical control software? I’d love to build on something that already incorporated best practices.


The colon and semicolon seem unusually small compared to other characters, making them rather hard to distinguish from one another. Trivial for a pilot perhaps, but it rules the font out for me as a possibility for programming.


I’ve got a couple of different radar screens that will look great with this font!



Why leave zero/oh confusion possible by not slashing the zero?


It is a very wide font compared to programming fonts, which tend to be taller than they are wide.


I'd love it if HN could switch to this font. We use Verdana anyway, and B612 is close enough to it.


This is not a font to read prose in. It's also restricted to classical Latin letters, which makes it a poor choice when text includes other letters.


You can switch to it using a user style sheet in your browser.
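A minimal override might look like the following (the selectors are assumptions based on HN's current markup, and the rule would go in a browser user style sheet such as Firefox's userContent.css or a style-override extension):

```css
/* Hypothetical user style for news.ycombinator.com:
   prefer B612 if installed, otherwise fall back to HN's defaults. */
body, td, .title, .subtext, .comment {
  font-family: "B612", Verdana, Geneva, sans-serif !important;
}
```

Since B612 is listed first, the change only takes effect on machines where the font is actually installed; everyone else keeps seeing Verdana.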


Yet Another sans-serif face, with dubious claims for legibility, as usual only vs. other sans-serif faces.

When will they learn? When will we?


Learn what?


That where legibility matters, a sans font is the wrong place to start, or stop.

Sans faces appeal to graphic designers for graphic-design reasons that have no connection to merit as a means to convey useful information clearly and quickly. Graphic designers prefer details that make text less legible for the actual people obliged to read it, who get no say in what is used.


I've been trying out many different fonts recently and settled on sans-serifs being more comfortable to read (in terms of word-shapes).

Maybe it's a matter of exposure? If I spent more time reading books than blogs, I might find serifs more comfortable?

Update: it looks (to me) like the biggest differences are in peripheral vision.


I have experimented with this for a long time, and I find that serifs make more sense for books, and generally large volumes of uninterrupted text, while sans-serifs make more sense for labels and other chrome.

Interestingly, on the web, I still prefer sans-serif (Verdana especially). But perhaps it's because more than a page of two of text is relatively rare, and most sites are really more like apps in layout - a menu, various labels and buttons etc. Or interactive discussion like here, where each comment stands by itself.

For book readers, Amazon's Bookerly is just perfect for reading hours straight. It's not as condensed as many serif fonts, and it keeps all strokes full-bodied, which makes it a lot easier on the eyes, IMO.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4612630/

I did a brief search for scientific evidence on the topic, and came up with nothing in favor of your position.


If you look closely you’ll see it’s a serif font.


Not really. A handful of letters have serif-like features, but that's true of a lot of typefaces normally classified as "sans serif".

The reality, of course, is that there's a vast grey area between "serif typeface" and "sans serif typeface", and this one does sit in that grey area, albeit much closer to the "sans serif" side.


I looked closely, again, and see that it remains yet another sans font, of no greater interest than any of the last hundred.


Great, thanks for your input.



