How to Design Software Good (haiku-os.org)
512 points by davisr on Nov 8, 2018 | 178 comments

I saw a quick flash of unstyled content (in Firefox) where all the text was jammed into one paragraph with no breaks. And then the page rendered normally.

So I glanced at the source code, and it surprised me to see that it was not a typical HTML document, but instead an XML document with a stylesheet. It is very rare to see a non-HTML page on the web, whether it is the XHTML serialization of HTML, or fully custom XML.
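For those who haven't seen it before, this is done with an xml-stylesheet processing instruction in the document prolog; the browser fetches the XSLT and transforms the XML into HTML on the client. A minimal sketch (the filename and elements here are illustrative, not the article's actual markup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- The processing instruction below tells the browser to fetch the
     stylesheet and transform the document before rendering it. -->
<?xml-stylesheet type="text/xsl" href="docbook-to-html.xsl"?>
<article>
  <title>How to Design Software Good</title>
  <para>Body text is turned into HTML by the stylesheet.</para>
</article>
```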

Thanks for reporting this problem! I filed a Firefox bug:


This bug is actually a regression in Firefox 58 from the fix for a use-after-free bug (CVE-2018-5097):


I thought it was intentional and not a bug - thanks for reporting! It's true that I don't see flash-of-unstyled-content on HTML and XHTML pages, unless the CSS file is delayed by many seconds. I noticed another bug though - the Firefox inspector tool doesn't show style rules for the XML document.

Once upon a time, the whole internet was going to be built this way!

Client side XSLT has one terrific advantage: it allows splitting a page into dynamic and static parts. This allows you to cache almost-static-but-not-really pages very efficiently. E.g. a newspaper front page with a "logged in as…" section top right, but otherwise identical content. Or HN comment pages for logged-in users, which are currently uncached and can cause quite heavy load on the server for big threads, simply by virtue of people viewing them. I remember a situation where they explicitly asked: please log out when viewing this comment page; that shouldn't be necessary.

I still think client side XSLT is a good thing. I write simple documents and it is automatically converted client side. The world's best static site generator.

It gives me the best of both worlds: simple markup with complete control of output and CSS styling, and instant changes to all documents without a compilation step.

XSLT got an unfair shake. Yes, XML is ugly, and verbose, and painful to write code in, but the ideas were very good. Most users never understood the processing model or the declarative/functional nature of XSLT, and so tried to write procedural code with it, with predictably nasty outcomes (hint: <xsl:if> is almost always a code smell). But those who did get it, could use it to write some pretty elegant little programs.
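For anyone who never got that far: the idiomatic "push" style is to write one template per case and let the processor dispatch on the input, instead of testing conditions procedurally. A small sketch (element names are made up for illustration):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Instead of one template full of <xsl:if test="..."> branches,
       put the condition in the match pattern and let the processor
       pick the most specific matching template. -->
  <xsl:template match="song[@status = 'draft']">
    <p class="draft"><xsl:value-of select="title"/></p>
  </xsl:template>

  <xsl:template match="song">
    <p><xsl:value-of select="title"/></p>
  </xsl:template>

</xsl:stylesheet>
```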

I liked XSLT for XML to XML but using it for creating HTML was a nightmare.

I agree this is very unusual. It's also somewhat annoying, for example I can't just send this to my Kindle to read it, and I can imagine other tools failing on this as well.

Which is more the fault of the Kindle than of the author. It's DocBook, a standard that was invented for the purpose of writing books.

It's great to use DocBook for writing books! (Actually, I don't think this is true for most books, but you do you.) But I would argue that processors that are designed to take HTML as input are not at fault for failing to process non-HTML documents. (Many, but not all) web browsers can render and style XML like this, but I'd argue that this is -- somewhat ironically given the article's subject -- probably not the best approach to putting DocBook files on the web.

The only way to build a fully conformant browser is to support rendering HTML as SGML with a DTD. Toy/specialty renderers skip over that step since you can assume 99.9999% of web documents will be HTML. However, they would completely fall apart on standard SGML, styled XML, etc.

Writing good software is hard I guess :)

I just tried it on Pocket, and it keeps switching me to the web version... which is a bit frustrating, since the source code / DocBook [1] XML format looks ideal for describing a Pocket article.

Seems DocBook has been around since 1999, judging from when the first O'Reilly book was published about it [2].

I just flagged it with the Pocket app's "Report An Article Display Problem" feature, but it is such a niche use case that I can't imagine them spending time on it.

[1] https://en.wikipedia.org/wiki/DocBook

[2] https://www.oreilly.com/openbook/docbook/book/docbook.html

Kindle will probably take it away from you anyway.

ya and here I am trying to save this to Instapaper and wondering wtf is wrong with this guy.

Also cannot read it with Dark Reader. My eyes! -.-

This is XML, Dark Reader works for HTML only.

Wow, this is more like a master class on GUI application design than the usual guidelines you see. I don't see a byline but someone poured a lot of love and wisdom into this.

Seems the DocBook XML styler broke; there is a byline in the original source: https://github.com/haiku/haiku/blob/master/docs/interface_gu...

DarkWyrm hasn't been active on Haiku for some years now (though I think he has an occasionally-updated Twitter account?), so it's a community-maintained document at this point.

> Good Software Does Not Expose Its Implementation

Excellent goal, but way harder than people think. I would say it is not possible in the fullest sense.

I kind of feel that you just read the title here. It's not talking about the 'fullest':

  An example of this would be if a music composition
  program has an easily-reached maximum song size because
  the code monkey who wrote it used a 16-bit variable
  instead of a 32-bit one. *While there are sometimes
  limitations that cannot be overcome*, the actual code
  written and the architecture used when it was written
  should have as little effect as possible on what the user
  sees and works with when using your software.
(emphasis mine)

I think it might be better rephrased as:

1) Don't add unnecessary constraints (or: Don't prioritise efficiency/etc. over the interface).


2) A good interface abstracts away technical problems, rather than presenting them in a different form.

The most common example I see is the password requirement that it must be between 8 and 16 characters... The 16-character limit is the main issue there... You know it has to do with the way the data is stored.

Between 8 and 16 characters, but only 7 bit ASCII. But there is no error message if you use § in your password, it just kills the server.
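None of these limits are necessary once passwords are hashed before storage: a key-derivation function maps a password of any length and any Unicode content to a fixed-size value, so nothing about the storage format constrains the user. A minimal sketch in Python (parameters are illustrative; use a vetted configuration in production):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Encode to UTF-8 first: any character (even '§') is just bytes,
    # and PBKDF2 derives a fixed 32-byte key whatever the input length.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

salt = os.urandom(16)
short_hash = hash_password("hunter2", salt)
long_hash = hash_password("§" * 1000, salt)  # 1000-char non-ASCII password

assert len(short_hash) == len(long_hash) == 32
```

So a fixed-width database column for the hash never forces a length cap, or an ASCII-only rule, on the password itself.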

At my workplace the official password policy requires /exactly/ 8 character passwords...

It had better not...

And yet...

I'm sure we agree 99%.

I don't think the goal is correct. In fact, I think the implementation should reflect the user's mental model. The closer the correspondence between the implementation's model and the user model, the less likely are surprises, bugs and limitations.

There's also the aspects of form follows function, and mechanical sympathy. The use of a tool should strongly guide its shape and design, and similarly, a user should be able to develop an intuition about what the tool is good for and what it's not good for, so they can use it to maximum effectiveness, and not be frustrated when it can't cope with a problem it's not designed for.

Totally agree. From the angle of UX and usability, the more closely the interface reflects the internal model of the system, the more accurate the user's mental model of the software will be.

Indeed. Also referred to as "Leaky Abstractions"

Spolsky talked about it 16 years ago:


Indeed. Software is just like a person: what you see is an "interface," but what you are actually dealing with is the "implementation."

The most annoying (and visible) example of this is people naming their programs after the language/toolset used. As a user, I need not care that your code is written with Go/Python/KDE/Gnome, so why name your program with it?

Half of those don't quite belong in that list.

KDE/Gnome are much more than implementation details: writing a program for either means using a certain design language and set of UX rules, and those are user-visible.

Uh, it does matter though. If you're using KDE as a desktop environment, you might not want to pull in all the GTK dependencies just to run a single Gnome-based application when there's an equivalent one based on KDE. It's a common convention in Linux.

Definitely not possible. The implementation becomes the interface.

Classic example from when iOS/macOS had manual reference counting: fixing an over-retain bug in the framework causes the app with an over-release bug to crash.

Even more extreme example: https://twitter.com/gparker/status/1050903609142009856?s=20

Good software should expose its interface, not implementation.

The implementation will shine through sometimes, or will be (easily) detectable via "side-channel attacks", e.g. an interface feature can be slow due to its implementation.

OTOH this allows you to upgrade or entirely change the implementation without affecting the user interface, or at least while keeping it largely backwards-compatible.

Looks like most of this is derivative of Molich and Nielsen's heuristic evaluation of user interfaces guidelines from 1990. https://dl.acm.org/citation.cfm?id=77486

Sorry to be a pedant, but the grammar and capitalization in this guide do not instill confidence. "How to Design Software Good" <- design well? "How will Your Users Do Their Work?" <- Seriously?

Maybe it's meant to be tongue-in-cheek.

I thought it was a vague Zoolander reference.

This is also what I assumed, and I'm fairly sure it is.

Especially since the grammar in the article is relatively OK, whereas that title would be really terrible if it were not a pop-culture reference.

That's how I interpreted it. Sort of like when people say "happy Christmas" instead of "merry".

"Happy Christmas" isn't tongue in cheek. It's just literally what people in England actually say.

Everything people in England say is tongue in cheek though... but I get your point :)

Or possibly a non-native English speaker.

I came here to say this. It's not that I believe someone who is poor at grammar could not be an expert software architect, but every article makes a first impression with the title, and this one failed.

It's a joke, sheesh.

I had the same reaction; maybe the original writer is not a native English speaker. It's true that you then start reading the page with lowered expectations.

I feel these conventions, while written about HaikuOS, apply equally to all domains of software authorship.

Indeed, a lot of these conventions are part of most of the platforms' Human Interface Guidelines.

I want to mention the classic book "The Design of Everyday Things" which shares a lot of points with this post, though not about software at all.

I read this book a few years ago and was really disappointed by the content. Apparently the version I read was the "revised and expanded edition", so maybe it just got worse with the update...

OT, but you are not alone - I also thought it wasn't as good as claimed. I've seen it recommended many times on HN... I can't understand why. Now when I read the title, I immediately think of doors and handles.

> Haiku is an operating system which is known for its speed and being easy for anyone to use.

That's a fairly bold claim. Haiku is an OS that basically no one uses for any purpose. Why should I take any sort of design advice from them?

Being "known for its speed and ease of use" and "very low user base" are ... not mutually exclusive in the slightest, so I fail to see how your comment has relevance?

By your logic, since macOS has ~1/10 the install base of Windows, clearly this means that "very few" people use it and we should not take Apple's advice on anything design-related.

Haiku, unlike Mac OS, is barely known at all, even among highly technical and knowledgeable computer users. I'm a long-term computer enthusiast and a regular Hacker News reader of many years, and I'm only barely aware that something called Haiku exists (I'd forgotten until reading this article, then vaguely remembered hearing about it before), and I certainly don't know it for its speed and ease of use.

I'm old enough and enough of a computer enthusiast to remember when BeOS came onto the scene. Even most computer enthusiasts today probably haven't heard of it and of those who have I imagine only a subset have heard of Haiku.

Now maybe Haiku really is great, fast and easy to use but I think it's a stretch to claim it's "known for" those traits when (perhaps undeservedly) I think it's more accurate to say to a first approximation it's not known at all.

In this regard it's nothing like Mac OS which while much less popular than Window, even my parents have heard of.

>> Haiku is an OS that basically no one uses for any purpose.

> By your logic, since macOS has ~1/10 the install base of Windows,

Does not follow. MacOS use relative to Windows is not at all a fair comparison. Firstly, "everybody" uses Windows, so 1/10 of a huge number is still a huge number. Secondly, except for gaming and some particular areas like EDA, MacOS has a general software library with a broad base of technical and non-technical users throughout the world. Haiku cannot claim to have anything like that.

> MacOS has a general software library with a broad-base of technical and non-technical users throughout the world. Haiku OS cannot claim to have anything like that.

That was the second half of my comment. Please re-read the first half.

My point in that part, though, was that macOS has ~1/10 the users of Windows, but Windows' UI/UX was/is widely regarded as absolutely atrocious, and macOS' as pretty good to great, depending on who you ask. So "if it has fewer users, their thoughts on design are irrelevant" is ... not a good argument.

I reread it and it didn't make any difference. MacOS may have 1/10 the Windows base, but that is not having a "very low userbase", while I think it is quite reasonable to say that about Haiku. Reread what I said: the MacOS base is large, and it is a general base of highly technical and non-technical users alike who are not all hobbyists. This does not describe Haiku in the slightest.

There is a valid point to be made about verifying usability and the need for a large and diverse userbase.

How much do you want to bet that the Haiku userbase is overwhelmingly male? And this is not to make some concerned sexism argument, but to illustrate that the community is more representative of the IT crowd than of the general population.

> There is a valid point to be made about verifying usability and the need for a large and diverse userbase.

OK, that is indeed a valid point, but not one I got from the first comments.

I think Haiku has a diverse enough following that we are past most of these concerns, and a lot of our major paradigms which differ from other OSes were validated in the BeOS days. There are some areas where we still have work to do -- e.g. interfaces for the blind -- but overall we are doing quite well.

> How much you want to bet that the Haiku userbase is overwhelmingly male? And this is not to make some concerned sexism argument but to illustrate that the community is more representative of the IT crowd rather than the general population.

Oh, I know it's true of Haiku, but I can practically guarantee the same is true of Linux/BSD/etc. So then, can only "mainstream" operating systems have anything to say here?

Being fast and easy doesn't conflict with having a low user base. Being known for anything really mostly requires having some userbase.

Haha, no one using it means they can go ham trying good designs, as they are unbound by legacy constraints.

Actually, looks like Haiku is a compatible reimplementation of a "legacy" OS.

Depends what you mean by "compatible." We preserve binary compatibility on 32-bit x86 with BeOS, indeed. But it hasn't stopped us from re-arranging parts of the GUI, or adding new APIs, or experimenting with the kernel, or ... etc.

Slightly off topic, but has anyone tried Haiku?

We recently ported nbdkit to Haiku and it was a pleasant experience dealing with the developer. However I never actually used Haiku (just reviewed and applied patches).

I have. It was a joy to use; fast, cohesive but familiar due to partial POSIX compatibility, inspiration from other operating systems and a GNU command-line environment, with object-oriented design and unique concepts. One of which is its package manager, which installs into a filesystem image that is mounted read-only to ensure the files are incorruptible.

It's easy to grab an .iso and run it with VirtualBox. Every five years or so I do that with Haiku and ReactOS to see how they are progressing.

I absolutely love long articles like this. This is full of brilliant information. Fantastic.

I am not sure I agree with all of this. I worry that the article encourages you to dumb down software so that your users feel completely helpless. That is the opposite of what you should do. Your users should feel enabled and encouraged to explore and experiment. Sometimes they may encounter jargon, but if it ultimately helps them gain a better understanding should they choose to research it further, I think it's helpful.

There is a difference between saying "the file is corrupted, and there is nothing you can do" and "the file is corrupted, and you are too dumb to fix it so there is nothing you can do". Omitting details in the error forces users into the second class; even if most users are unwilling to google for a solution, you need not remove their ability to do so.

(And in fact, most of these cases are the developers projecting -- the problem is not that the user did something wrong, but that the programmer messed up the implementation. Why should a file be corrupted when Reed-Solomon codes exist? Very rarely have I seen a case where something was so damaged that it could not be recovered; rather, the engineers chose not to design in proper error recovery. Then they blame the user for a disk sector going bad when the engineers know full well that disk sectors can go bad.)

The other thing I think I agree with is the philosophy on designing complicated operations. Every operation a user performs should leave them with no question as to what the outcome will be when they click the Big Red Button. And they should be able to painlessly undo it. Imagine typing without a backspace key. Your words per minute would be near-zero because of the fear of making a mistake. Undo is essential.

The lack of transparency around state changes is something that makes users hate software. Who will this share button share with? What does "make my YouTube comment publicly accessible" actually mean? Users don't know, then they make the wrong decision based on incorrect information, and get mad -- and rightfully so. We blame the user, but it's our fault for laying hidden traps. You can implement social media without tricking users, but we choose not to.

I see a lot of people fall into the "perfect is the enemy of good" trap with regards to undo. They see some operation that creates an immutable effect on the universe, so the only remedy offered to the user is "email the dev team and we'll try to fix it." This is a pain for the dev team and a pain for the user.

Yes, there are certainly permanent changes that software can make. Send an email, launch the nukes. But those should not be barriers to letting users undo. If they want to undo something that sent an email, you'll just have to send another email apologizing; that is all the development team is going to do when they have to fix it manually. Nukes can be killed while they're in flight; better to have some fallout near the launch site than to start World War III. And yet we consistently fail to offer undo in most applications, so users learn to be very scared of doing anything, and that fear prevents them from accomplishing their goals. That's our fault, not the user's.

I have seen this all in action at work. We have something that is a "big red button" (we're an ISP, it's the "change service plan" button). People will not click this, and it's because they don't know what it will do. If we said "clicking this will cause the customer to pay $x a month, with the next invoice being sent on 12/01 (click here to see a preview), and their bandwidth profile will be set to XMbps in the next 30 seconds", then people would click the button, because they know it will solve their problem, rather than create a problem.

Anyway, you have to give your users understanding and the ability to safely experiment (which is how you gain understanding). That is what makes software a tool to enhance productivity, rather than an annoyance that makes the user do extra work. Just dumbing down your error messages isn't going to cut it.

I really love this philosophy. Thank you for taking the time to share.

Many developers can see the value in open source software. This philosophy claims that even more value (user freedom/liberty) can be added for groups outside developers by creating "open implementation software". The practicality and feasibility of this design philosophy are another story (and probably not worth arguing over without evidence). The idea of empowering users to understand the implementation at even a rudimentary level resonates with a lot of frustrations I've had as a babied user of some software. I will try to design with this philosophy on my next personal project and see how it works!

> We blame the user, but it's our fault for laying hidden traps. You can implement social media without tricking users, but we choose not to.

Aside from the fact that it's a minefield of dark patterns, how did social media get into this discussion?

Just as an example where the results of using software surprise users and upset them. I was specifically referring to when Google integrated G+ and YouTube; I mentioned it in passing a couple of sentences before.

> Aside from the fact that it's a minefield of dark patterns, how did social media get into this discussion?

"Aside from [the thing that makes it relevant to this discussion], how is [x] relevant to this discussion?".

The layout of casinos is also a minefield of dark patterns, but I'd have been surprised to see it crop up in the middle of that comment without explanation or follow-up.

A great talk on the topic - John Ousterhout: "A Philosophy of Software Design": https://youtu.be/bmSAYlu0NcY


> Good Software Uses Everyday Language

I'm curious why keyboard shortcuts are called accelerators.

Overall looks like a pretty good set of things to think about as a UX designer.

That's what keyboard shortcuts were called in some systems like BeOS, early Windows and probably others. I would guess it's because they provide faster access to a feature or function than going through menus.

> depend only on the formats installed by default in the system.

Fun fact: I actually "invented" this philosophy (drivers for every file format being installed, managed and exposed to applications and scripts by the OS) when I was a child. I was amazed to see that somebody actually went and implemented this idea (as far as I understand, it works approximately this way in Haiku).

> As an aside, MP3 is a format which requires licensing for decoding and encoding to be legal, so depending on the MP3 format is not a good idea unless your program deals specifically with it.

This is not true; the patent has expired AFAIK, and every desktop Linux distribution has been able to play it out of the box for quite a long time already (and even if it hadn't, people living in countries without software patents should be allowed to make use of their advantage). In fact MP3 is a great format to depend on, as it is the only lossy audio compression format that really plays everywhere. AAC is better and can be played on the majority of modern devices and OSes, but still not really everywhere; Opus is the best, but unfortunately there are still many devices and apps that don't support it.

> as far as I understand it works approximately this way in Haiku

Yep, it is indeed. You can of course write your own translators for custom file formats and install them along with your application; but then anything which already handled whatever representation they produced (e.g. PNG/JPG/etc. images -> BBitmap, Haiku's image buffer class) could then handle those filetypes also.

> This is not true, the patent has expired AFAIK

Only last year, and this document was mostly written a decade ago :)

Haiku can now play MP3 out of the box also (and since the servers and buildbots are mostly in Germany, we don't really pay much attention to U.S. patents anyway at this point.)

Some in Germany do. For example the owners of the aforementioned mp3 patent.


> Funny fact I've actually "invented" this philosophy (drivers for every file format being installed, managed and exposed to applications and scripts by an OS) when I was a child.

The Amiga got something like that with Workbench 2.0, actually:


This is also very cool:


> ARexx can easily communicate with third-party software that implements an "ARexx port". Any Amiga application or script can define a set of commands and functions for ARexx to address, thus making the capabilities of the software available to the scripts written in ARexx.

> ARexx can direct commands and functions to several applications from the same script, thus offering the opportunity to mix and match functions from the different programs. For example, an ARexx script could extract data from a database, insert the data into a spreadsheet to perform calculations on it, then insert tables and charts based on the results into a word processor document.

I'm not sure datatypes existed before 3.0?

But as a great demonstration of why this is fantastic: Amiga software written in the early 1990's [1] can open e.g. WebP files without modification, despite the format being released only in 2010.

[1] Actually even some older ones - since AmigaOS lets you patch OS functionality quite easily, there's a tool, DTConvert, that patches library calls to intercept attempts to read non-IFF image formats and passes them through datatypes, letting even older programs load files supported by datatypes...

This page is from 2006.

With a title like that, at least it wasn't an article about "How to Speak English Good"

The title is intentional; it's meant to poke fun and be lighthearted.

> Writing good software can be hard, but it is worth the time and effort.

Is it? I want this to be true because I want to write good software. But I've worked with some very senior developers who would disregard all software engineering and user experience concerns and just spew large quantities of low-quality code that made the managers just as happy, especially since it got done quickly. And the end-users in many niche industries are used to being shipped garbage, so they're just as happy too.

Very senior developers know things that are not intuitive, but very true. So I'll let you in on a few secrets.

"Good enough is always good enough". You probably want to do more than good enough. Those developers you talked about, probably knew the amount of quality that was actually needed, no more, no less.

Clean code doesn't mean "no bugs" and "good software":

- I've seen very cool, popular, money making games that had terrible code (all in 1 C file for example).

- I've seen a terrible mess of code that was practically bug-free (because it had been running for years in production). And I have seen that same code refactored into clean code by a junior (who thought he would 'fix things'), which introduced a lot of bugs (because that is just what happens when you write code, even clean code, even as a senior developer).

If you deliver a final product (such as a game, for example) that will not need extra features, you can hack stuff in at the end. If you hack code together at the start, you will shoot yourself in the foot by needing to wade through your mess all the time. But when you do this at the end of the project, you can save time by adding quick hacks. The technical debt you introduce never needs to be paid off. Of course you cannot do this with a long-running project, but with a deliverable product, go right ahead.

The main thing that you have to consider is this: EVERYTHING IS A TRADE-OFF. I see juniors make this mistake all the time, selecting the best programming language, the best VCS, the best whatever. You know, that 'best' thing also has drawbacks. So it's always about making trade-offs, selecting something despite the drawbacks, because of the benefits.

General rule: if something only has benefits, it's probably because you didn't figure out the drawbacks yet.

How to make the proper trade-offs? Experience ;).

Very true, I agree on all points.

I would add that the amount of effort put into designing and polishing a project that you can defend as correct is related to how much future development and use the project will see, and how critical the project is for the company.

So, if you want to work on a nice quality project, or want to spend a lot of your time on improvements to quality, find yourself a project that will either see a lot of future development, or a lot of future use, or is extremely critical to the business, or, best of all, one with all of these characteristics.

I see this mistake happen all the time: people complaining about quality, disparaging previous developers, or demanding effort to improve quality for its own sake ("because I just read this book and it can be done better") without reasonable arguments to support it from a business POV.

Also, I see people categorizing all project deficiencies as technical debt, where what they should do first is establish a reasonable expectation regarding quality; the debt is then only what is absolutely necessary to reach that reasonable expectation.

So for example, if you make a small app that will not reasonably see a lot of future development but is critical to work reliably in production (and it doesn't), your debt would most likely be centered around making the application reliable enough, and probably not around making the code nice (as long as the code isn't preventing you from making the application reliable).

> Clean code doesn't mean "no bugs" and "good software"

This gets missed too often: clean interfaces are important, whether it's a UI or an API. Clean code? It depends.

I once interviewed a dev who had all the right answers when it came to software development practices, TDD, etc, but when we looked at the product he was working on, the UX was a stuttering mess.

I don't care what your code looks like if you did not achieve a good end-product.

> the UX was a stuttering mess.

Is that a good metric to judge a developer, though? Good UX requires either formal training in the field, or a very gifted individual. In my opinion it has very little to do with the quality of either the code or the developer.

The UX design is not something I would judge someone on, but UX as a function of UI performance definitely is. If the UI is hitching, non-responsive, or getting into strange and incomprehensible states without giving me any indication how or why, I place the fault of that on the developer who implemented it.

It is a very good metric.

Baseline UX awareness is crucial for every developer on any team that sets out to build a great product, just as any effective designer has to be aware of the physical limitations of their design space (for example, speed of light, bandwidth).

If your UX designer's job is to point out that a responsive UI is indeed important for your product, he is not actually doing UX design and you are likely in deep trouble.

You're assuming all products are apps or websites or something with a GUI. A product could be a library, or an OS kernel, or a web API, or any number of things where knowledge of how to make a GUI with a good UX is completely irrelevant; or the programmer could be working on a very technical part. The engineers working on, say, Safari to optimize the JavaScript engine's performance similarly don't need any knowledge of what good UX means for a graphical interface.

Apple and Google shouldn't fire their best compiler/JIT writers just because they write lousy GUIs.

There's a UX for those things too. The users would be the consumers of the API.

Obviously, but the kind of UX discussed here is obviously that of graphical applications; nobody would say that a web API's or a compiler's UX is a "stuttering mess".

Basically everything on a computer that interfaces with humans is graphical. That doesn't mean it's in the designer's domain to fix.

A designer is not responsible for making sure that the implementation of their design runs stutter-free at 60fps in a browser -- that is an engineer's job. If the engineer who implements such a design is insensitive to these things and needs convincing that it's important that the implementation not stutter, jerk, or jank, you are never going to get good results.

Not necessarily, but good developers need to be thinking about their upstreams and downstreams. This is often uncomfortable because a developer personality often wants to know all the minute details and not have to worry about areas outside their core competence, but you can't build large scale software without at least a critical mass of developers thinking big picture. UX in particular depends on the task at hand, but if it's a consumer product and you're working fairly close to the surface (client or API tier) then yeah I'd expect some awareness of UX.

> I don't care what your code looks like if you did not achieve a good end-product.

I do care if your code is a mess if other engineers have to maintain it.

I'm not saying clean code is not desirable. Code which is easy to maintain and modify is absolutely better than messy code.

My point is that "good code" should be prioritized behind a good product. The purpose of code is to deliver a product, not to be well-factored or to embody certain principles. If your product is bad, your code is already categorically bad, no matter how beautiful it is.

But it's really difficult to make a good product and a good product experience on a messy code base. Quality code leads to a better product IMHO. (Of course, a good-quality code base can still suffer from badly designed UX.)

The devil is in the details. Sure, if your codebase is systemically messy, it's probably going to take forever to get anything done, and you will probably end up with bugs in the final product.

But what if you have a component here or there with well defined, well tested interfaces, but internally it's a bit of a mess? Or what if you hack together a proof-of-concept for a new feature to get it out the door and see what users do with it before investing a lot of time in making it perfect? Those kinds of things can help your organization move faster.

As with most things, clean vs messy code is not a binary. It's the sum of a lot of individual value judgements about when it's pragmatic to focus on code quality.

Without fully qualifying what “easier”, “good product”, and “good experience” mean, as well as delineating project constraints such as time, business value, and lifespan, it’s not possible to truly have this discussion.

Even so, I am inclined to argue that in most cases the exact opposite is true. That is, it is actually “easier” to create a “good product” with a messy codebase, because in most cases “easier” and “good product” are understood by primary stakeholders to mean business value at this moment.

I would love to do the following experiment:

- One coder writes his product with a high quality code base.

- Another coder sits next to 3 users of his product, and hacks stuff in as fast as possible.

I would bet on the 2nd one having the better user interface.

Agree with all points. Here's another - depending on the size of the shop and the population of users it will be delivered to, there are trade-offs that must be made as you go towards zero on both of those axes. For example, if you have two developers delivering in-house apps to 250 end-users in a SMB company, then the answer is never, "That will take two years and $2M." It is, "We can have that for you in a month," and then you deal later with things that necessarily got skipped. Sometimes it means refactoring and bug fixing, if the pain points to the business are large enough. Sometimes it may mean reaching into a database to fix data that was corrupted by a complex bug that only manifests itself once a year and isn't worth fixing outright (because there are other things you're working on now). That outlook is probably way too "Realpolitik" for the purists, but it is reality in small shops (which still do exist).

> General rule: if something only has benefits, it's probably because you didn't figure out the drawbacks yet.

And that, of course, has its exceptions too. Non-optimized stuff exists, and everything that isn't on the optimum line can be improved in some way without drawbacks. Besides, the optimum line is always moving.

So you optimized some code, cool!

- How certain are you that you didn't introduce a bug?

- Is it also running faster on this exotic platform that we still need to support?

- Did it involve API changes?

- Do we need to put this into production now, or wait for the next optimization to save some time?

- How much time did we spend on this? Was there something else that would have made a bigger impact?

In theory you can improve things without drawbacks, but in practice you always have to make the right trade-offs.

Just to point out, my comment wasn't about code optimization (but it can apply to that too).

>- I've seen very cool, popular, money making games that had terrible code (all in 1 C file for example).

Are you sure that wasn't an optimization to get "LTO" with a non-LTO linker?

No, in that case you would #include all your .c files in one .c file and compile that for release, but would otherwise work in the usual manner. CMake actually has builtin support for doing just that.

That would be a nicer way to do it, but it doesn't contradict what I said.

Any halfway decent developer that wants single file optimization but doesn't want to author the source code in one file will do this.

See "Unity build".
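As a sketch, the single-translation-unit pattern described above (the "unity build") might look like this; the file names here are hypothetical:

```c
/* release_unity.c -- hypothetical "unity build" translation unit.
 * During development each module stays in its own .c file and is
 * compiled separately; for release builds, this one file #includes
 * them all, so the compiler sees the whole program as a single
 * translation unit and can inline and optimize across module
 * boundaries even with a non-LTO linker.
 */
#include "renderer.c"
#include "physics.c"
#include "audio.c"
#include "main.c"
```

(The CMake support mentioned above is the `UNITY_BUILD` target property, which generates a file like this automatically.)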

I talked to the developer, and it was him who mentioned this, and he was not proud of that part ;).

There is a very real truth to what you say, and I don't believe it is bad.

We don't want gold-polished software. We want software that just does its job, makes the company money, pays the bills, and allows for future improvements where possible/necessary. Lean and mean, then refine (if the bills have been paid).

The distinction has to be made between good software and well-written software. It is very possible to have well-written bad software and awfully-written good software.

I think realising this is what makes a junior into a senior. Having the confidence to not always do what that little birdy in your head says :)

Customers buy software that solves problems, therefore good software solves problems. Customers buy good software, not well written software. These are not always mutually exclusive.

For a long time, I kept pushing deadlines because I wanted to write things the "right way". In the end, code that doesn't reach its destination on time or doesn't do what it should be doing is useless, no matter how well written. A good engineer knows when to obsess over code quality and when to look the other way.

Agree. It helps to cut your software-chops or steer yourself into an industry where software is an overhead, rather than the main product (said in the best possible manner).

For example, controller software attached to a turbine engine. What I said above is especially true in those scenarios, and it is actually quite foolish to do more than necessary in the name of software perfection. Plus, unnecessary complication hurts brains :)

It depends so much on what you're doing. I've been in situations where junior or mid level devs spend a ridiculous amount of time on details that add almost no business value even in the long term. The best developers from the employer pov understand, depending on the project, which level is good enough.

>And the end-users in many niche industries are used to being shipped garbage, so they're just as happy too.

I'm one of the end-users of many terrible B2B products. I'm not happy with them, it's just that the department that buys the software from IBM & Co. doesn't use it. They don't care. But it costs a lot of money in the long run because of bugs and slowness.

Pretty much my experience too. I don't think it's ever been worth the time or effort to spend time on writing quality software, the people that smash it out quickly are the ones rewarded.

I'm starting to feel like the whole movement for 'software craftsmanship' is making a lot of developers miserable. It's maybe better for them to realise they are not artisans, they are bricklayers. To focus that creative energy on their own projects rather than getting burnt out fighting their managers over writing 'good software'.

I don't think you can turn it off like that. There are no Michelangelos spending their day slapping emulsion on drywall.

Opportunity is not evenly distributed.

>I'm starting to feel like the whole movement for 'software craftsmanship' is making a lot of developers miserable.

Not as miserable as working on some 7-year-old system hacked together without any craftsmanship. Plus, if it's a long-term project, the maintenance costs must spiral.

I think the point is that it's better to work on some 7 year old system hacked together without any craftsmanship with the belief that craftsmanship is nonsense and spaghetti code is a legitimate design pattern, than to work on it knowing there is a better way to design software.

The end users aren't happy, they mostly hate the crap that is foisted on them.

Very much this. People have incredibly low expectations of software, so of course the software industry feels no need to push for higher quality.

Generally business considerations >> user happiness.

Software is primarily shipped to make money. If a shitshow product - it's not hard to name a few - makes money, there is no business feedback loop which will raise quality.

Senior devs tend to justify this on the basis of experience, and management justifies it on the basis of shareholder value.

In reality it's just cheap, shoddy, dispiriting, and cynical. In a mature industry developers would be able to take pride in making end users happy [1], as opposed to making them frustrated and enraged.

[1] Not the same goal as impressing developer peers.

They hate it, but they pay for it.

Perhaps you could make something they love instead, but if the production then costs 10x more, and they are only willing to pay 2x more, it does not make sense economically.

I'm reminded of, of all things, WW2 tanks.

The Russians built a huge volume of very low-quality tanks, expected to go for only a few hundred miles, and to win through superior numbers. Their crews were poorly trained. The designers suggested easy-win improvements, but the brass blocked the changes as it would have impacted production rates.

The Germans built the best-engineered tank the world had ever seen, but its design tradeoffs made it ineffective, and it arrived too late in the war.

Now for a different context: consider Russian cars vs German cars.

Also Russian fighter jets vs American/NATO ones, mostly in the 70s-90s timeframe, for example. I love the exercise of comparing the two design styles: completely different approaches to the same problem, both very effective and elegant. Just looking at the surface of it (the cockpits) gives one a very good impression of the differences.

There are really a lot of Russian vs American style examples.

Also technical (engineering) books, for example. Generally, Russian books were a lot more condensed and raw: pure numbers, formulae, etc. The American school was a lot more colorful: pictures, graphics, plots, diagrams. There were people who liked the Russian style better; however, I think the "prettier" one has prevailed.

Interesting subject I think.

German cars still suffer from some of the same poor engineering instincts that handicapped their tanks and aircraft in WW2: an emphasis on complex designs, unproven innovations, and theoretical or lab-measured performance rather than simplicity and empirically tested performance and reliability.

Worse is better.

"worse is better" is worse

Nah they're not happy, they're just trained to think it's how things have to be.

You know that and I know that, but often it's easier said than done to convince them of it. I've had users get indignant because I dared criticise their favourite enterprise bloatware like I insulted their mothers. Even as they themselves were struggling with some misfeature keeping them from doing their actual work.

But it does mean that it's easy to exceed their expectations, for example by making the browser back button take the user back to the previous view instead of crashing the entire user session. I tend to try to justify spending time solving technical debt by making development of new dazzling features faster and cheaper, but as long as the users (and more importantly their managers) are sufficiently dazzled if you only throw them a pittance once in a while, that becomes hard to sell.

I think I'm getting to that point as well. I've written some very tidy code back when (~5-10 years ago), but nowadays I'm more and more thinking it doesn't really matter - more than one of the projects I've spent months and years of my life on have already been replaced. Not because it wasn't good, just because an alternative was better for that company (like when they replaced some big java enterprise CMS with just some Wordpress instances recently).

Have you ever worked maintenance programming on some legacy software?

Disclosure: small-ish company (~100 employees) in France

You know, I kind of went through the same questioning: at my current company we have an employee who has his niche/historical role on one of the key pieces of infrastructure of our product.

The guy works from his home at whatever hours, and commits 300+ lines per commit (90% of which is unrelated to the commit name, just commenting things out or uncommenting others). The code is a spaghetti of if-else-if-else, and it has monstrous "client specific" code (like: if the current client is this guy, then do that, otherwise for all other clients do this).

There's even a load of "if the program runs in test, answer what the test expects, otherwise in prod send something else".

Every time I look at the commit tree (a balance of wanting to have fun and wanting to see what is going on), I'm in horror. The guy rewrote java.lang.string, java.lang.date, and some other base classes of the language to incorporate his own version of date handling.

Fun fact: at one point his date parsing code "wasn't happy with October"... why? Because October contains the letter 'T', and he botched his ISO date parsing (format: YYYY-MM-DDTHH:mm:ss) by checking for the presence of the letter T.

I have many, many stories of him committing some bad code, only to spend 12h in a row frantically trying to correct the bug by adding ifs and elses.

The last time I looked at his code, he was rewriting a Huffman compression algorithm by himself (NIH syndrome) to compress his on-disk data, complete with loads of code to handle per-bit operations that he wrote by himself instead of relying on a battle-proven library.

Management-wise, they accept his behavior because this guy, working from his own home, can often work at scandalous hours and be on the bridge when he pushes a botched version to production at 2AM. At the time of this comment, his last 4 commits on master are at the following times: 08:57PM, 10:40PM, 12:11PM, 01:02AM.

I know I'm venting hard, but this guy is probably the best-paid engineer in the company, he is a huge liability IMO, and he is present at all-hands meetings maybe once a month. So yeah, why bother writing great code and making sure that your tests and CI are green, when some other guy with a bit more history in the company can get away with doing shitty work and having poor work-life balance, as long as he shows "commitment" when he botches his software releases?

This is really a life-long question of mine: should I care about things, or should I focus on politics and disregard others? Aren't those at the top of the pyramid the more "sharky" people, who climbed the ladder not by pure skill but by showing the appropriate (bad) behavior when needed, correctly compartmentalized their own questions about ethics, and got away with it?

I replaced that guy at my company (similar size), and a year in I'm barely scratching the surface of the horror; a lot of the time I end up putting a safe interface in front of it and a big note saying "fix this at some point".

It's not even that the code is bad (and it is); it's that he couldn't think straight: the business logic is all insane as well.

Still, the boss hired me to sort it out, and the time scale is measured in years (a week is a mix of new stuff and fixing old stuff).

A typical performance improvement is two orders of magnitude.

When I started it took 70s to search for a quote.

Now it takes 200ms, and mine includes the line details on the quote in an easy-to-eyeball format.

My boss seems happy at least.

It really depends on what type of business you're in for the quality to matter. If you were, say, writing a real-time data infrastructure platform for a whole company, or something else related to distributed systems, then the code needs to be good. If you are working on CRUD line-of-business software with some complexity, then it can be garbage. In general, the closer the code approaches actual computer science problems (and this is out of necessity), the greater the need for good code. Few businesses have this requirement.

It's funny how, when you introduce a bug and then stay awake at 1AM to fix it in production, the managers will mostly remember the latter.

Now that I'm thinking about it... do the managers actually know the bug was caused by his "programming style"? Because, if they are under the impression that he is fixing other people's mistakes... then it would make sense to pay him better than the rest.

> This is really some life-long questioning of mine: should I care about things or should I focus on politics

I guess you need both; and the exact proportion depends on the company.

> Aren't those on top of the pyramid the more "sharky" people that climbed the ladder not by pure skill ...

This already assumes that those on the top have climbed the ladder. As opposed to already starting somewhere higher than you will ever get.

I've worked with people like that, not exactly on my team. Managers loved him because he was always "committed", working late hours to fix bugs his own botched code created.

I bet the cost/benefit for the company is positive, and as long as the clients are happy and paying, how this guy works is irrelevant. He was rewriting a Huffman compression algorithm by himself because he's bored and wants to learn and experiment. I bet it's not a good life he's living. There's a great disorder in working from your home at odd hours; it's not healthy. He would be happier working normal, fixed hours, and with a manager to guide him.

> There's a great disorder in working from your home at odd hours, it's not healthy

I'd be curious in hearing you elaborate on this. I've been in a WFH setup like this for about 4 months, and while I enjoy it right now I do wonder if it'll be something I might regret after 2-3 years on this schedule.

My primary reasons for working like this are that:

1. I hate being in the corporate office environment; it feels very artificial and constraining.

2. I tend to have my most productive coding sessions very late at night anyway.

I did the WFH for a year, and while the freedom is enjoyable it is very easy to slip into working very late going into 'hermit mode'. You have to be very diligent in training yourself to be productive during normal hours so you can be social in the evening.

My new situation is categorized as flexible office hours. I go into the office 2-3 days a week, enjoy good rapport with the people I work with, then WFH on Mondays and Fridays (typically). I find this to be a good balance - I have enough structure Tuesday to Thursday that I can continue this trend Monday and Friday.

If you are valuable enough, or underpaid, then make everyone around you suffer when they force you to work on these tasks. People will learn, and you'll have an easier time.


I would pay for a book with more anecdotes of kafkaesque, nightmarish visions of horrendous programs.

I could spend a year writing blog posts on the one I inherited :)

>just spew large quantities of low-quality code that made the managers just as happy

Honestly, this is my biggest disappointment as I get better as an engineer. No one except me seems to care about simplicity, readability, or maintainability (OK, a few of my colleagues do).

Seems like you kind of answered your own question. :)

if you care about people then yes. if you don't then no.

Is the grammatical error in the title intentional?

I blame English. That language is always up to no well: https://www.smbc-comics.com/comic/noun

... and want to learn how to do other stuff good too!

I wasn't going to be "that guy," but it's "well" to see the question.

I believe it's a reference to https://www.youtube.com/watch?v=NQ-8IuUkJJc


Love the idea of starting with “what will the user be doing?” and prioritizing based on that question.

...You can also make Tracker show a window for a particular folder in order to show the user where a particular file has been stored and give him access to it directly....

...By removing unneeded items from the file navigation window, you are reducing the number of choices the user must pick from and also preventing him from opening the wrong kinds of files...

...Good feedback just means making sure that the user knows what your program is doing at any given time, especially if the program is busy doing something which makes him wait...

...An even better solution would be to select a good default choice for the user and give him the option to change it...

... In short, a user will learn only what he must in order to do his job...

Ctrl-F "her", "she": 0 matches

Another tip to design good software: don't assume all your users are male :)

This is an okay stab at a HIG, although it is severely lacking in examples and screenshots, for a document that deals with graphical user interfaces.

Good past attempts that are worth the read if you're into this kind of stuff include the original Macintosh HIG, as well as the NextStep User Interface Guidelines. I also quite liked the HyperCard Stack Design Guidelines. Gnome once had an okay-ish HIG (which I contributed to); I'm not sure how well it's been kept up to date through the years. KDE, despite being better on the eye-candy front compared to Gnome, was always lagging behind in terms of actual concrete guidelines for developers.

This document was mostly written almost a decade ago, and by someone who I think went to a prestigious institution. So the "rules of style" back then aren't what they are today.

You're right, we should update it, though.

> You're right, we should update it, though.

No, you really shouldn't; it's not a real problem. The only people who keep tallies of gender pronouns in documents are internet trolls; don't indulge the trolls. Anyway, switching up pronouns every other time breaks continuity and makes documents harder and more painful to read.

No need to keep a tally; in 2018, any document that exclusively uses masculine pronouns for genderless subjects reads as outdated.

When writing on technical topics, the chief concern is to be clear, accurate, and straightforward. Anything that might distract your reader is undesirable. By referring to “the user” as “he”, you’re already distracting roughly 50% of your potential readership (if you’re writing the documentation for a Do It Yourself Vasectomy Kit, perhaps you get a pass).

Switching up pronouns every other time is a strategy, but it is indeed one that can hurt the readability of a document. This article covers various strategies nicely:


(Note that this document was written over 20 years ago, so the “rules of style” when this document was written were roughly the same as they are now)

> any document that exclusively uses masculine pronouns for genderless subjects reads as outdated.

Correct English is never out of style.

Different languages do different things, and that's okay. Some languages apply grammatical gender to more than just sex, e.g. tools, or fruit. That's okay. English uses grammatical gender to distinguish between male, female & neuter objects, and defaults to male when referring to unknown males-or-females. That's okay too. It's all part of the rich panoply of life.

> By referring to “the user” as “he”, you’re already distracting roughly 50% of your potential readership

I really doubt every woman is distracted by proper grammar. And of course by not using correct English, you distract people who use & prefer it.

Except there is no such thing as “Correct English”.

If we were speaking about, e.g., contemporary French as spoken in France, you would have a point: there is a governing body decreeing what is correct French and what isn't.

That is not the case for English.

But even that is beside the point, because “they” as a genderless pronoun has been used for over half a millennium. (William Caxton, 1460: “Each of them should...make themself ready”)

> By referring to “the user” as “he”, you’re already distracting roughly 50% of your potential readership (if you’re writing the documentation for a Do It Yourself Vasectomy Kit, perhaps you get a pass).

Do you have any research/data to back this up? I'm a male and I am not distracted when people use "she" (or "he" for that matter). I have never heard anyone in person complain about not being able to focus on an article because they used "he" instead of "she".

> Anything that might distract your reader is undesirable.

Concision is of utmost importance in any language. Many of the examples in your link sacrifice readability and concision for the purpose of avoiding pronouns. What really distracts me is when people use "(s)he", "s/he", "he or she" (oh no -- which one should be first?), or periodically switch between "he" and "she" absolutely everywhere -- all for the purpose of filling a contrived equality quota.

We need an "it" that can be applied to a person.

English has had one for a long time, “they”.

Elitist prescriptivists who wanted English to be Latin because of bizarre linguistic fashion tried to purge it, but never did so from general use; and prescriptivism, especially on points that have never aligned with common usage, has fallen out of favor, so there is little reason not to use it now except adherence to a particularly archaic elitism.

It’s useful to have a singular and a plural form; required, probably not.

There is a user, they are using the system, and that’s about as much as we know about them.

"He" clocks in at two characters, it's a very concise way of referring to an arbitrary person. "The user" takes longer to write and longer to say. Forcing the reader to juggle around "or" conditions (i.e. with "he or she") is even worse and I consider it bad practice. Your readers will thank you for concreteness and brevity.

You’ll notice that my previous post uses “they” as shorthand for “the user”. If your last remaining argument is that “he” is 2 letters and “they” is 4 and thus it’s unequivocally better to always use “he”, it’s not a very good one.

> you’re already distracting roughly 50% of your potential readership

If by 50% you mean 100% of the women having trouble making it through an article with multiple instances of "he" not immediately followed by "or she", that's a pretty bold claim.

I suspect the key factor in people's inability to focus on the central point of the article lies in their political persuasion rather than the shape of their genitalia.

50% refers to the mix of male and female people who get distracted by this; in my experience as a technical writer who would always use “he” when I was less experienced, perhaps 70% of women and 30% of men?

It was certainly one aspect of my writing I got a lot of feedback on. Changing that habit was fairly easy, and has entirely solved the problem.

I’m not sure what’s political about pointing out that “the user” of eg a computer system is not always a “he”, but if you want to bring the conversation there, have fun. If anything, it’s starting to feel to me like people with no interest in technical writing are coming out of the woodwork and making this political!

> 50% refers to the mix of male and female people who get distracted by this; in my experience as a technical writer who would always use “he” when I was less experienced, perhaps 70% of women and 30% of men?

But who were these people? I don't doubt that there exists a population such that half of it complains about the lack of gender-neutral language in technical writing, but it does not necessarily follow that the same half should also be the arbiters of technical writing style.

> It was certainly one aspect of my writing I got a lot of feedback on. Changing that habit was fairly easy, and has entirely solved the problem.

> I’m not sure what’s political about pointing out that “the user” of eg a computer system is not always a “he”, but if you want to bring the conversation there, have fun.

Back when the convention in English (and various other languages) was that male pronouns were used in mixed or unspecified situations, there was no need to point this out; it was given by the convention. Pronouns with masculine grammatical gender being exclusively applicable to men is a recent innovation, without which the problem you are trying to solve wouldn't even exist.

These shifts in meaning can of course happen organically, but I find it very curious that this particular one happened at the same time as the rise of feminism in the 20th century. A more extreme example of the same politicised language is using "womyn" instead of "woman" in order to prevent it from ending in "man".

The only people who consider switching between "he" and "she" in different examples to be a problem are internet trolls. Obviously, that's not really true, but neither is your snark.

Look: gender-neutral language in technical writing is not that difficult. Most of the time you can rewrite sentences to simply avoid gendered pronouns, e.g., "the successful applicant will use his skills to contribute to the platform team" becomes "the successful applicant's skills will contribute to the platform team."

As for alternating between pronouns, do you think readers are bothered by switching between male and female names in examples? If you realize that the first example used Bob, the second used Agatha, and the third used William, do you suddenly leap up, scream INTERNET TROLLS GOT TO THEM!, and punch your monitor? No, probably not. So do you, if you see pronouns switching? Really? Again: I'm betting probably not.

I can tell you that as someone who's done technical writing for a half-dozen companies over the last decade or so that I have never heard of readers complaining about alternating pronouns. I have heard of them complaining about documentation that is exclusively male, though, because it turns out that's something some -- not all, maybe not most, but definitely some -- readers will notice and be a little nonplussed by.

> This document was mostly written almost a decade ago, and by someone who I think went to a prestigious institution. So the "rules of style" back then aren't what they are today.

What does the second half of the first sentence have to do with the second sentence?

That prestigious institutions up until the past few years used male pronouns to refer to generic individuals as being "proper English", which this document ostensibly is, and this explains the document's use of them?

Yes, replacing "he/she" with "they" used to be discouraged because it is plural. Since then, gender issues have come to the forefront and this drawback has been deemed less important. "They" is now recommended by most folks, but that wasn't the case 12 years ago.

Times change.
