I suggest you read "The Field Guide to Understanding 'Human Error'". You'd learn a lot.

https://www.amazon.com/Field-Guide-Understanding-Human-Error...

My view is that expecting humans to stop making mistakes is much less effective than fixing the systems that amplify those mistakes into large, irreversible impacts.




This, 1000x this.

It's easy to be an armchair engineer and say "well, obviously, don't make this mistake in the future".

As a species, we will always take shortcuts. If a mental pathway doesn't need to be exercised to do something, it won't be. If we see the same popup a hundred times, we're going to ignore the contents by the 100th time because we're used to it.

But it shouldn't be possible to make this mistake if this were designed properly, in a way that makes it clear that what you're about to do is actually dangerous, and not "dangerous" like the other dozen times you've seen the same message.


Yeah. A warning that can be pattern-matched by the human brain, one that accompanies a routine action, will result in autopilot behavior. Flipping the visibility of an early repo is common, I assume. Flipping the visibility of a 100+ star repo is likely much less common. So having a separate warning in that case might help you snap out of autopilot due to novelty.

A common knee-jerk reaction to "users aren't taking our warnings seriously" is making the warning look scarier and involve more (mechanical) steps – such as two confirmations instead of one. Well, that's still pattern-matchable and subject to desensitization.

Like the article states, it's more effective to instead show what you're about to remove, i.e. a summary of the content as opposed to a bare identifier. Then you get both novelty and proportional scariness, depending on how "big" it is.

https://en.m.wikipedia.org/wiki/Alarm_fatigue


Pretty much agree— Universal human fallibility is not an avoidable moral failure, but looking down on those who've made, admitted, and taken responsibility for simple mistakes sure is. Unrealistic assumptions about people's attention, processing capability, and physical capabilities may be the greatest hurdle when trying to actually solve people's problems with software. It can even be a safety issue— consider the Therac-25 incidents.

That said, superlatives like "impossible" aren't realistic here. As long as an action is possible, it is also possible to fuck it up. The challenge screen does require typing in the repo name, which is a pretty solid attention-getter. It would be nice to list the important side effects in big angry letters, but I'll bet the list of side effects that are important to some people would be pointlessly long. Maybe repos with extremes in certain metrics could require a sanity check from support first— but I have had some preeeeettty protracted response times from GitHub support.


> I'll bet the list of side effects that are important to some people would be pointlessly long.

There are many good ways to design this that wouldn't result in pointlessly long lists (which is another problem). You don't need to expose everything that will happen when you do an action, just the "most important".

With unlimited budget, you could compare a bunch of metrics to a baseline to figure out what might be important. Have a lot of stars, way more than average? That would be nice to highlight. Only a normal number of forks? Probably less important to highlight that. At the end of the day, Github repos aren't extremely unique. Sure, there are probably extremely rare edge cases, but for most repos, comparing against a relatively small number of metrics should be enough to paint a picture of what might get broken. Everything else can be hidden behind a "... and X more" if people are curious.

This both highlights the most important things that might happen and gives people an action to perform before confirming, hopefully helping to break the autopilot at the same time.
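As a rough illustration, something along these lines could work (a toy sketch with made-up baselines, names and thresholds, not anything GitHub actually does):

    # Toy sketch: pick which repo metrics to call out in a visibility-change
    # warning by comparing them to a rough baseline (numbers are invented).
    BASELINES = {"stars": 30, "forks": 10, "watchers": 5, "open_issues": 20}

    def metrics_to_highlight(repo_metrics, factor=10, limit=3):
        # Keep only metrics that are wildly above their baseline.
        unusual = [(name, value) for name, value in repo_metrics.items()
                   if value >= factor * BASELINES.get(name, float("inf"))]
        # Most extreme (relative to baseline) first; show only a handful.
        unusual.sort(key=lambda nv: nv[1] / BASELINES[nv[0]], reverse=True)
        return unusual[:limit], max(0, len(unusual) - limit)

    shown, hidden = metrics_to_highlight(
        {"stars": 54000, "forks": 3500, "watchers": 900, "open_issues": 150})
    for name, value in shown:
        print(f"This will permanently remove all {value:,} {name}.")
    if hidden:
        print(f"... and {hidden} more")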


Warnings are good but not perfect. At the end of the day, you can’t help everyone with just warnings. You could make the UI bright flashing red, with a skull and crossbones, have it say “Never invoke this: it will delete all your data and kill your dog. Please type ‘I_WANT_CANCER’ to continue.” And users will do it. It can’t be helped. The only (admittedly hard) way is UNDO.


Undo. If it's possible, you want undo. If it's difficult, you still want undo. Only when you've satisfied yourself that it's outright impossible should you reluctantly design the UX with the "Are you sure?" prompt instead.

Also, if you must have I_WANT_CANCER type prompts, try to make them involve expressing what it is that the human is agreeing to, because sometimes it's only at that point the human realises their plan was very stupid. GitHub does that here (you have to type the name of the repo: to delete main_project you'll need to type main_project) but many things do not.

This is why I like Git's force-with-lease. When I try to forcibly overwrite a branch, because I know Jim's push was wrong and must be undone, force-with-lease makes me express what I expect to be overwriting, e.g. "--force-with-lease=main:jim-daft-change". And when what I'm actually about to overwrite is Lisa's urgent bug fix, not Jim's erroneous change, the push fails; to get it to apply I'd need to say "--force-with-lease=main:lisa-bug-fix", and that's one last opportunity to say "Wait, what? That's not what I want to do".
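For reference, the general shape of the flag (branch and commit names here are just placeholders):

    # Refuse the push unless origin/main is still at the commit we expect to throw away
    git push --force-with-lease=main:7f3a2c1 origin main

    # Without an expected value, Git checks against your local remote-tracking ref instead
    git push --force-with-lease=main origin main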


It's a real strength of Gmail and some other Google interfaces — undo instead of warning dialog.


I think from a UX perspective it is unbelievably more convenient as well. Plain popup warnings with "press Ok if you mean it" are bad enough, but recently the new, even worse "type delete if you want to delete, then press Ok" popup has been spreading... If I want to delete more than a couple of things I get incredibly irked.


There are a couple of issues with UNDO too:

- (not intrinsic) many interfaces only give you a few seconds to undo your action, which wouldn't help in this case

- (intrinsic) some events can be layered on top of your change, meaning that you have to either block the undo or reverse all the subsequent events too. This wouldn't be the case for this HTTPie incident, but does prevent UNDO functionality from being a silver bullet.


You assume the metrics you have are sufficient to determine intent and importance, that your users' needs are uniform enough for those determinations to apply, and you discount the possibility of making things worse by communicating the wrong thing. These assumptions perennially land developers and inexperienced designers in hot water when designing things like this.

> "most important"

Yes— the problem is determining the most important metrics for the human who's reading the dialog. Sorting out the second part is a lot more difficult than the first.

> With unlimited budget

This framing is only useful to determine what behaviors would be great, not feasibility. We seem to both agree on the ideal behavior but not on the feasibility.

> you could compare a bunch of metrics to a baseline to figure out what might be important.

Important to whom?

> Have a lot of stars, way more than the average? That would be nice to highlight. Only have a normal number of forks? Probably less important to highlight that.

Average for what? Just average for everybody? For the deprecated SlackTextViewController which has vastly higher numbers overall than the unmeasurably more important net-snmp? What about for repos that serve as core functionality to frameworks that only care about forks? XML Schemas that people refer to but never interact with? What about for a company that doesn't care about stars because they're incredibly popular outside of that repo but it's critical for them to monitor the small number of people who've forked their repo?

> Sure there's probably extremely rare edge cases, but for most repos, comparing against a relatively small number of metrics should get enough to paint a picture of what might get broken.

The repo about which you said this mistake should not have been possible was an extremely rare edge case. The article went to great lengths to explore just how fantastically different this repo was. And if you're talking about painting pictures you're using the wrong medium. More information can make the message more pointed but only if it's brief and extremely relevant. If it's not, it's going to reduce the focus on the danger message.

> Everything else can be hidden behind a "... and X more" if people are curious.

Aaaand we come full circle. The whole point of this is to put the metrics most important to that user right in front of their face in bold type because that's going to stop autopilot mistakes. If it's hidden, it failed. Beyond that, the order in which you list things connotes importance— closer to the beginning means more important. Being hidden connotes far lesser importance. If you put the wrong things at the top and the right things behind a click then the autopilot user subconsciously assumes they'd care about the hidden things even less. You're increasing the risk of failure rather than decreasing it. If you're going to highlight anything, you better highlight the right thing.

The existing dialog is pretty clear that you should investigate what the action does before you actually execute it, without giving people false clues, and that may be the best compromise. You don't convey false impressions but you still convey danger.

> At the end of the day, Github repos aren't extremely unique.

From a quantitative perspective they're not incredibly diverse, even if they're probably more diverse than you imagine. What are immeasurably diverse are the roles those software packages serve, the people who interact with them, and what those people need to accomplish their goals. Assuming you can boil that down to a heuristic using available data is something that needs to be concretely proven— not assumed.


Anyone wanting to dig deeper into the design side of this should check out Triadic Semiotics— it's the philosophy of signs.


It sounds like you've never been part of an engineering organization (I know you have since you bring up unlimited budget ;)

It is obviously possible to design a much better, contextual warning with (more) relevant details highlighted. How much time is usually set aside for those?

And yet, it's never going to stop mistakes.

And how many other places like these are there in a large service like GitHub?

A much better way to fix this is to expect mistakes and make it possible to revert instead. Soft deletes can make that trivial to implement for the most part too (except in GDPR-like cases where you've got to really remove stuff).
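As a rough sketch of what that could look like (table and column names are made up, nothing to do with GitHub's actual schema), the "delete" becomes a flag and the revert a single UPDATE:

    # Hypothetical soft-delete sketch: rows are flagged rather than removed,
    # so restoring is trivial. Schema and names are invented for illustration.
    import sqlite3, time

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE stars (user_id INT, repo_id INT, deleted_at REAL)")
    db.executemany("INSERT INTO stars VALUES (?, ?, NULL)", [(1, 42), (2, 42)])

    def soft_delete_stars(repo_id):
        db.execute("UPDATE stars SET deleted_at = ? WHERE repo_id = ? AND deleted_at IS NULL",
                   (time.time(), repo_id))

    def restore_stars(repo_id):
        # The "undo": clear the flag instead of recreating data out of thin air.
        db.execute("UPDATE stars SET deleted_at = NULL WHERE repo_id = ?", (repo_id,))

    soft_delete_stars(42)
    restore_stars(42)
    print(db.execute("SELECT COUNT(*) FROM stars WHERE deleted_at IS NULL").fetchone()[0])  # 2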


Even for GDPR stuff you can delay the hard delete for a day or so, which should give the user some time to notice and correct a mistake.


> The challenge screen does require typing in the repo name which is a pretty solid attention-getter.

This is not quite correct. It requires inputting the repo name. It's attention-getting the first time. But recently, with some colleagues as part of a handoff, we went through 50+ repos and archived the irrelevant ones. Very quickly the person doing the actual archiving went from typing to just copy-pasting the repo names. For programmers, copy-pasting from one spot to another can become very mechanical.

So I think this UI choice works well for something that is done very occasionally. But it's no better if people are in the habit, whereas showing novel information about the cost of the choice would be helpful in breaking the routine. They could then also make it so that people had to type something only when the operation has a high cost, and what they typed could be something not directly copy-pasteable.


That's fine. The goals of designing a sanity check prompt to snap people out of autopilot are entirely different from goals you might have in a security system— their being defeatable isn't even a flaw. Jersey barriers vs rumble strips.

If someone deliberately doing the same thing over and over can avoid the manual effort, that's fine. Their frustration would likely make them think about it less. You don't, and shouldn't, get prompted to do the same thing using the API, either.


That’s not quite it. The solution isn’t to stop them from doing it or to warn them more aggressively; it’s to make it genuinely less dangerous. For example, make the stars all come back when you reverse the switch.

Of course that’s a lot harder to do, since it’s a lot more than a UX change. So I’m not really slagging GitHub here. But it’s the right way to attack this problem.


True "Undo" is, IMO, a pipe dream for all but brand new projects. That will pretty much require you to rearchitect a decent chunk of anything that wasn't built with it explicitly in mind.

In this case, I don't quite understand why stars and forks need to be deleted when switching visibility, since it seems like the two shouldn't be linked. I can star and fork my private repos just fine, so it seems like it was just easier to clear it out than to deal with RBAC at that level.


I don't think any fork was deleted. 3.5k forks as of now vs 3.4k forks in January: https://web.archive.org/web/20220129022012/https://github.co...

Many people won't notice some of their stars having disappeared, but many would be surprised to see their forks deleted without their consent, especially those that are not merely a mirror. People can have diverged forks after adding substantial changes without feeding them back to the original for a variety of reasons.


Then don't "do" the delete when the users "do" the delete.

Pretend to do it, actually do it 24 hours later, and if they "undo" the delete within that time frame, just remove it from the delete queue.
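Something like this toy sketch (made-up names, obviously not GitHub's internals) is all the queue really needs to be:

    # Toy sketch of a delayed hard delete with a 24-hour undo window.
    import time

    UNDO_WINDOW = 24 * 60 * 60   # seconds
    pending_deletes = {}         # repo -> time the delete was requested

    def request_delete(repo):
        pending_deletes[repo] = time.time()   # pretend it's gone; touch no data yet

    def undo_delete(repo):
        pending_deletes.pop(repo, None)       # just drop it from the queue

    def purge_job(hard_delete, now=None):
        # Runs periodically; hard-deletes anything whose undo window has expired.
        now = now if now is not None else time.time()
        for repo, requested_at in list(pending_deletes.items()):
            if now - requested_at >= UNDO_WINDOW:
                hard_delete(repo)
                del pending_deletes[repo]

    request_delete("example/repo")
    undo_delete("example/repo")               # user noticed in time: nothing is lost
    purge_job(hard_delete=lambda r: print("hard-deleting", r))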


Just not deleting them in the first place, or not deleting them until explicitly asked to, would suffice, and would be, ultimately, less code. So, I am not buying the apologetics.


It seems to me that services that pretend to delete data but don't actually do it are usually subject to criticism on Hacker News, not praise.


The issue tends to be services that are not upfront about it. "This is potentially really damaging, so in case it's a mistake we'll just hide it for 24 hours, during which you can undo" or something to that effect would be sufficient.


You did not read the comment you replied to. Why? And then, why reply?


The counterpoint is that tools like that can be abused. Just as a hypothetical, if I have an open source product and do something the community doesn't like, I can make the repo private until the storm blows over so people can't un-star it. The number of people who remember to un-star it a week or two later is going to be a lot smaller.

I still think you're right, but it's not even as simple as UX and small backend changes. Allowing the repo owner to re-play other people's actions comes with complicated policy decisions. GitHub could make it so only support can restore those, but now they're in the awkward position of having to say when it's okay to restore and when it isn't. Still, it may be the best compromise.


> It's easy to be an armchair engineer and say "well, obviously, don't make this mistake in the future".

It's sort of equally easy to say Github should do X or Y to improve the UX in this (rare?) case without actually knowing the effort or opportunity cost.

Like maybe I'd rather Github figure out how to not have outages so frequently and how to not accidentally give people million+ dollar bills and such.


> Like maybe I'd rather Github figure out how to not have outages so frequently and how to not accidentally give people million+ dollar bills and such.

I doubt the engineers working on the stability issues or billing have anything to do with UX and vice versa. It's not like there's a singular focus that everyone at the company must prioritize above all else. Multiple people can work on multiple projects and not impact one another at all.


On one hand you have millions of users who each are obliged to walk on eggshells.

On the other, you have a few Microsoft hacks who could easily do the right thing on behalf of those millions of users.


I am surprised so many people treat losing stars as "dangerous": I can certainly empathize with the designer coming up with this pop-up while looking at a sample project with 12 stars.

Considering this project was out of the ordinary in having 54k stars (the article mentions it being in the top 80), they should not be surprised that their case is not top of mind.

Sure, it would be nice to highlight the most destructive of actions, but they already had to type out the full project path as confirmation.

I do believe it would be better if Github allowed restoring data for the next few days (soft deletes ftw), since they've hit this issue themselves in the past.

And Github support should have recognized that this project is a special, out-of-the-ordinary project, and afforded it some engineering time to restore everything.


> I am surprised so many people treat losing stars as "dangerous": I can certainly empathize with the designer coming up with this pop-up while looking at a sample project with 12 stars.

Lots and lots of websites and companies use the number of stargazers as a strangely important metric for success, so losing them can be really bad.


But still, I also share the opinion that GitHub's UX for dangerous actions on a repository is currently already best in class, so blaming them for your mistakes is pretty unfair. To make your repo private, you have to:

- click "change visibility" in the "Danger Zone" area of the settings.

- select "make private" with an additional warning shown that you will lose all stars and watchers. Ok, maybe it should mention how many.

- type the name of the repo into a box and then click "I understand, change repo visibility".

And yes, sometimes it is important to expect humans to not make mistakes. For example, at railway crossings. Even if you drove over it 100 times and no train came, the 101st time you may still die if you don't check for a train before crossing.


> And yes, sometimes it is important to expect humans to not make mistakes. For example, at railway crossings. Even if you drove over it 100 times and no train came, the 101st time you may still die if you don't check for a train before crossing.

For somebody going through a railway crossing, yes, they shouldn't allow themselves to be prone to mistakes there. However, the people involved in the construction of the railway crossing should certainly expect everyone to screw up, and scrutinize the safety accordingly. We don't want to leave anything affecting livelihoods to chance.

"On average, each year around 400 people in the European Union and over 300 in the United States are killed in level crossing accidents."

https://en.wikipedia.org/wiki/Level_crossing#Safety


That's why a level crossing has a barrier that's only there when a train is approaching. Imagine the barrier was always there; that would be asking for people to ignore it.


This idea - fixing systems that amplify mistakes into large, irreversible impacts - is why I am against alcohol, marijuana and other mental-state-altering drugs for recreational purposes. (I'm not talking about people who really need pain relief from cancer, amphetamines for ADHD, etc.)

If no one in the world drank, smoked pot, or did drugs, how much better off would we all be? 100,000 deaths come from alcohol use every year in the United States alone[1]. That doesn't even account for countless cases of abuse, broken families, crime, and other negative effects of alcohol and drugs.

So many people say "oh well it's fine if I do it, I'm responsible" but then at some point someone isn't fine and isn't as responsible as they think they are.

[1] https://www.cdc.gov/alcohol/features/excessive-alcohol-death....


> If no one in the world drank, smoked pot, or did drugs, how much better off would we all be?

Perhaps that's true for your definition of "better off", which is perfectly fine, but it isn't universal. Even though I don't do drugs, I don't think it should be anyone's business to police what other people do with their minds and their bodies on their own time, as long as they don't pose a threat to anyone else.


You haven't really thought this through.

One, those substances are fine for a lot of people.

Two, some of those substances are fine for most anybody. I've never even heard of a deadly marijuana overdose, and the evidence shows no increase in mortality for marijuana users.

Three, we already tried alcohol prohibition, and we are currently trying drug prohibition. It does not solve the problem you care about, while creating other large problems.

So really, you sound like somebody who has a personal hobbyhorse and uses pretty much anything (like, say, losing stars on GitHub or mentioning a book on airplane safety investigations) to argue for it. And that sort of motivated reasoning around arguments for societal change strikes me as way more dangerous than somebody eating a THC edible.


I'm scared to see what your sanitized world would look like. Some of the most interesting art, music, and personal perspective has come from the consumption of the substances you deride as unnecessary and destructive.


How much art, science, and deep personal perspective was lost due to abuse and death stemming from substance abuse? In the case of certain substances (like alcohol) we have rather hard statistics about their overall impact on the physical and mental condition of society. The slow shift from alcohol to safer alternatives (e.g. cannabis) is probably one of the best trends of current times.


How many lives were saved or changed for the 'better' through the escape of, enjoyment of or numbing with drugs and alcohol? Who knows?

Perhaps if we think more like adults instead of infants we can try to understand complex issues better instead of reinforcing black and white stereotypes of the world or rehashing whatever our favourite source of ignorance tells us.


I think that do-gooders who have a "Great New Idea" for how to make society better are 1000x more dangerous than any type of drug ever invented. People drunk on alcohol mostly fall asleep harmlessly. People drunk on power and how awesome their own ideas are launch prison-industrial complexes, dystopian enforcement schemes, wars, genocides, etc.

It may be an even worse drug. People on drugs mostly have some awareness that their ideas aren't very good. But for those drunk on power, the fact that their same great idea has already been tried and led to total disaster is no cause for concern at all. You see, they're obviously smarter and better than the last batch of power-addicts who tried that, so they'll do it right this time. Heaven help us if we ever discovered a chemical intoxicant that was capable of making people that deluded.


An interesting thought experiment - how many of us would be here at all if not for alcohol? Ignoring butterfly effect aspects, the number of people who were only conceived due to decisions made under the influence of drugs would surely be very large even today, let alone prior to the availability of birth control.


So, displaying an additional or more explicit warning that mentions stars and watchers makes sense. No argument there; I hope they do that. But it also left a little bit of a bad aftertaste for me that the author completely skips over the part of the process where you have to enter the repository name, right where they show the two screenshots side by side.

Anecdotal: You have to enter the full name for a couple of destructive actions on GitHub, and every time I had to do it, it was so jarring that I stopped everything, rechecked what I was doing twice, and started wondering if I might be dreaming, on drugs, or listening to a voice in my head telling me to do something dumb. Like, yes, we should improve nonetheless, and I usually try to put myself into other people's shoes before judging them, but this is just one of the few times where I can't help but, you know, think it was somewhat dumb to make this mistake with the current system in place. Like, next time, when we have this stars-and-watchers warning in place and someone still manages to proceed on autopilot, what do we do next? Have a siren go off in addition? Have the user enable their mic and spell out the repo name? Send a written letter to GitHub? You'll never get the error rate to 0, so at what point would you rather accept that people make mistakes and call them out on it, than add more inconvenience on top that just bothers everybody else?


Most people in this thread seem to be ignoring another very major cause: the inconsistent naming for personal READMEs between users and organizations. Users have their README at username/username, but orgs have it at orgname/.github

Nothing else on GitHub is like this: orgs and users are treated as the same class of entity pretty much all the time. I could easily see myself making the same mistake on autopilot.


They have a bunch of differences under the hood, particularly when you want to give perms. It makes sense, sort of, on its face, that a user can’t have teams, but why? That decision is pretty arbitrary, to me. Then in GHE, users and orgs have all sorts of fun differences when you consider things like internal/public/private and how people can interact with them; to wit, if you’re in ANY team you can see ANY internal repo in an org, but if you’re limited to just personal repos, you can see no such things.


> My view is that expecting humans to stop making mistakes is much less effective than fixing the systems that amplify those mistakes into large, irreversible impacts.

This is applicable to almost any activity: "Sure he was driving drunk, but the car's manufacturer should have prevented that from causing any damage!" I agree that GitHub should improve the design here - privating a '10 star / 1 week old' repo shouldn't be treated the same as privating a '50k star / 10 year old' repo. I don't mean to diminish the fact that GitHub's UI should be improved here.

But the author needs to take some responsibility and realize they were ""driving distracted"", and not act like GitHub is 100% at fault here. Just because GitHub didn't act perfectly doesn't mean the author didn't make any mistakes.


Neither the author, nor anybody that I can see here, is saying that the author didn't make any mistakes.

But what you are saying is that marginal effort to prevent mistakes is not worthwhile.

Cars, to take your example, would not be as safe as they are today if their designers had followed the principles you've shown in this post and the grandparent. And yes, even driving drunk, which is illegal, is safer too - lane keeping, adaptive cruise control, automatic braking, etc. have all incrementally made even extremely ill-advised behaviours safer.


Yeah, it's a different story when dealing with safety.

With industrial machines you have to assume people will do the dumbest thing possible. Because someone will find a way to get crushed in a moving part if they can.

You have safety fence after safety fence and regularly test that your lockouts work.

If you don't do all of this and someone dies you can face very harsh legal penalties.

We don't do this with software, where the cost of failure is so much lower, but we should still understand that smart humans will make mistakes.


In software, the cost is often paid by other people (your users) and you don't (want to) see them. How much did the Atlassian outage cost its clients altogether? How much of that will they need to compensate?


Is that book still worth spending time reading if I already understand what you are saying and hold the same views? I'm always hungry to learn something useful from books, but more often than not I find these hyped books ("The Checklist Manifesto" comes to mind) really annoying in that they make me spend a significant amount of time reading stupid "curious life stories" to bring across a point that can be expressed in a three-word sentence, like "checklists are good". And it's even worse when these are trivial truths most sane people would agree on, which doesn't make the "advice" more actionable, because it's just much easier said than done.

I mean, it would be disappointing to read it only to find that it summarizes to exactly what you just said.


It's an incredibly rich book. I learned a ton from it. Indeed, I avoided summarizing it here precisely because it's hard to sum up.


My favorite is "let's imprison people who make mistakes". It doesn't stop mistakes, it just covers them up.


People are imprisoned when they are a danger to others or at risk of flight. Their time in prison is meant to reform them and help them back into society. The community is protected from further "mistakes" from a person, while allowing the person to learn not to make the same mistake again.

At least that is the idea.


Some mistakes are accidents, some mistakes are made on purpose.

The latter needs some kind of consequence in place to offset the benefits from purposefully making said mistakes.


From the product's perspective speaking in aggregate, that is entirely correct.

From an individual user's perspective, you still need to own your shit and avoid making these mistakes. You can't rely on everything you use having smart safeguards.

There will always be dumb/careless users that a product should consider. There will always be suboptimal UI (we could've just as easily been talking about CLI database tools) that a user should watch out for.


It's not about whether humans can make a mistake or not. It's about how much effort should be made to prevent a certain type of human mistake.

If every system we interact with daily had to be fool-proof, it would require enormous time and effort. Naturally, we invest more time and effort when it's a life-and-death situation. If we only lose 2 cents through our mistake, we just don't care.

Of course, I'd like to have more fool-proof design in all the destructive interactions on all websites. But the question is, is it worth it?


There is also an audiobook version, for some reason not linked from there:

https://www.amazon.com/Field-Guide-Understanding-Human-Error...


The penalty for sin is death.


What about cos?


death + π/2


Romans 6:23

The wages of sin is death.



