
You don't have to know what would be a mistake. E.g. if the tool is used most of the time to operate on a small set of servers, you have some extra confirmation or command-line option for removing a large set.

That's good UI design in tools with powerful destructive capabilities. Make the UI for doing lots of things at once different enough from the UI for the few things you do routinely that there's no mistaking one for the other.

You can also have the program tell the user what's going to happen (if it can be computed beforehand), e.g. "This will affect 138 server(s)."

Yes, but be careful. UIs like that tend to accumulate "--yes" options, because you don't feel like being asked every time for 1 server. Then one day you screw up the wildcard and it's 1000 servers, but you used the --yes template.

Which is why I'm pointing out that to design UIs like these you should fall back on slightly different UIs depending on the severity of the operation.
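One way to sketch that severity-dependent tiering in Python. The thresholds, function name, and prompt wording here are all invented for illustration, not taken from any real tool:

```python
# Hypothetical sketch of severity-tiered confirmation: the ceremony
# scales with the blast radius of the operation.
CONFIRM_THRESHOLD = 1   # no prompt at or below this many servers
TYPED_THRESHOLD = 50    # above this, typing 'y' is not enough

def confirm_removal(servers, ask=input):
    """Return True if the user confirms removing the given servers."""
    n = len(servers)
    if n <= CONFIRM_THRESHOLD:
        return True  # routine case: no prompt, so no --yes habit forms
    print(f"This will affect {n} server(s).")
    if n <= TYPED_THRESHOLD:
        return ask("Proceed? (y/n) ").strip().lower() == "y"
    # Large operations can't be waved through with a memorized 'y':
    # typing the count back forces the user to actually read it.
    answer = ask(f"Type the number of servers ({n}) to proceed: ")
    return answer.strip() == str(n)
```

Because the big-wildcard path demands different input than the routine path, a muscle-memory "y" (or a scripted `--yes`) can't silently approve the dangerous case.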

This is a good pattern to use. The more pre-feedback I get, the less likely I am to make a horrible mistake. However, one problem I often see with this pattern is that the numbers are not formatted for humans to read. Suppose it prompts:

  "1382345166 agents will be affected. Proceed? (y/n)"
Was that ~100M or ~1B agents? I can't tell unless I count the number of digits, which itself is slow and error-prone. It's worse if I'm in the middle of some high-pressure operation, because this verification detour will break my concentration and maybe I'll forget some important detail.

Now if the number is formatted for a human to consume, I don't have to break flow and am much less likely to make an "order-of-magnitude error":

  "1,382,345,166 (~1.4B) agents will be affected. Proceed? (y/n)"
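A formatter like that is only a few lines in most languages; here's a Python sketch (the `human_count` name and the suffix cutoffs are my own choices):

```python
def human_count(n: int) -> str:
    """Add thousands separators plus a rounded magnitude suffix,
    so an order-of-magnitude error jumps out at a glance."""
    for threshold, suffix in ((10**9, "B"), (10**6, "M"), (10**3, "k")):
        if n >= threshold:
            return f"{n:,} (~{n / threshold:.1f}{suffix})"
    return f"{n:,}"

print(f"{human_count(1382345166)} agents will be affected. Proceed? (y/n)")
# -> 1,382,345,166 (~1.4B) agents will be affected. Proceed? (y/n)
```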
I always attempt to build tooling & automation and use it during a project, rather than running lots of one-off commands. I find this usually saves me & my team a lot of time over the course of a project, and helps reduce the number of magical incantations I need to keep stored in my limited mental rolodex. I seem to have better outcomes than when I build automation as an afterthought.

This doesn't work. Users learn to ignore the message.

I think it depends on the quality of the feedback. Most tooling sucks, so the messages are literal trace statements peppered through the code, rather than descriptions of the user-facing impact. When the thing is just spitting raw information at me, I'm probably going to train myself to ignore it. But if it can tell me what is going to happen, in terms that I care about, then I'll pay attention.

Imagine I just entered a command to remove too many servers that will cause an outage:

  "Finished removing servers" 
  (better than no message, I suppose)

  "Finished removing 8 servers"
  (better; it's still too late to prevent my mistake,
    but at least I can figure out the scale of it)

  "8 servers will be removed. Press `y` to continue"
  (better, no indication of impact but if I'm paying
     attention I might catch the mistake)

  "40% capacity (8 servers) will be removed. 
    Load will increase by 66% on the remaining 12 servers. 
    This is above the safety threshold of a 20% increase. 
    You can override by entering `live dangerously`."
  (preemptive safety check--imagine the text is also red so it stands out)
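The arithmetic behind that last message is just fleet size vs. survivors: removing 8 of 20 servers drops 40% of capacity, and the remaining 12 must absorb 20/12 of their former load, roughly a two-thirds increase. A hypothetical sketch (the 20% threshold and the override phrase are made up to match the example):

```python
SAFETY_THRESHOLD = 0.20  # hypothetical: max acceptable load increase

def removal_impact(total: int, removing: int):
    """Return (capacity_removed, load_increase) as fractions."""
    remaining = total - removing
    if remaining <= 0:
        raise ValueError("cannot remove every server")
    capacity_removed = removing / total
    load_increase = total / remaining - 1  # survivors absorb the load
    return capacity_removed, load_increase

def removal_warning(total: int, removing: int) -> str:
    cap, load = removal_impact(total, removing)
    msg = (f"{cap:.0%} capacity ({removing} servers) will be removed. "
           f"Load will increase by {load:.0%} on the remaining "
           f"{total - removing} servers.")
    if load > SAFETY_THRESHOLD:
        msg += (f" This is above the safety threshold of a "
                f"{SAFETY_THRESHOLD:.0%} increase. "
                "You can override by entering `live dangerously`.")
    return msg
```

Note the key design point: the tool computes impact in the operator's terms (capacity, load) before anything is destroyed, not after.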
