It's easy to see how this is bad, and I bet dozens of designers are now creating alternatives for their Dribbble and Twitter appreciation, but the problem is probably what led to this, not this specifically. I can see it: a contractor started with a link that triggers a push notification, then someone requested another link, then another, then it grew from there, never having the ability to stop and rethink this from a design standpoint since it would require more work, more training, more money.
The problem is not knowing (or having someone who knows) how to design something better, it's treating good design as a priority. It rarely happens.
It looks like quite a normal UI from an "internal" government or enterprise application point of view.
In this case, the citizens of Hawaii paid for the bad system design with 48 minutes of existential horror.
But these kinds of mistakes happen ALL THE TIME with "enterprise applications." Oracle has similarly horrendous UX for their EBS (E-Business Suite), as do SAP, Siemens, etc. People regularly make costly mistakes in shipping and receiving, purchasing and manufacturing because they have to deal with shitty confusing applications that look just like that screenshot.
Totally not surprising that the internal application for public emergencies has the same awfulness as a PTO request.
I expect the same employee in Hawaii fumbled the UI on their "time off request" for psychiatric counseling too.
> People regularly make costly mistakes in shipping and receiving, purchasing and manufacturing because they have to deal with shitty confusing applications that look just like that screenshot.
Currently unscrewing one of those as my day job.
"Delete all production orders for period" - no confirmation, no ability to rollback (beyond me restoring from a backup) that kind of thing.
It's not that it isn't user friendly, it's that it is actively user hostile.
My boss is both understanding and willing to listen; after working with him for 6 months I really can't blame the system on him.
The outsourced developers, however... very much yes, I can.
Half the things do the wrong thing wrong, the other half is the wrong thing sorta working.
There was clearly no attempt to understand what the user wanted or any attempt to ask the simple questions like "Why do you want X, is X related to Y, would it be handy to pull in related Y for display X". Beyond the terrible code quality and database architecture (I wish I was kidding when I say it took 49 to 69 seconds to search for a quote; I timed it on repeated runs. It now takes ~400-600ms and I still think that's too slow, and I search more things), there is just a complete lack of any thought, effort, or consistency.
Even the tables aren't consistent across the system - sometimes clicking a row opens the related entity (with no indication that's what it does), sometimes there is a "view" button in the row on the right, sometimes on the left.
It's just pile after pile of UX and architecture/code disasters; no wonder the original lead dev came close to a nervous breakdown. I just don't think he was capable of dealing with a system of this complexity.
> There was clearly no attempt to understand what the user wanted or any attempt to ask the simple questions like "Why do you want X, is X related to Y, would it be handy to pull in related Y for display X"
Asking those questions is being obstructionist. Most end users don't understand the ramifications of what they are requesting. Explaining it all is an effort to bring them up to a level of education that would allow them to do your job. They pay _you_ to do your job and now you are just wasting everybody's time. Why do you have to make everything so complicated!? Why can't you just do what you're told? Just add the button and stop asking questions!
It's so painful running into brick wall after brick wall of asking "Why do you want X and have you thought of the implications for Y, Z, and A-W?!", only to be met with sighs and eye rolls that we should just know and get it done. Blah. Bad day for me, I guess? I apologize.
A way to ask these questions that sometimes works better is something like "can you explain the business value added by doing this, so that I can be sure my implementation achieves that?"
Asking "why X, why Y, why Z" can come off as challenging their knowledge, and questions like "would you also need W?" tend to get answered "yes, sounds good" so now you've just made more work for yourself.
> "can you explain the business value added by doing this, so that I can be sure my implementation achieves that?"
100% this - "Have you considered that our long lead times on inventory means we increase our holding costs?" is a good lead-in to exploring how to reduce that.
> "Have you considered that our long lead times on inventory means we increase our holding costs?"
What's frustrating about this is now we also have to understand the business overall, and that's kind of the problem in the first place: nobody has the patience to explain it.
So in addition to coming up to speed on whatever tech stack they have, we're going to have to find someone who understands and is capable of explaining the (sometimes completely foreign to us) business side of things and come up to speed on that too. That's a lot of ramp up time, especially when nobody wants to hand out raises and you're better off finding a new gig roughly every other year anyway.
I'm currently working somewhere with people that are great at explaining the business, and are willing to discuss at length the intricacies of how anything works at any time. The tech stack is abysmal, but being able to easily understand the business makes it worth dealing with the dumpster fire of a LoB app. I'd take it over the previous job, where we had greenfield project, bleeding edge tech, and money, but nobody could tell me how the hell the business worked...
I didn't wait for anyone to explain it to me, I went to the library and checked out a bunch of books they had on Supply Chain Management/Operations Management and Inventory Control and then read them.
That gave me enough of a grasp of the principles and more importantly the nomenclature that I could then talk to people in their language not mine.
To me it's a normalised series of tables; to them it's WIP inventory.
One of the things DDD got right (if you remove the buzzword bingo/hype) was the idea of a 'common language'.
Unfortunately in my experience (couple of decades) most business people don't want a common language, they want you to understand theirs.
It is surprising how little theoretical knowledge people who work in a field use on a day to day basis.
> One of the things DDD got right (if you remove the buzzword bingo/hype) was the idea of a 'common language'.
> Unfortunately in my experience (couple of decades) most business people don't want a common language, they want you to understand theirs.
The “common language” of DDD was the language of the business domain, so that’s in line with DDD.
The practical problem is that the language of many business domains is often highly context dependent, making it unsuitable for direct use where ambiguity must be avoided even without context, and trying to namespace things to map to the specific relevant business contexts is often impractical.
I don't use email on my iphone. It's because I receive 400+ emails a day, mostly spam. I need a "delete all" button, as deleting them one at a time is untenable.
But there isn't a "delete all" button, presumably because people would use it and complain that they wiped out all their email.
The end result, however, is the iphone email is completely unusable for me.
I don't know. The interface could be better, but a different interface wouldn't necessarily stop someone who's confused from sending the wrong alert. How many times do you see a dialog box and click "yes" without even reading it? I do it all the time, and it's usually fine... except sometimes I think "wait, what did I just agree to!?" Someone under pressure to get that drill going can easily make the same mistakes, no matter what hoops you make them jump through. Someone who thinks they read the text but didn't comprehend it can easily click through just about anything.
In the end, the correct process is probably to have two people activate the alert, so that two people have to independently not read the dialog boxes or whatever to send a false alarm. But maybe that doesn't work either; I haven't done any research to that effect and don't really know.
On some level, I like the interface. There is a list of possible alerts, and you click the one you want to send to send it. It's just that it doesn't account for the possibility that the user doesn't ACTUALLY want to send the one they clicked. The same problem exists with any powerful tool. Hammers don't check that you're hammering a nail and not your thumb. Cars don't ask "there appears to be a pedestrian, are you sure you meant 'accelerate' and not 'brake'?" With great power comes great responsibility.
> a different interface wouldn't necessarily stop someone who's confused from sending the wrong alert.
No. Just. No. It is a horrific design. This one will likely end up in a future UX textbook as a case study. There is simply no excuse for jumbling together a list of options like that. The list items are not even semantically coherent.
Ultimately, however, the individual, his chain-of-command and the system designers have to be held accountable. The scary thing is that many people saw this and said nothing, almost certainly some people heard complaints about this and did nothing, somebody even signed off on this. It demonstrates an organizational problem, I think.
This would be good. It takes 15 min to code this in and you don't have to go redesign a bunch of stuff like some on here are suggesting. Obviously they don't care what it looks like. Fine. But they should have had a safeguard for such a monumental action.
I've interacted with systems like that before. My brain looks for the quotes, I double-click to select the text, Control-c, Control-v, and it's done... completely on autopilot.
Trust me, if you get in your users way they will find some way to get rid of you.
This would probably be a bad design as well, because if a real missile threat happens I would need to get the warning out as fast as possible.
The best way would've been a second page with a big button and huge warnings.
You were modded down, but I agree. Contrary to popular opinion, "duck and cover" is not just psychological pacification. It will save lives if worst comes to worst. Literally every millisecond counts when issuing a warning of this nature.
So, yes, bring up a huge flashing red warning dialog with a single confirmation step. And make it easy to issue a retraction.
For everyone saying that people are so used to clicking yes on dialog boxes without reading them: Then change that paradigm for different levels of importance. For example, for a test alert, use an automatic countdown that will activate the alert unless canceled within N seconds. The ramifications of a test alert going out accidentally are a lot lower than a real alert going out accidentally. For more serious alerts like this most recent ballistic missile one, turn the entire screen red, have a “THIS IS NOT A TEST” header, with an explicit acceptance required to send the alert. Two people may mitigate this existing problem but it still doesn’t get to the root of the issue.
Same thing with your hammer analogy. Everything shouldn’t look like a nail then! A real alert should look like a nail. A real ballistic missile alert should look like a railroad spike (aka really big nail). Test alerts should look like a screw. Etc.
I can think of two obvious mitigations that would have helped a lot.
1. Separate items intended to be tests from items intended to be live warnings. Draw a green (or yellow) box around the test items and a red box around the live items.
2. Require a confirmation before sending out a live item, and not just clicking a button. Put up a big red flashing warning that says, "You are about to send out a live warning, are you absolutely sure you want to do this?" and require the operator to type "YES".
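Something like this rough sketch is all I mean; the template names and the send_alert() call are made up for illustration, but the point is the grouping plus a typed confirmation for live items only:

    # Rough sketch only: group the menu by severity and gate LIVE sends
    # behind a typed confirmation. send_alert() is a stand-in for whatever
    # actually transmits the message.
    TEMPLATES = {
        "DRILL - PACOM (CDW) - STATE ONLY": "TEST",
        "Test Message": "TEST",
        "PACOM (CDW) - STATE ONLY": "LIVE",
        "Amber Alert (Statewide)": "LIVE",
    }

    def show_menu():
        for group in ("TEST", "LIVE"):
            print(f"=== {group} ITEMS ===")   # draw the green/red box here in a real UI
            for name, g in TEMPLATES.items():
                if g == group:
                    print("  ", name)

    def send(name, send_alert):
        if TEMPLATES[name] == "LIVE":
            print("You are about to send a LIVE warning to the public.")
            if input('Type "YES" to confirm: ').strip() != "YES":
                print("Cancelled.")
                return
        send_alert(name)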
Taking 1 a little farther, maybe have the first page be links to the test page or the emergency page then add some warnings to the top of the emergency page that make it damn clear that clicking any of the links, even accidentally, will send an actual emergency alert. Make everything red and scary. Hell, make it blink if you have to.
You don't even need confirmation boxes to make this better. Put the "test" options in a section all by itself, clearly labeled as test functionality. Put the real-world scary options in another section, labeled clearly, and perhaps colored red, or badged in such a way (red alert icon, or something) to indicate it's a real-world, serious action.
> With great power comes great responsibility.
Yes, and designers here have a lot of power and responsibility to design an interface that is difficult to use in the wrong way, and doesn't treat options with wildly different real-world consequences as equivalent.
> The interface could be better, but a different interface wouldn't necessarily stop someone who's confused from sending the wrong alert. How many times do you see a dialog box and click "yes" without even reading it?
How many times do I test the SMS delivery system? Not very often, so I am going to be more careful with any popups. Maybe you only use your computer once a month?
>It looks like quite a normal UI from an "internal" government or enterprise application point of view.
The "normal" for enterprise and especially government software is dogshit though.
It's because the people doing the procurement are not the people actually using the thing day to day. They treat UX and design as extravagances in the face of spec-sheets and slick marketing decks.
Or, more charitably, they're trying to balance a bunch of conflicting demands for requirements, budget, deadlines, etc.; the procurement process scares away the non-enterprise vendors; and they're not given enough budget to have in-house staff with enough expertise to meaningfully oversee it. If a big vendor says they'll handle it all and mount a huge PR push if anyone questions your decision, it's not hard to see why people opt to overpay.
Enterprise software is chosen by a manager who is told the feature list, but never uses the software. Usability is never considered because the manager never uses the software, and it's not really possible to evaluate usability on paper (they may say "this is really usable" and an application can still be really bad; if they say "it has feature X" and doesn't then you can sue them for breach of contract).
Consumer software is chosen by people that actually use the software. They can see that it is shit, give it bad reviews etc.
I used to work at a very large company that used Teamcity (think git for CAD), which was the worst POS I have ever used. A 90s Java mess that was so slow you could watch it redrawing GUI controls in slow motion. Everybody that used it hated it. At least some of the higher ups liked it, but they didn't actually have to use it.
Teamcity is the best CI that's available, far ahead of Jenkins.
The issue with these tools is the company that is too shit to run them: they put it on the smallest box they could salvage, one that's already running a hundred apps, and put the storage on a NAS in another datacenter because there was no more local disk space.
>I wish we could have a sane conversation about how UI's should be customizeable for the end user
Depends a lot on context and the type of application, but a lot of end users get confused about where the line is between UI/UX and backend functionality.
A lot of things that might seem to be UI issues might actually have implications for things like resource consumption and application stability. It's a tough balance to strike.
> bad UI design is usually only obvious in hindsight
That's only true if you have little or no experience designing or even using interfaces. As someone who has been bitten by bad UI design in the past, I've noticed and called out a decent amount of bad design well before it actually has caused a problem.
> I expect the same employee in Hawaii fumbled the UI on their "time off request" for psychiatric counseling too.
That requires a paper form. The payroll system in the state has failed to be upgraded despite some 20 years of promises. Staff are paid via checks which are sent directly to their banks because they can't even pay over the wire.
Somewhat controversially, I wonder if that existential horror was really so bad. I can think of worse ways to spend 48 minutes than contemplating the reality of your mortality, the fact that you could very well be dead at any moment with no warning whatsoever. It makes you re-evaluate your priorities, maybe makes you stop putting off the life changes you were thinking about that seemed like they could wait until next year.
These 48 minutes may well have changed a whole bunch of people's lives for the better, in the long term.
Perhaps a bit of context might make it seem less like I’m being unempathetic. TL;DR:
I was caught in the London tube bombings in 2005 and I believe it changed my life for the better.
I think that for anyone who has kids, those minutes could very well have been unimaginably awful.
I guess you can think of it like waterboarding. Waterboarding doesn't physically harm anyone; it forces one to contemplate mortality, but it is also something so terrible that one would not wish it on anybody.
This is an un-empathetic thing to say, because you're ignoring the real emotional experiences of the people who lived through this. The terror experienced by people, especially but not limited to those with predisposition to anxiety, hypertension, panic attacks or suicidal thoughts, will have a real impact on their lives.
Or imagine someone who, convinced of an impending death, donates all of their saved wealth to charities online.
I haven't seen this comment yet, although I expected it. It is time for those who design internal government applications to take it as seriously as other engineers who work on systems that, when mistakes are made, result in people being hurt or killed. This system clearly has that potential, so people need to take it as seriously as someone designing interfaces for NASA or the military, along with serious consequences of shoddy work.
As a government contractor who did not work on this, I can say I'd bet a lot of money that this is how this screen came about. Initially launched 2-10 years ago with 1-2 links, fast forward a bit and we have this.
I can hear a forward thinking developer saying "hey should we at least have colored buttons or something on this screen so it's easy to see what is a test and what is not?" and the product owner/business owner/PM saying "no man just add a link it's faster."
If that confirmation window appears for every message, drill or not, then it would be completely useless. Users have been trained to click past/ignore pop-ups.
I say this as someone who watches people log in on PCs and click past a useless (for them) Citrix dialog that includes a "stop asking" checkbox. Citrix added that checkbox some time ago, so these folks have been vacuously ignoring the entire thing at least once per business day for at least a year.
Perhaps a captcha style prompt could be added (basic math with random values?) to critical/damaging operations but I'm sure that would get pushback of its own.
"You can't require that our people do basic arithmetic when they need to send an alert! It'll slow them down!" Suggested response: "Are you telling me that you're allowing this kind of operation to be done by people who can't handle single digit arithmetic?"
They could consider a different UI response between a drill and an active alert. Since drills are the majority of their workload they'd get dialog blindness to the drill confirmation, so you'd design the active alert confirmation to be atypical of the drill (for example a different color dialog, and an "I agree" checkbox on the active alert but not on the drill).
I ended up with a checkbox plus a mandatory input field requesting a reason for one irreversible action in a system I worked on.
Just adding a checkbox and later a popup with an additional warning wasn't enough.
An undo function is obviously best, but unfortunately, while technically possible, it took a while - possibly a day - to restore due to ripple effects in other systems, and it normally wasn't the person clicking the button who had any reason to notice something was wrong.
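A sketch of the checkbox-plus-mandatory-reason pattern, in case it helps anyone; the prompts and names here are invented:

    from typing import Optional

    def confirm_irreversible(action: str) -> Optional[dict]:
        """Acknowledgement (the checkbox equivalent) plus a mandatory free-text reason.

        Returns a small audit record to log, or None if the operator backs out.
        """
        print(f"WARNING: '{action}' cannot be undone without a restore from backup.")
        ack = input("Type 'I UNDERSTAND' to acknowledge: ").strip()
        if ack != "I UNDERSTAND":
            return None
        reason = input("Reason for this action (required): ").strip()
        if not reason:
            print("A reason is required; nothing was done.")
            return None
        return {"action": action, "reason": reason}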
Unfortunately a lot of organizations don't want to invest the time/resources into creating it. Particularly when it gets into discussing exactly how the underlying data should be handled during the undo window (e.g. does it exist? Is it just flagged deleted? What about relations? Do all of our queries that touch this data check that? Etc).
> If that confirmation window appears for every message, drill or not, then it would be completely useless. Users have been trained to click past/ignore pop-ups.
That would apply to something that's used routinely, I'm not sure this system is actually used that often. Do these emergency systems send out "this is a drill" messages really that regularly? As a non-US person that sounds like a quite scary way to live.
Even if it is something that gets used regularly, adding additional levels of confirmation, depending on the kind of alert that's going to be sent out, would already go a long way toward preventing this kind of mistake.
For the real deal add some bold and flashing text along the lines of "This is the NOT A DRILL message, are you sure about sending this?" which pops up as a second confirmation screen, while drill messages have only one confirmation screen.
> That would apply to something that's used routinely, I'm not sure this system is actually used that often. Do these emergency systems send out "this is a drill" messages really that regularly? As a non-US person that sounds like a quite scary way to live.
Per the explanation provided by the agency, this is a drill that is conducted at every shift change, so presumably multiple times a day. I gather it does not send any actual alert to the public, just simulates it (perhaps sends to a small pool of test devices?). But it's appearing more and more likely that we've been given a misleading explanation for this incident, so who knows.
Government standards never allow only color distinction. There always must be an additional factor than just color.
For instance, here are some icons from NATO, which used to be MIL-STD-2525. You'll notice that all the icons differentiate in both color and form. The NTDS standards for Naval systems are similar.
Well, in addition to the color coding I suppose the text would still be different?
To me, the screenshot looks more like a list of user-defined presets than something that went through the original software development process. Maybe something along the lines of: only a superuser can define new presets, but lower-level users can activate them. That way, at least random "I love you honeybunny" pranks would be avoided, while the risk of a real warning being skipped because of insufficient permissions would also be avoided.
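A sketch of what I mean, with the role names and methods invented here; operators can only fire templates a superuser has already vetted, never free text:

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        role: str                                  # "superuser" or "operator"

    class PresetStore:
        def __init__(self):
            self._presets = {}                     # preset_id -> vetted message text

        def define(self, user: User, preset_id: str, text: str):
            if user.role != "superuser":
                raise PermissionError("only superusers may define presets")
            self._presets[preset_id] = text

        def activate(self, user: User, preset_id: str, transmit):
            if user.role not in ("superuser", "operator"):
                raise PermissionError("not authorised to send alerts")
            transmit(self._presets[preset_id])     # only pre-approved text ever goes out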
A physical UI would have an openly accessible switch for "drill" and "not a drill" protected behind a seal that needs to be broken, this is difficult to replicate in software.
I did think of the covered switches that are common in movies. I think the equivalent in software are confirmation dialogs, but those are not as effective as the physical counterparts due to click-through-itis disease. [0] This could actually be one of those types of things that might benefit from a different presentation, such as a silly skeuomorphic design involving a graphical cover.
[0] Which, to be fair to users, is a disease introduced by putting too many switches under covers which shouldn't be.
They could use an actual covered switch. Maybe something this important should have a bit of physical infrastructure associated with it. Doing everything in software can be great, but it's not always the best solution.
> You'll notice that all the icons differentiate in both color and form.
Good stuff but I think they could probably have gone with something else for "Neutral" - the distinction between a 16:10 rectangle and a square could easily be missed in a hurry, especially at small sizes.
Not necessarily. Colorblind doesn't mean "everything is grey"; it means you have trouble distinguishing between a few, usually fairly specific colors. There are plenty of color schemes out there that are perfectly fine for colorblind folks, especially when you only need 2-3 distinct colors.
Nope. Dichromatic color deficiency is the most common form of color blindness, but about one in 20,000 people are completely unable to perceive color due to a variety of ophthalmic and neurological deficits. Even among dichromats, you can only reliably expect all users to be able to distinguish red and blue. It's probably not worth worrying about too much in most circumstances, but it is a relevant factor if you're designing safety-critical systems with a large user base.
If you're going to worry about that, wouldn't you be more concerned about the operator being fully blind? And simply justify not hiring the vision impaired for this particular position?
"safety-critical systems with a large user base"
Surely a missile warning system would be operated by a very, very small number of people, on the order of twenty or thirty?
You're saying that people should be prevented from getting jobs they're capable of performing because the alternative is to crack open decades old usability texts? There are multiple suggestions in this thread which could be easily implemented at little-to-no extra cost without sacrificing accessibility.
No, it's because for certain jobs you need certain qualities. They might upgrade this system, but who knows what else lurks in their basement. It's the same reason why you would not train a severely short-sighted person to be a sniper.
There’s a big difference there: for a sniper, vision is a core skill.
For someone sending alerts, color vision is not — things like keeping cool under pressure, following procedure, interpreting chaotic information, etc. are.
Ok, you've hired William because he's got 20/20 and full colour vision.
One problem: William's in the toilet. He had a bad taco for lunch and honestly, he's going to be in there for a good 10 minutes. And there's a missile coming. Luckily Frank the IT guy is there and knows what to do but ... he's colourblind!
Oh dear. He's just sent out a dummy warning by mistake and now half of Hawaii is going to die. Well done.
That's not a huge obstacle if you know what you're doing. Color is only one tool in the arsenal - shade, shape, pattern and outline can all be used to visually distinguish UI elements.
I think a big problem is that people that don't have a lot of experience with design treat design decisions as subjective and arbitrary when in fact there is a lot of science to back up certain design choices (human vision, cognitive psychology, and not A/B testing.) As a result the approval process for any design, whether the initial or a redesign, gets stuck on the desks of various people that either don't feel qualified or don't see the point in the design.
This routinely happens where I work. To the extent that our UI is this random mishmash of paradigms depending on who was in the room when the decision was made. If I had a nickel for every time I said "if the mechanical hardware does (blank) we should let the user know so they know not to start the rest of the process" only to get blank stares or laughter from the senior engineers, I would retire. "We'll take care of it with training" is a common refrain, as if training can be remembered with perfect clarity at all times.
"We'll take care of it with training" is a common refrain, as if training can be remembered with perfect clarity at all times.
Something I brought up at my IT Support job yesterday regarding the crazy idea I had to actually document some processes and rules we have. Flow charts and tables and such. Having a reference you can use at any time is much more efficient than expecting everyone to remember literally thousands of facts they're told once in two weeks of training.
Part of that, I think, is a presentation problem. And I'm admittedly coming from a small sample size of designers I've worked with.
When I've been given PSDs in the past, either for review or for implementation, there's never been any documentation with any of the reasoning behind any of it. And when I've asked follow-up questions about why something was done the way it was, I've typically been met with either defensiveness or "that's how it is, now go and build it"
A lot of people that do have a lot of experience with design treat design decisions as subjective and arbitrary. Design is often taught as an arts subject, with woefully little science content. The rise of UX as a distinct discipline reflects this lack of scientific thinking in the mainstream design community.
I agree and I think that art and design students would benefit from classes on (or exposure to) the human visual system and neuroscience/cognitive psychology. A lot of the intuition behind subjective decisions in the arts can be further understood/explored/refined through the relevant science. For anyone interested in this topic I highly recommend the book "Vision and Art: The Biology of Seeing" by Margaret Livingstone.
You don't need any science to know to make an action with very serious consequences both hard to invoke (e.g. one must click through three confirmation forms that all require some real-time thought to complete correctly) and clear in purpose (e.g. it displays in large blinking text: "This is an action for which you'll be fired if you are doing it in error").
There had to have been quite the cavalcade of idiots behind the making of this system.
Incremental design doesn’t excuse the guy adding thing #2 or thing #3 not paying attention to the whole. Nor does being a contractor excuse it. You don’t get a contract job to add some links to a page. You are contracted to modify it (regardless of what the buyer said he wanted) - and you are responsible for not making it dangerous. Even if that means you can’t take the job.
Someone reviewed the spec for this. Someone modified the page of links. Someone reviewed that modification. Someone signed off on the change.
> You are contracted to modify it (regardless of what the buyer said he wanted) - and you are responsible for not making it dangerous.
Actually, with government contracts, you are legally obligated to perform only the task assigned. Providing work to the government without the government paying for it is illegal. And you are only being paid for what's in the contract. (This is actually overall a good thing. Otherwise, large companies with better margins could easily provide free value-adds to government customers to win contracts away from small shops with small margins.)
Now, in this case, yes maybe the contractual item could have been interpreted in a way that allowed a proper page redesign. However, I would never state that as a fact without having the requirements and also receiving agreement from the government customer over such changes.
>Providing work to the government without the government paying for it is illegal.
The Anti-Deficiency Act only prohibits "voluntary" services, not "gratuitous" services. Prior to the passage of the law, it was not unusual for cash-strapped gov't agencies to accept "voluntary" services (i.e., no formal obligation to pay for them) but then pay for them later after being allocated more funds. Congress didn't like this so it outlawed the practice. However, if someone wants to donate services to the gov't and executes a written agreement to not accept payment, then it is legal for the government to accept the free services. See, e.g., https://www.gao.gov/assets/450/441639.pdf
Thanks for the important clarification. I think the relevant part for this conversation is that the work still has to be outlined within a contract, and that contract has to specify that there is no compensation. That's a drastically different scenario than pretty much any developer in a government contract situation would find themselves. It would still be illegal to do work outside of that specified in the contract!
This might be the case, but I sure wouldn’t have done it this way regardless. I would have pressed on to get the customer to revise the order, or just fixed it (criminal or not), or refused the job.
This is clearly a case where all the downsides of bureaucracy were in full effect (someone might be afraid to touch something) but not the benefits (lots of security checks and balances such as in air traffic control).
> I would have pressed on to get the customer to revise the order, or just fixed it
When "trying to do it right" means you have to go through layers and layers of bureaucracy and probably ruffle a few feathers in the process, over and over again, I can see how some people quick learn to "just do their job".
This is human behavior. We can talk as much as we want about what would be the ideal solution here, and how we would never do it, but problems like this don't happen in a vacuum and are rarely one person's fault. It's a whole system that leads to this.
Yes - this is a very broken system and that needs to be addressed.
But what I was trying to say was that I'd either refuse to do it outright, or do it right (without “asking for permission”). That is - I'd rather be jailed or fired than do this. And I hope that goes for most devs.
And none of those someones were likely the same person, nor can it be proven that they ever communicated.
Welcome to government contracting.
When I was in my sea tour, my SO hated when I said "Designed by the lowest bidder, built by the lowest bidder, manned by the lowest bidder, for the lowest bidder."
So many assumptions... first, inject twice as many levels. Then make the output text go through a system which makes it impossible to add any formatting. Add a legacy system and some procurement. Mix well.
Oh and the person signing off likely never actually sees the output.
Right, and the developer who was at the bottom of the chain might want to clean up the page a bit, but the person above him can't sign off on something like that, nor the person above him. Approval for any modifications has to come from the woman who wrote the business spec that was contracted by your boss's boss's boss and her company has allocated her to another project right now and won't be able to shift hours to handle your request until March.
I’d just make the smallest change that “fixed” the problem (in this case, for example, including the action title in the confirmation, and calling the non-test vs test action something more clear, etc.)
My superior would just have to solve the problem of getting the spec change cleared or find another developer. If that was somehow even seen as a problem by the superior - same thing - they’d have to find another developer.
I think you and other commenters are being generous in assuming there was a spec to review and that the work was done by a contractor.
I've seen lots of organizations try to save a buck by extending a previously existing system with in-house labor after the original contractor asked for more money to do the change. I would bet that the original contractor (if they're still involved with the project) hasn’t seen that screen because it's some homebrew patch put in by whichever administrator babysits the machines for the state government.
The problem with organizations maintaining their own rogue patches is that, if those organizations had people competent enough to make changes well, they wouldn't have hired a contractor or bought off the shelf in the first place.
As a contractor or an employee, you do have the ability to do what you think is right despite what your employer says they want.
But only to a point. People resist perceived change. You can improve the back-end all you want, as long as you don't make unexpected changes to the front-end. This includes something as simple as a clarifying pop-up, or a speed increase of a procedure.
And being fired isn’t just a risk you should be willing to take, it’s the required course of action here. With any luck you could get a superior in trouble by going above their heads or reporting to a relevant authority.
Doing things that are immoral/dangerous/stupid because you need to eat is not an excuse.
That's a rather naive view of the work environment. There's a myriad of reasons why it is hierarchical, not the smallest of which is so that people further down, who have a limited field of vision and expertise, do not jump into decision making above their pay grade. Doing it once may be acceptable, but the issue of course is that one is unlikely to know when it is time to stick one's neck out. Doing it multiple times pretty much guarantees that one will find out that he or she is eminently replaceable.
I feel like it was absolutely a money issue. In my experience it's a matter of going "well, we'd like to redesign the screen for $x, but we could throw another link on there for a quarter of $x", and the user will always go with the cheaper option. You can't just say "No, we need to go with the more expensive option" because that makes the client unhappy, and we don't want that, and if you only offer the more expensive option, the client will suggest the cheaper one themselves. And once the precedent is set with a second link, you're out of luck when you try to redesign for link #3. It's a no-win.
But we're not talking about a web startup. I think if you saw the actual hourly work breakdown, cost breakdown, and number of people who would need to be involved to make that change your head would spin.
Recent project I worked on the dev cost ended up being about 10% of the total cost of the project. It's amazing how much effort it takes to move the institution any.
I bet the naming of each link was approved by a 16-person committee over a 4-week process, each member bikeshedding it their own way and forgetting about the actual context of it.
Then adding one new link with any reasonable categorization might mean changing the label on a separate link to make it more distinct, and no one wants to go there anymore.
"Should 'FOR REAL' be all in capitals? Bold? Or maybe a different font? I know! We can find a proprietary font that's 99.9% like Arial, but we'll need to throw in specific rules to who can use it and how..."
A failure to correctly price the difference between cost and value, in other words.
I'd be very surprised if the ultimate cost of this mistake isn't far higher than the added cost of adding an "Are you really sure?" confirmation option.
This is why you don't let accountants make cost decisions in a vacuum. Of course if you're counting beans, the cheapest option is best.
In reality-based accounting, the hidden costs, consequential costs, potential damages, and costs of failure guarantee that cheapest is risky if you're lucky, and suicidally expensive if you're not.
GAAP is not designed to analyse those costs, so stupid decisions are made, avoidable consequences occur, and ultimately money is wasted, not saved.
We're seeing the same thing in the UK now where a huge government services contractor has gone bust.
In the UK it's policy to give government contracts to the lowest bidder. This makes perfect sense, if you're clueless and have no idea what you're doing.
In other countries it's policy to offer contracts to the second or third cheapest bidder, because this encourages bidders to put in estimates that mean they're more likely to finish the job without cutting corners, more likely to include realistic margins to keep their business afloat, and more likely to estimate unexpected costs realistically.
The UK's policy has completely failed to save money. It's going to cost hundreds of millions of pounds in direct costs, and probably more than a billion in consequential losses around the rest of the economy, to deal with the fallout from a wholly avoidable corporate drama.
To me this kind of thinking from management is, frankly, negligent. If you're working on a simple e-commerce site I suppose it's excusable, because in the end, the customer is going to pay the cost of poor design in lost business and that is their decision. For safety critical systems it is absolutely inexcusable, and the manager responsible should face criminal liability. No one should die because some middle manager was cutting corners to hit his quarterly targets.
And with every single modification, at least one person involved said, "this sucks, can we redesign it yet?", and was told, "no, we don't want to spend the money on that right now."
It's a catch-22 for developers in this position: you have to be able to justify every single major improvement from a position of cost-benefit to the business, but to do that adequately well enough to convince a client to spend the money requires up-front time and money expenditure that is hard to justify until after it's done.
If a system has critical safety components (using, or misusing the system could harm or kill people), all parts of it should be treated as such. This applies to hospital equipment, missile warning systems, cars, etc. Things like security and reliability rightly get a lot of attention, but UX is just as critical, as this event shows. There are plenty of case studies of poor UX on hospital equipment killing people[1]. When will people learn?
Yeah. I spent all of 1.5 months in a government contracting shop. I quit as fast as possible. Until something changes with project management and politics, that's just a space that is unlikely to produce good software.
If you have good software, you can more safely use the software without a full-time developer to do maintenance on it.
I have worked in shops that intentionally deliver bad software, and those that had higher ethical standards, and the only difference on the business side was that the bad software had higher budgets and more permanent employees (both more-permanent employees and more permanent-employees).
The problem is that all the people who can tell the difference between good and bad are employed by the contractors. The direct government employees are still counting SLoC and basing their UI requirements on the Excel spreadsheets that were directly copied from paper forms from the 1970s. I am not making this up.
The contract awards are still mostly based on who you know, rather than the quality of your past work, so given the choice between a slipshod initial implementation with a juicy back end in the form of continual maintenance and doing it right the first time and delivering something that never needs contractor support ever again, it isn't surprising that a lot of companies opt for the former even when most of their employees would prefer the latter.
Right... the time and effort to even [sort out the links into TEST and LIVE categories in adjacent boxes] as the minimal disambiguation would likely not be granted because budgets.
Plus also Hawaii is a deeply nepotistic state. You don't get a contract to build this kind of thing without knowing somebody who knows somebody. So bidding for this kind of work is much less about qualifications than it is about who you know. Way, way less than anywhere on the mainland. Way less. Whatever you're thinking along the lines of "Oh, no, this is just what it's like in government work," it's not even close to the situation in Hawaii.
> The problem is not knowing (or having someone who knows) how to design something better, it's treating good design as a priority. It rarely happens
What some designers don't realize is that, in reality, they are also in sales. If they can't effectively pitch their ideas to non-designer audiences, it doesn't matter how good their design is; it won't be used.
It's not a coincidence that Paul Rand would include a hefty proposal book along with his logos.
This is not graphic design, it is UX. Graphic design is a component of UX, sometimes, but not here necessarily. Simply reordering the list, and giving it a hierarchy makes it much easier to see what to do: https://twitter.com/iamlucamilan/status/953201356545974272
Google, Apple, and Twitter have hidden the actual launch alert behind three layers of sub menus. Facebook has three layers of sub menus too, but that's the drill and the actual launch is on the front page. Microsoft just has one of these buttons; what's hidden behind the three layers of sub menus is the toggle for whether it's a drill.
Facebook's drill button would be hidden in a huge list of other buttons, the order of which changes randomly every time the page is loaded, and the button would only show up 30% of the time based on an algorithm. Some users would never see the button based on what the algorithm decides, despite specifically adding the button to their list.
There is a bit of a tension here. For a missile alert, every second sooner the alert goes out, more lives are saved. Having an "are you sure?" request for confirmation is probably a bad tradeoff.
A better idea might be an "Oops, belay that!" next to it so it can be cancelled just as quickly.
"Good enough for government work". The lowest bidder, etc.
I have seen a large number of government web sites, and most of them, especially at the state level (I am talking about US states), have a horrible UX.
The thing that annoys me most about this situation is people complaining about how long it took to send out the "Sorry, there was no missile" message. Just as I expected these were hard-coded messages that were sent out by a button (link) click and not a free-form text area. There are many (good) reasons you want your program set up this way (let's ignore the rest of the terrible UI/UX for now).
It makes complete sense to me that given a spec of "make a button send this specific text" you would end up with this. So it's very easy to send the initial message but sending a custom follow up requires contacting someone with access to the underlying system who can send a custom message.
In the same way that I may give internal users the ability to fire off SPECIFIC (maybe with placeholders) push notifications with the press of a button but not custom messages. Sure my underlying infrastructure can handle custom messages but the UI for such a feature does not exist (or at least is not available to the person who might have the ability to send preset messages).
All in all this was terrible for the people in Hawaii, no doubt. That said I WANT my missile alert system to be easy to activate to give people the most time I can to prepare. The best fix for this (yes a new UI/UX would be best but let's be realistic on how much time/energy they are willing to spend on this) would be a scary looking alert confirmation dialog of "Are you sure you want to send this alert to the whole state?" and maybe have to type a confirmation string.
That's useless then. That's the sort of dialog that trains users to ignore it because it offers no useful information.
There's a post just a short distance down that twitter thread to a simple layout change that would have made it difficult to choose the wrong option by simply categorizing the options properly. But I'm betting that list is autogenerated on a hardcoded template from some goddamn database somewhere in this enterprise app that makes a simple layout change like that a nightmare to implement.
When someone is in a crisis situation, their actions have to be the SAME as a drill. Making someone do or type something different when things are already hectic is likely to send them into brain-lock.
The solution is something extra that pops up AFTER doing everything the same that says "Sending real alert in 10 ... 9 ... 8 ..." with a "Type: "Abort" in text field to stop".
If the operator brain freezes whether real or fake, the alert still goes out in 10 seconds. If the operator screwed up but doesn't brain-freeze, he can stop the situation.
That's pretty much the best you can do if you're trying to make your drill and your alarm almost identical in order to maximize crisis performance.
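A sketch of that countdown, assuming some transmit() function exists; the alert goes out when the timer fires unless the operator manages to type the abort word first:

    import threading

    def send_with_abort_window(message: str, transmit, seconds: int = 10) -> None:
        """Fire the alert after a countdown unless the operator types the abort word."""
        timer = threading.Timer(seconds, transmit, args=(message,))
        timer.start()
        typed = input(f'Sending real alert in {seconds} seconds. Type "Abort" to stop: ')
        if typed.strip().lower() == "abort" and timer.is_alive():
            timer.cancel()
            print("Alert aborted.")
        else:
            print("Alert was (or will be) sent.")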
I can imagine that when someone accidentally sends out a real alert, the "Oh F" moment will last longer than the 10 seconds you give the operator to type those 5 characters, in the correct order, with shaky fingers.
It has a similar list of alert "templates" as this one, but in addition it has a confirmation box with a radio button "send TEST only / SEND EAS LIVE", and if you select the "send live" option you must enter an authorization code.
It would be interesting to see what the rest of the user interface looks like, and whether the Hawaii one has something similar.
Having to type out YES or some phrase is a fairly common "Are you really sure you want to do this??" sort of thing. I gather they're also looking at requiring a second person's confirmation which, if practical and very unlikely to make it impossible to send any message, seems reasonable as well.
I think I have read some research in the healthcare field that such confirmations aren’t as helpful as you would think.
One reason of course is a combination of alarm fatigue and the fact that software has trained people to just always click yes when an annoying pop up appears.
Another is that people will think they have already clicked the correct button. So it’s less “Oh, a confirmation box has appeared, let me confirm that I pressed the correct button” and more “stupid computer, of course I clicked the right button, that’s what I want to do!” without realizing they did in fact accidentally click the wrong button.
Having to type "YES" or "ALERT" would be useless, I agree. But having to type: "SEND A REAL ALERT" or something like that would probably be reasonably useful. The way GitHub prompts me before deleting a repo is a good example of a useful prompt. The text I have to type is the name of the repo being deleted.
That's fair. Typing something out helps somewhat with the confirmation reflex problem. (Essentially hitting OK or typing Y as almost a single motion with the original selection.) But you're right that, if you think you've selected the right thing especially in a high pressure situation under time pressure, you may well not read through the warning but just take whatever action is required to confirm.
Do you want a button pushing clerk to be able to write any message to millions of people over an emergency network? There is so much potential for disaster in that idea. Options is much better. Having the employee type out the predefined message word for word would really be a good confirmation though.
> Do you want a button pushing clerk to be able to write any message to millions of people over an emergency network?
Yes, in situations where the message does not fit within a pre-defined list due to an unexpected event. Of course it should still have security mechanisms to ensure that the transmission is properly authorised.
The biggest threat is probably misuse by politicians rather than the poor clerk!
You're conflating two different issues. My original suggestion was a mechanism to prevent the clerk from accidentally sending the wrong message. Typing out the entirety of the message hopefully would force them to realize what they're sending.
Allowing a separate option for a clerk, maybe with confirmation from a supervisor or other clerk, to send an arbitrary message (with a separate approval chain) would be useful as well. In the Hawaii situation, not having that option likely delayed sending out the retraction, as they didn't have one built in.
When deleting a repository in Gitlab, you are required to type the name of the repository in a field. This forces you to change the response based on the particular action.
More like when you delete/rename a GitHub repo and they make you type out the repo name. I'm thinking of having to type something like "MISSILE INCOMING SEND ALERT" or similar.
Well if it was a proper test run (also to check procedures), the test too would have this extra confirmation.
Maybe changing the challenge to "HOLY FUCKING SHIT YES THERE REALLY IS A FUCKING NUKE COMING AT US SEND THE FUCKING MESSAGE!!!" in the real challenge would work better. Of course, when you have to push the button like that, I'm not sure if you could still type straight ;).
But yeah: how do you properly write a test for a potentially dangerous operation?
Have PINs required for certain critical operations and post those PINs in multiple places around the facility. Each PIN is different for each operation. That way the user has to look up the PIN before sending out the "you're going to die" message.
Part of the problem is that these need to be sent quickly. An inbound ballistic missile could give only several minutes of warning or less. Make it too difficult and you waste thirty seconds looking up a PIN. That could kill tens of thousands of people who couldn't get away from windows in time.
I misspoke, PIN would be the wrong term, it would be an activation code.
The codes would be per action not per person. So the "send missile threat" activation code would be, for example, 98790 for everyone and the "test send missile threat" would be 16289 for everyone. You'd have to look up what the code was for your message you wanted to send. All test message start with 1 while all real messages start with 9. They could be rotating so employees don't memorize them.
So to send an alarm by mistake someone would need to press the wrong button and also look up the wrong code and also ignore the meaning of the code prefix.
Just an example.
Though maybe just "this is not a drill" would be better/easier.
I think Y_Y was trying to convey that PIN expands to Personal Identification Number, which does not apply if no specific person can be identified.
In any case, your proposal suffers from the flaw that in a real emergency, nobody will have the extra seconds to spare to look up a code, because that would literally be thousands of people dying per lost second, so the emergency codes will all rotate from '99999' through '99999'. And an employee would actually be responsible for changing the codes from '99999' to '99999' every time they are required to rotate.
I don't recall the source, so I'm not sure of the accuracy, but I read that this is what happened, the employee clicked through the confirmation screen.
Does the "drill" link also have a confirmation screen? Do the two confirmation screen look equal or they are different?
If they are equal (for example a generic message like "Are you sure?") then it's almost like no confirmation, because people get trained to click it automatically.
I think that ideally each one must have a "nice" image about the alarm, that is very different from the other images.
Another possibility is to force the user to retype the message, so the user must read and understand the actual message to be send. (Remember to disallow cut and paste.) (Allow a small number of typos, perhaps a Levenshtein distance of 4 or 5, because the user will be probably nervous if there are some incoming missiles.)
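A rough sketch of that retype-the-message idea; the edit distance is the textbook dynamic-programming version, and blocking paste would have to happen in the UI layer:

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def confirm_by_retyping(message: str, max_typos: int = 5) -> bool:
        """The operator must retype the alert text; a few typos are tolerated."""
        typed = input(f"Retype the following message to send it:\n  {message}\n> ")
        return levenshtein(typed.strip(), message) <= max_typos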
Something that comes to mind is Skyrim's legendary skill confirmation. When you go to reset a skill it prompts you twice if you really want to do it and the second confirmation has the yes/no switched and the default on no so if you are just clicking through it's easy to not reset the skill. You have to read and know what you are doing. But also, yes, the drill shouldn't prompt so it stands out when it does prompt.
In this nuclear alert case it should be a big red button with a locked cover. Do we know how many people died in the panic due to traffic accidents, heart attacks, etcetera?
How would you test whether the big red button with the locked cover still works though? I mean that's the kinda thing you don't want to see broken (mechanical failure, rust, etc), and these systems can remain unchanged for decades. You have to be able to test the real thing somehow, which gives a chance for accidental triggering.
> How would you test whether the big red button with the locked cover still works though?
Simple: you don't put the button behind the cover, you put the media with the scary message behind it. Then you can test the whole transmission system with less chance for mistakes.
Ironically, this was exactly the solution to the last big EBS failure, the "codeword HATEFULNESS" incident in 1971. Back then an operator had to load a tape into the transmitter, but one day he loaded the wrong one because the real and test tapes were stored next to each other. After the incident, they moved the emergency tapes to a different location:
> …In the past three tapes, one for the test and two for actual emergencies, were hanging on three labeled hooks above the transmitter… In the future only the test tape will be left near the transmitter. The two emergency tapes [will be] sealed in clearly marked envelopes and placed inside a nearby cabinet.
Agreed - there really is no excuse, development-wise, for the UI being this poorly designed. 30 minutes of easy work could take this to a point where it would be unlikely somebody would make a mistake.
100% agree, there are a million little things that could have been done differently/better. My point really was to focus on the fact that the delay in follow-up was completely understandable (though also unfortunate) and that there was definitely some low-hanging fruit for improvement (your suggestions being chief among them). This is, however, the UI/UX you end up with when you produce a specific spec and farm it out to the lowest bidder.
According to the Twitter thread the option at the bottom of the list (False Alarm) was added as the fix to this problem. Making a reasonable UI was apparently not an option.
There's likely a CMS of some sort for adding messages, so adding a false alarm one probably just meant someone with admin access just needed to fill out a web form.
UI changes, even simple ones, would take a little longer.
You are more concerned about what you see as inappropriate criticism of one aspect of this UI design than you are by anything else in this debacle? I hope you will be able to broaden your horizon, if only because narrow perspectives were probably part of the problem, as they often are.
That's an interesting thing to be annoyed about given that a faster way to send out the false alarm notification is one of the mitigations they have put in place.
No, I'm annoyed that people seem to think people sat around doing jack shit for some 30+ minutes instead of sending out the false alarm message. I'm confident they were going as fast as they could, and I don't like the blaming of the operator for what was a failing in the designed software. I'm not even sure I blame the people that wrote the software, as I'm sure they were given a very specific spec of what it should do and they met the spec.
> The best fix for this (yes a new UI/UX would be best but let's be realistic on how much time/energy they are willing to spend on this) would be a scary looking alert confirmation dialog of "Are you sure you want to send this alert to the whole state?"
If I was in the operator's place I would probably send all the other messages as well so that people know the system didn't work and don't go commit crimes/suicides thinking the world is gonna end
I really don't get all the high levels of excuse in this thread. Think about the victims here: think about putting your (terrified) kid in a storm drain. Really think about it, let the feelings about it sink in.
The only correct solution is to fire the person involved, as the next guy will look closer. Yeah maybe not fair, but when working with landmines you have to pay attention - and it is politically much more feasible than redesigning the system.
However I also object to that system in the first place, since it doesn't matter where you hide if a nuke falls close to you, and you probably don't want to survive.
Thinking about it. Also quite possible that the system could only handle so many messages a second.
But regardless of excuses it should never take that long to send that message. You are going to have people with PTSD for years over this.
I wish the state would be sued by every single person in the state -- I am sure that both the guy who clicked the link and the UI would be gone presto. Probably some sovereign immunity stops that, though.
Yikes. Now consider this: this is the design for the missile warning system. It is possible, even likely, that some missile launch system has similar catastrophic UI.
How we've not wiped ourselves out by accident so far is a miracle.
I take it you've read Command and Control [1]? A great, if worrying read on exactly that subject. It's luck, not judgement that something really terrible hasn't happened yet.
Along the same lines: a whistleblower's account of problems in the UK's "Trident" nuclear defense program from 2015: https://wikileaks.org/trident-safety/
That was because, IIRC, Congress mandated a launch code be added to the system, so the military added one and set it to all 0's as a loophole, because their main concern was to make sure the system would work as intended. Not having a launch code when you need it would be a bad thing. None of this precluded the existing, strict, chain of command, which included the famous "biscuit" that the president needs to launch missiles. And I believe the whole thing also needs to be authenticated by the Sec. of Defense.
What is the provenance of this image? How do we know this is real? While UI/UX designers are falling all over themselves with their meme fest I didn't see any reference for the source of this image.
If you read the Washington Post account, it says the interface was a drop-down menu, which this picture is not. See:
In the cockpit of every jet fighter is a brightly painted
lever that, when pulled, fires a small rocket engine
underneath the pilot's seat, blowing the pilot, still in
his seat, out of the aircraft to parachute safely to
earth. Ejector seat levers can only be used once, and
their consequences are significant and irreversible.
Applications must have ejector seat levers so that users
can "occasionally" move persistent objects in the
interface, or dramatically (sometimes irreversibly)
alter the function or behavior of the application. The
one thing that must never happen is accidental
deployment of the ejector seat.
The interface design must assure that a user can never
inadvertently fire the ejector seat when all he wants to
do is make some minor adjustment to the program.
Also, jet pilots have years of training, whereas users are presented with "intuitive" interfaces where they have no reason not to push every button just to test them out.
This isn't the guy looking for it, this is the guy running the emergency alert system. It's likely to be a contractor who gets meagre pay, limited benefits, and no job security as the contract probably turns over to a new vendor every 2 or 3 years.
The "training" they have is, if anything, an MS word document with a bunch of screenshots or a flash animation showing each of the functions that the contract required the developer to build in.
How is the notification sent from the military to the civilian agency? Does the civilian agency validate the notification? Why is it necessary or preferable for a human to do this instead of a computer?
We probably won't ever know how many people died as a direct result of the Hawaii BMD mis-alert, but I'd be surprised if the number was non-zero (think in terms of car crashes, stress-aggravated cerebrovascular accidents, and so on).
Similarly, consider the warning signage deployed around high tension electrical substations, and the consequences of ignoring it or failing to understand its significance.
The point is, in these types of situation bad design can kill, and we need to design accordingly and unambiguously.
That guy's death was apparently complicated by a chain of failures: user error with the safety pin, user error with the firing handle, maintenance/construction error with some bolt, and several aspects of the M/B design that allowed all failures to happen at once. Overall, those seats have saved some 7500 pilots (search for the tie club) and maybe killed a few: similar odds and debate to airbag deployments.
>but I'd be surprised if the number was non-zero (think in terms of car crashes, stress-aggravated cerebrovascular accidents, and so on).
I'm assuming you meant you'd be surprised if that number was zero. It's awfully hard to, in good faith, directly associate a message with a death. I mean if you wanted to make it seem bigger than it was, you could associate any crash or heart related death that happened in that 38 minute (?) period as directly a result of the warning, but I don't think that would be in good faith.
> consider the warning signage deployed around high tension electrical substations, and the consequences of ignoring it or failing to understand its significance.
"not only will it kill you, it will hurt the whole time you are dying"
Formatting the quote like this makes it too wide to be readable on my phone without repeatedly scrolling right then left. Even in landscape mode the first line ends with the "b" of "brightly painted".
A lot of people format quotes with italics, as in this example:
I'd argue that very few computer applications need something that works like an ejector seat lever. In most software applications you have more than milliseconds or seconds to make decisions about big, dramatic (and potentially irreversible) changes, and thus they should have multiple levels of confirmation.
There's a lot we can learn from aviation: accident retrospectives, checklists, master caution alarms, crew resource management, human factors research, etc.
However, my knowledge of these things is a bit piecemeal and anecdotal. Does anyone know a good book on these things?
At least with airlines this seems to be largely abandoned now. My wife worked for a large middle eastern airline for a few years, and the crews were just assigned randomly. Well I guess there is a complicated algorithm that figures it out, but given the company has nearly 15,000 cabin crew it’s unlikely you’ll see someone you flew with again on another flight.
Also from the stories I heard people were promoted to managerial roles (on big flights there’s usually a senior for each class, and then a purser who reports to the captain) through seniority, not because they had any specific people skills that made them a good manager.
Hey, that text block is unreadable on mobile. Here it is, readable:
> In the cockpit of every jet fighter is a brightly painted lever that, when pulled, fires a small rocket engine underneath the pilot's seat, blowing the pilot, still in his seat, out of the aircraft to parachute safely to earth. Ejector seat levers can only be used once, and their consequences are significant and irreversible.
> Applications must have ejector seat levers so that users can "occasionally" move persistent objects in the interface, or dramatically (sometimes irreversibly) alter the function or behavior of the application. The one thing that must never happen is accidental deployment of the ejector seat.
> The interface design must assure that a user can never inadvertently fire the ejector seat when all he wants to do is make some minor adjustment to the program.
Several indents like that make code blocks, which respect original new line placements and are usually unreadable on mobile.
This is a great teaching moment for anyone who works in UX or observability, but it's worth keeping in mind that the FCC's Public Safety and Homeland Security Bureau (which operates the Emergency Alert System (EAS)) has an operating budget of around $17MM this year. The system itself was launched in 1997.
This is a legacy software (and hardware!) system with a relatively small budget and number of employees that needs to coordinate with other large organizations (FEMA, HI-EMA, NOAA, etc.). I think the most interesting lessons to learn from this have to do with long term software maintenance. I'm sure folks at the FCC/*EMA knew that this UI was janky but why did they not have the budget/power to fix it? How do we ensure that the public sector can benefit from the technical advances that most people on hacker news take for granted? Curious to hear from folks with experience in relevant parts of the government.
> I think the most interesting lessons to learn from this have to do with long term software maintenance.
Yes! And with re-engineering too! When you redevelop a system, you need to build up historical knowledge of its antecedents, and teach it to the current operators and maintainers. You shouldn't just start from scratch from the requirements.
In this case, there was an even older legacy system that had a similar incident in 1971, which they then mitigated. Apparently that lesson was lost.
> …In the past three tapes, one for the test and two for actual emergencies, were hanging on three labeled hooks above the transmitter… In the future only the test tape will be left near the transmitter. The two emergency tapes [will be] sealed in clearly marked envelopes and placed inside a nearby cabinet.
I once got contracted to integrate with a large government web app that was created with some sort of legacy SPA generator. The app was written in .NET and the js on the page would render the views from specs it would get over XHR from the .NET app. It positioned everything on the page absolutely and everything was styled with inline styles. The people who manage it were quoting outrageous numbers to do the simplest style changes because it was so hard to work in. Very few improvements ever got made because the money was just not there in the budget to pay these ridiculous quotes.
What I wonder about is whether those links generate HTTP GET-requests. Links do so by default.
If so, you'd just need an internal web spider or an overzealous browser prefetcher, and one day Hawaii might have a lot of false alerts going on...
GET requests are not supposed to have side effects, like alerting a whole state.
When a request has side effects, the HTTP verb should be something else, like POST.
Of course, it's possible there's a JavaScript handler behind those links that generates an HTTP POST request. The overall appearance of the page suggests otherwise, though.
I wonder how the confirmation page (if any) behaves and what it looks like...
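To make the GET/POST point concrete, here's a minimal sketch using Node's built-in http module. The paths and messages are invented; the point is simply that only the POST endpoint carries the side effect, so a link prefetcher can't set it off:

    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      if (req.method === "GET" && req.url === "/alerts") {
        // Safe: a crawler or prefetcher hitting this only reads the menu.
        res.end("List of available alert templates (no side effects).");
      } else if (req.method === "POST" && req.url === "/alerts/send") {
        // The side effect lives only behind POST (and would also need a CSRF
        // token plus a confirmation step in anything real).
        res.end("Alert dispatched.");
      } else if (req.url === "/alerts/send") {
        res.statusCode = 405; // reject GET on the side-effecting endpoint
        res.end("Method Not Allowed");
      } else {
        res.statusCode = 404;
        res.end("Not found");
      }
    });

    server.listen(8080);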
You're assuming a lot here. This might not even be an HTML page. It used to be fairly common in Windows UIs to have underlined blue clickable text in native applications.
Well, all I know is that it looks like an unstyled web page. We can only guess, indeed.
Oh, and their display mode is set incorrectly. It's clearly non-native resolution (blurry text, bilinear "zooming" performed by the display, while individual pixels are in sharp focus) and wrong aspect ratio (the font appears too wide).
Perhaps 1024x768 on a display with 1920x1080 native resolution. Or something similar.
I'm looking forward to news stories about the Great Hawaiian Baby Boom in a few months...
But seriously, as an occasional government contractor, this does not surprise me at all. This is par for the course on government software projects. Nobody gets fired for adding just one more menu item. Besides, good luck pushing a complete redesign through in a reasonable amount of time/money. With the bloated teams and processes they use, that would take years. Sadly, this is probably the "new" version anyway.
Consider the odds that the current tension has had a lot of people thinking they should test their emergency alert systems really soon, and we're hearing about each case where someone had a bad user-interface which they tried to compensate for with training rather than fixing it.
You are correct. There is no current reporting that claims these were set off intentionally. However, I'm curious if there could be a valid motive for both states to purposely set off these false alarms. It does seem like a pretty big coincidence that both states happen to get these false alarms in relatively short time frame. I have already seen conspiracy posts making claims about these alarms being purposely set off, but they never provide a motive. Just curious if anyone can think of a valid one.
- To allow authorities to listen for "chatter" from NK military deployed in Hawaii who might have been ready to carry out some action if an attack were launched, but who would not likely have been told when it would occur.
- To watch how NK responds internally to the idea that the US might be launching a counter-strike. Things like escape routes, comms, etc., could be observed via sigint.
- To create a "path" for the public to think about this kind of thing. It's been years since anyone in the US (public or media) has had to think about incoming missile attacks. The false alarm gives all the reporters an excuse to read up on the history, to issue warnings about lack of preparedness, etc. This means that in the event of an actual attack the information mechanisms will all work more smoothly. It's a dress rehearsal so that everyone is not stuck like a deer in the headlights if an attack occurs.
- The false alarm could have been triggered by someone on the payroll of NK. Note that the main goal of the military is to inspire fear, not to actually kill people.
When a nuke comes to destroy a big developed part of Hawai'i, there's a lot of stuff on the island you might want to get out of there. Maybe, systems are designed to rapidly move that stuff to safety, as soon as an alert triggers it. A Bulk Data Transfer of all sensitive, unique data would probably be hardcoded to start on this emergency alarm. How long would such a BDT take? Maybe somewhere around 38 minutes? Forget data transfers, what else might need to be moved out, that would take more than 30 minutes? This whole situation sounds silly, but what explanation sounds more probable?
I don't want to get into conspiracy type stuff, but I was saying to colleagues yesterday that the bright side of all this is that I imagine there are a lot of Hawaiians building their emergency survival kits right now.
I expect that we're seeing organizations dust off or set up new systems to warn their populations of incoming missiles as a result of the rising tensions with North Korea. There's an argument to be made that there could be, and are, political motivations in addition to the concerns about safety.
In any case, as these systems are powered up, updated, overhauled or even replaced, there's testing going on. And when there's testing, there's the opportunity for just this type of gaffe.
Is this design horrible? Yes. Is this out of the ordinary? Absolutely not. This is very common UX for systems designed 5+ years ago, and there are still systems designed today that look like this.
This is an alert system, now picture your surgeon or air traffic control using something like this...
If that’s the user interface, I can hardly imagine what’s underneath. Is this thing actually secure with properly designed two-factor authentication, etc. Or a weak password and a PHP script and some rubbish like that?…
That's probably why they released this cropped screenshot. It wouldn't surprise me if the site is accessible on the public web and secured with a weak password that must be changed weekly.
While we are bikeshedding this, let's give some thought to the possibility of sending a test message when a real one is called for - and in the tsunami warning system as well, as I certainly hope it has more real events than does the nuclear missile one.
I wouldn't call it a source of human error, but it's a huge amplifier for it. It adds massively to the cognitive overhead of performing and monitoring tasks.
And I would hope that the people (here) who have been calling for the employee who messed up to be drawn and quartered for "pressing the wrong button" might reconsider after seeing this.
And that's likely one of the only things that could be easily fixed, too. The ordering of those links likely isn't changeable without going into the guts of some upstream monster system (preventing those categorized redesigns that some have suggested). But a simple text label change would seem to be easier.
As a designer, two things have been going through my mind since the moment the story broke that the EMS system fired a statewide alert due to a wrongly clicked UI element:
1) How in the name of everything holy did this monstrosity of an interface for such an important system get approved in the first place?
2) Given a monstrosity of an interface: why are there, apparently, ZERO safeguards in place to make sure the human error rate at least has a HUGE chance of getting close to zero?
Holy shit. That is not even a "pull down menu" as previously reported.
Good to know. This means that in Hawaii, you should only think about taking action if the magic FALSE ALARM notice never appears.
If the FALSE ALARM notice doesn't appear within an hour, it might just be that the false alarm guy is on lunch break. If it doesn't appear in 8 hours, it might mean that a shift change occurred without proper hand-off. If it fails to appear in 24 hours, a nuclear attack might actually be imminent, and you should check other sources, to determine if the alarm was, in fact, real.
This protocol ensures that archaeologists will receive a valid record, omitting the false alarm notice in the rocks on the other side. Eventual consistency ensures that before the archaeological record is written, a false alarm notice will appear, if the alarm was indeed false. In all other cases, the alarm is effectively revealed to history to be a true alert, at some point in the future, as yet to be determined.
In the defence of those that made the UX, this is version 1. Or prototype. Or something they paid a ten year old with candies to make...
Or they don't get it and they are hopeless.
No, they do get it. The DRILL link is one row away from the non-DRILL link to avoid incorrect dispatching. In the meantime you can ponder whether you need to announce a tsunami.
More importantly - what about the reverse case? Presumably someone issuing a real ballistic missile warning is doing so while terrified and in a hurry. The consequences of pressing the "This is a drill" button in that case are a lot worse than this.
Actually from some of the interviews with Hawaiians on NPR, it seemed like people didn't know what to do when waiting for an incoming ballistic missile. So it seems like even in the case where there is a real threat, the usefulness of the message is questionable if people aren't trained about what to do when waiting for a missile.
Probably the worst UX design (considering the context and the stakes) I've ever seen.
How could multiple people accept this as the interface? Put it on a separate page! Have an explicit confirmation dialog, or 3 of them.
At the very least, maybe its own section with some padding...
An ICBM will land about 30 minutes after it launches. At those kinds of timescales, every delay you put in somebody's path to issue the warning, even a few seconds, is going to be a cost measured in lives.
How many people are you willing to kill to reduce the risk of a false alarm?
> An ICBM will land about 30 minutes after it launches. At those kinds of timescales, every delay you put in somebody's path to issue the warning, even a few seconds, is going to be a cost measured in lives.
Actually, considering that timescale and all the variables involved in detection and tracking a few seconds is less than the margin of error for predicting the warhead's arrival. While the goal should obviously be to get a (valid) warning to the public as quickly as practical, the "every second counts" mantra in this situation is overly dramatic. In fact, I think it is even counterproductive because it can lead to a "better safe than sorry" attitude that triggers unnecessary false alarms.
> How many people are you willing to kill to reduce the risk of a false alarm?
Depends. How many people might die in a false alarm? How many people might die if they stop trusting the alert system and fail to properly react to a true alarm?
That's true. But somewhere between one click or "Alexa, send the nuclear warning alert" and having multiple people confirm the message with some complex procedure, there's probably a reasonable medium. False alarms also have significant risk, including encouraging people to ignore a real alarm whether for tsunamis or missile attacks.
By that argument, this is worse because the person might have to hunt around in a pile of similar looking links for the one button, with an increased chance of getting it wrong and issuing the wrong alert.
Really, there is no way to justify this as in any way "intentionally designed".
They could at least make the non-test warnings a Red hyperlink or something, and organize them on a separate part of the page away from the test warnings... If that picture is legit, it's laughably bad. I have a feeling adding a confirmation dialog that takes an extra 200ms to click isn't gonna make much of a difference either when a thermonuclear warhead is heading your way...
In the current UI, the 'real alert' and the 'drill alert' have equal visual weighting, hence equal priority, and share about 75% of the same text.
I agree that the 'real alert' should not be impeded and they could at least put a red or black rectangle around the 'real alert' button to make it stand out from the others.
In my opinion, this raises the question: what was the population of Hawaii expected to do in response to this warning? From my reading of the articles, for the most part people did not know what to do. In their position, without a clear plan, I might just hug my partner and children that one last time.
I wonder if there is a clear plan, and if there's a better way to communicate it to the population. That seems like it should be more of a priority than this alert message, which seemed only to sow confusion.
There probably is a plan, but nobody wants to be taught it because of backlash to civil emergency plans.
The "Duck & Cover" stuff is lampooned for being rediculous, but it's actually what you should do. The idea that everyone will die from being incinerated in a nuclear blast is a misconception. Most people in America would survive the initial nuclear blast from a full Russian strike (and China, etc. have even smaller arsenals).
In the short term, act like you would in a Tornado. Basement or if you don't have one, an room with no windows.
Longer term--stay in side for as long as you can because fallout will disipate. The longer you don't go outside the better.
You think a few seconds is going to make a difference in lives with a nuclear strike? Will people be able to run to the nearest bomb shelter in that time?
The WaPo reports it was from an interface with a drop-down menu.
“Around 8:05 a.m., the Hawaii emergency employee initiated the internal test, according to a timeline released by the state. From a drop-down menu on a computer program, he saw two options: “Test missile alert” and “Missile alert.” He was supposed to choose the former; as much of the world now knows, he chose the latter, an initiation of a real-life missile alert.”
Rule of thumb with tech reporting is that the reporters always get the details wrong. Nothing shatters the illusion like watching a story where you know the actual facts of the case and seeing just how much they get wrong.
One problem with interfaces like this is that they represent an abdication of ownership. The situation probably doesn't warrant a dedicated designer, and the developer who made it probably thought, "I'm not a designer, that will do."
The number of times I've heard "Just add a button to do it" from "backend developers"...
The point is, software is there to serve a user, and if you can't envisage the user using the software, then you don't understand the requirements. It's not a design question.
Too much responsibility is deferred as 'design' when it's actually a fundamental part of solving the problem at hand.
Not that design isn't also important, but the perception that usability == design needs to change.
I remember when former threads about this came up, some people were certain that it was impossible that a mistake like this could come down to "pushing the wrong button," because surely there would be safeguards and sanity checks in place.
Ironically, moving the ballistic missile alert to an actual physical button separate from everything else would have been better than... this. This looks like it's literally just an HTML page on an internal network. Do they send the alert command through the query string or something?
For those of you who don't believe a single thing we're being told about this situation -- including this screenshot -- just know that you're not alone.
You don't need to question the narrative publicly like some of us do, but it's always good to keep an open mind about everything we're told, and sadly, that means considering the possibility that there have been more lies than truths told since Saturday.
Hopefully someday we'll get the real story, but I have a very hard time believing that this is it.
Are you suggesting that there may have been a real attack but it's being kept quiet (by idk, the Deep State, lizard people, round-earth conspiracists, etc?)
Not at all, and I take offense to what you are suggesting - that anyone who questions the 'official' story automatically believes in lizard people or whatnot.
I find it sad that people have been conditioned to lump anyone who questions such a messed up narrative in with nonsense like that.
Let's not act as if we've never been lied to before. We have reached a very low point in both credibility and transparency, and just because I have serious doubts about this story doesn't mean I'm a flat earther or whatever else you're insinuating. Let's be mature here.
You offer no reason to doubt the official story though. If you don't present us with new information you're either pointing out the obvious or advocating nonsense.
Clearly not. He is suggesting that the alert was not sent in error, but that requires no more than somebody clicking it, and further that he doesn't trust the government (I would call anything else imprudent).
I can see a bunch of reasons for clicking that link, from doing it just for fun, to wanting more money to fix the UI, to the more sinister motive of wanting to keep the populace scared so that they are more likely to support a war in the future.
It seems very likely to me this situation is exactly what it appears to be: a false alarm triggered by an old, underfunded, patch-upon-patch alert system. Initially very scary, then very embarrassing, everyone involved is trying to cover their asses, so you see a lot of spin.
The lack of an actual ballistic missile or missile defense response should really settle the question, IMO.
I'm not sure how to generalize this advice without becoming a paranoid conspiracy theorist. If we maintain an "open mind" about everything we are told and the "real story" comes out later how do we know it is real?
If we are able to identify the later story as real that means we have some other mechanism for determining truthfulness which could be applied to the current information.
Do you have any reason to believe we have been lied to? Can you point to any sources? Your vague statements suggest you know something but are unwilling to reveal it. That is a red flag for me.
This isn't the first time a scary EBS message was mistakenly sent out (there was a famous incident in 1971), and it probably won't be the last time. I see no reason to doubt that it was a UX failure.
To me, the biggest surprise is that this alert system actually worked. Whenever I set up sensing / monitoring systems for my web services, I often wonder if it really will work.
Holy hell... I was expecting some old ncurses mainframe design with shitty interface, but if it's just an href... seems like it'd be trivial to hack and cause havoc
A site is vulnerable to XSRF if it doesn't use tokens when performing critical operations; critical operations are (usually) performed using HTTP POST, which can be done via form submission... token generation and validation is done server-side...
You can perform a successful XSRF attack in a browser with javascript completely disabled.
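A bare-bones sketch of that server-side token generation and validation. The in-memory store and the names are hypothetical; a real system would tie tokens to authenticated sessions:

    import { randomBytes, timingSafeEqual } from "node:crypto";

    const sessionTokens = new Map<string, string>();

    // Issued when the form page is rendered; embedded as a hidden form field.
    function issueCsrfToken(sessionId: string): string {
      const token = randomBytes(32).toString("hex");
      sessionTokens.set(sessionId, token);
      return token;
    }

    // Checked on every state-changing POST before the critical operation runs.
    function validateCsrfToken(sessionId: string, submitted: string): boolean {
      const expected = sessionTokens.get(sessionId);
      if (!expected) return false;
      const a = Buffer.from(submitted);
      const b = Buffer.from(expected);
      if (a.length !== b.length) return false;
      return timingSafeEqual(a, b); // constant-time comparison
    }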
Wait, so what am I thinking of? The phenomenon where, if you can get a website to display output of your choosing in a non-sanitized way, you can abuse that to cause code to be executed by the user.
I've worked at quite a few companies in my time, and almost all of them have complete garbage for their internal tools. It's always perceived as "something we need, but it doesn't matter what it looks like" -- but this shows exactly why it _does_ matter. I actually prefer working on internal tools if given the opportunity, because I get to talk directly with most of, if not all, of the users and get real, continuous feedback.
This is just my observation, but this is what you get when software is used in such critical situations and yet the entire software engineering/development field is completely non-standardized and practically the wild west. Again, this is my opinion, which I know a lot of people disagree with, but we're going to keep coming up against this kind of stuff until software developers become licensed and accountable for their designs.
YES, once again we learn the hard way that this is the only "engineering" field whose practitioners are not licensed, certified, or members of a professional organization with safe harbor protections, and so on.
Fix proposal subthread. Can we harness HN collective intelligence to vote up some good suggestions?
Instructions:
* Reply with your suggested fix to this comment. (That way we can vote equally between suggestions, instead of having them being scattered throughout the thread).
* First line is simple one-line summary, optionally followed by a blank line then some exposition.
Example:
----------
Simple fix: Separate links into 2 color-coded sections, TEST and ACTIVE
Put each set of links into separate divs, with different background colors or striped backgrounds.
Simple fix: Separate links into 2 color-coded sections, TEST and ACTIVE
Put each set of links into separate divs, with different background colors or striped backgrounds. As is, these different links are mixed up together, making them accident-prone. Undoubtedly a mini development boondoggle will result from this incident; in the meantime the above fix is easy and satisfies the real requirement.
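If the page really is plain HTML, the suggested fix could be sketched roughly like this, with purely illustrative data and styling rather than anything from the real system:

    interface AlertLink { label: string; href: string; isDrill: boolean }

    // Render DRILL and LIVE links into two visually distinct, labeled sections.
    function renderAlertMenu(links: AlertLink[]): string {
      const section = (title: string, color: string, items: AlertLink[]) => `
        <div style="border: 4px solid ${color}; padding: 16px; margin: 12px 0;">
          <h2 style="color: ${color};">${title}</h2>
          <ul>${items.map((l) => `<li><a href="${l.href}">${l.label}</a></li>`).join("")}</ul>
        </div>`;

      return (
        section("TEST / DRILL", "green", links.filter((l) => l.isDrill)) +
        section("LIVE ALERTS - REAL EVENTS ONLY", "red", links.filter((l) => !l.isDrill))
      );
    }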
Lots of money to be made in "Enterprise" software, and even bigger amount of impact to be made by having smart developers working in the space.
I'm not really sure where the problem lies: partly economics, and partly a lack of specialized knowledge. In this case that knowledge might be laws around AMBER alerts, how to respond to an RFP for such a system, or even just not knowing that such systems need to be built.
There are plenty of possible clarity improvements, but I think anyone who has accidentally sent an email should understand what this really needs: a countdown timer in the 10-30 second range with a clear, bold description of what it will do, and a cancel button. There should be no way to skip the timer. I'd be amazed if no one said "wait, shit" within 10 seconds of clicking the wrong link.
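A quick sketch of that countdown-with-cancel idea; the names are hypothetical, and the delay and the dispatch callback would come from the real system:

    // The alert fires only after a short, unskippable delay; one click aborts it.
    function scheduleAlertWithUndo(
      sendAlert: () => void,
      delaySeconds = 20
    ): { cancel: () => void } {
      let cancelled = false;

      const timer = setTimeout(() => {
        if (!cancelled) sendAlert(); // the "wait, shit" window has passed
      }, delaySeconds * 1000);

      return {
        cancel: () => {
          cancelled = true;
          clearTimeout(timer);
          console.log("Alert cancelled before dispatch.");
        },
      };
    }

    // Usage (dispatchStatewideAlert is a made-up placeholder):
    //   const pending = scheduleAlertWithUndo(() => dispatchStatewideAlert());
    //   A prominent CANCEL button would call pending.cancel() during the countdown.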
This is crazy. I used to work on the engineering side of this on phones (CMAS). The number of requirements from the government and the carriers for the UI/UX was ridiculous (how the messages are precisely displayed to the users, requirements regarding data scenarios). To give an example, it took about 300 hours to verify each release.
How is it possible that they don't have standards while the OEMs do?
I think I'm the only person who saw it and thought it was fine. It's a list of clearly labeled text links, very clear very simple. There are some changes that could be made to separate drills/tests from real events (or a different naming convention), but I'm surprised to see many people call a single page text list "terrible UI".
There are some great comments above on this. Two that stand out:
fredley 36 minutes ago
If a system has critical safety components (using, or misusing the system could harm or kill people), all parts of it should be treated as such. This applies to hospital equipment, missile warning systems, cars, etc. Things like security and reliability rightly get a lot of attention, but UX is just as critical, as this event shows. There are plenty of case studies of poor UX on hospital equipment killing people[1]. When will people learn? https://medium.com/tragic-design/how-bad-ux-killed-jenny-ef9...
fredley 33 minutes ago
This is not graphic design, it is UX. Graphic design is a component of UX, sometimes, but not here necessarily. Simply reordering the list, and giving it a hierarchy makes it much easier to see what to do: https://twitter.com/iamlucamilan/status/953201356545974272
It's all about context. It might be fine if that's a list of your website bookmarks and if you click the wrong one you just click a different one. But when the consequences of clicking the wrong one are such high magnitude, it's bad design.
And I don't even think it's up for debate. If the design of the page just made someone accidentally alert a whole state of an incoming missile, then it's bad.
I think someone will always be able to click the wrong link, especially if given the wrong information or confusing information "Give the message for PACOM CDW, ... Oh sorry you did the DRILL PACOM, right?" The larger problem was not having a quick system in place to redact an erroneous message.
But I've read a lot of suggestions like spreading items across multiple pages, adding passwords, or big warning colors and lines around the live options that are not just obfuscating/distracting/annoying but are bound to cause errors and open the door for even worse design.
And remember that when you need to click it, you will be 100% entirely terrified by the incoming missile. Parsing through the list will itself be challenging under those circumstances.
The thing is is that those two items look very very similar and and our eyes routinely scan or skip over words in in text. For example, how many repetetitions were there just in in this paragraph I just wrote?
I remember while rooting my Samsung phone, I had to "reset all settings". Settings, to me, means configuration options. Turns out that for the designer, settings meant _all_ data! Lost some family photos that were not backed up.
What I want to know is whether there is some sort of confirmation window/popup as step 2. That would make a big difference. Was it just a plain wrong click, or something more?
Btw: any idea what this application is? It looks like an HTML page?
They should have added the captchas with the storefronts. That alert would never get triggered unless it were truly needed. It would be 15 minutes late every time, though.
I kind of agree that it's ugly and could be way better, but I don't see how you could make that confusion. One says drill, the other doesn't. The guy should just get fired.
> Surely the alert should be sent automatically when the anti-missile system is engaged?
This is a civilian system. Detection and anti-ballistic missile systems are military, specifically NORAD and the MDA. I agree that NORAD should have an API, though I suspect integrating it with civil defense systems will result in more such mistakes in the short run.
What I find interesting is not this screenshot, but the fact that most commenters here don't even question if we are being told the truth as to what happened. Yet here we are talking about UI/UX.
And everyone required to sit through the mandatory class will be zoned out and not even absorb the facts that aren't in the 10 question quiz at the end of the section.
Alternatively, a foreign adversary is taking its hacking to the next level, and this is damage control pretending that it's not. Advanced warships that "accidentally" run into other ships. Paralyzing remote islands with missile warnings. This is like the beginning of a Tom Clancy novel, and our cool, calm, collected President with his military and CIA experience will see us through.