>We create rules enforcing mandatory sleep requirements stupidly believing that we can eliminate the potential for a user of the system to be drowsy while at the controls.
>stupidly believing
Dick move by the author to reveal his level of ignorance of USN operational tempo around the McCain collision until the last few paragraphs. There were lol 4 fucking surface ship collisions and a grounding in WESTPAC in 2017 because sailors were run ragged, leading to an operational pause. UX wasn't the primary problem. Sailors weren't "drowsy", they were sleep deprived, hopped up on stimulants etc. due to manning shortages and long deployments, and likely lax training (due to shortages), which caused the USS Connecticut accident a few years later. I'm sure you can improve UI for an audience subsisting on 3-5 hours of sleep, but maybe the more pressing thing to try is to get them more sleep. IIRC there was a study on Navy sleep hygiene, and like 100% of sailors in the bottom quartile experienced bewilderment/confusion.
Having spent 10 years as a Navy Nuclear Propulsion Operator on two different submarines operating the reactor plant, I can tell you I am not surprised by incidents like these. In order of importance:
1) Lack of sleep (it wasn't unusual to operate on no sleep over a 36-48 hour period)
2) Poor or insufficient training. Just because you are "trained or qualified" doesn't mean you know how to operate.
3) Poor or missing procedures (let's call it UI/UX for today's lingo). Many procedures were vague, and drawings were hard to understand. The Navy has a feedback system for this, but it often takes months/years to resolve.
Having said all that, the issues pointed out in the comments and the article, including my comments, have existed for decades in the Navy. At some point, it comes down to the command's leadership and superiors to ensure these issues don't happen. A poorly designed checkbox is the last thing that caused this issue.
> Many procedures were vague, and drawings were hard to understand. The Navy has a feedback system for this, but it often takes months/years to resolve.
When I worked for a DoD contractor I worked on a system that was designed to tighten one such feedback loop. The publicly-available copy regarding this unclassified effort [1] says that it was
> [a] framework for an end-to-end Change Request (CR) workflow system that will improve turn-around time and speed to the fleet. We are leading the innovation of a paperless cockpit through the design and development of an eFC mobile application that will provide responsive, reliable information for our aircrews on mobile devices at the touch of a button.
I thought that it was a pretty novel idea - it was certainly the most technically-progressive project I worked on in defense contracting, by a country mile. When I attended a program picnic at the Captain's house, however, I found no shortage of people who were skeptical of what we were building. When I pressed them for reasons, it basically amounted to "I learned what we have years ago and I don't want to change". Institutional rot is very real.
That said, there's something to be said about being resistant to change; "if it ain't broke, don't fix it". I don't know what "eFC" means, but "mobile application" implies they would need a new device with everything that entails.
As a career military aviator (about half and half flying and non-flying air ops jobs), I can say there is definitely a surprising number of Luddites in green flight suits. But there are also legit security concerns with bringing modern mobile devices into a cockpit, for the same reasons as the concerns around bringing one into a SCIF.
Having spent ~10 years each active and reserve in Naval Aviation, it still boggles my mind that the rest of the Fleet hasn't understood the concept of crew rest yet, or is at least only now beginning to understand it 60-70 years later.
We adopted it in the mid-20th century post-WWII because we were literally killing people for dumb reasons. I don't know if it's the well-known aviator hate among a significant minority of blackshoes that's the roadblock, or what.
There were also issues surrounding group dynamics and trust. A constant parade of ragged junior officers arriving and leaving leads directly to breakdowns in communication. Teams (driving a ship is a team effort) require stability.
The military doesn't really have the autonomy to reduce their own mission set. The major missions in terms of maintaining certain capabilities or protecting against certain adversaries are assigned by Congress and the President. The military then has to figure out how to execute within a budget that, while enormous in absolute terms, is still inadequate for everything they're tasked to do. There is no political will to fix this problem.
> A poorly designed checkbox is the last thing that caused this issue.
Indeed. The checkbox, the lack of sleep, the insufficient training and the cryptic instructions are all symptoms.
Lack of sleep is one thing I would think about deliberately employing to get a notion of what the safe margins of individual crew members are. For instance, I work very well under stress, but fail early on sleep deprivation.
E: since many are quoting the author's preface about not knowing much, but doing his own research
My beef is: given the disclaimer, I read the piece to the end thinking the author made a good faith effort at research, only to see the author characterize, near the conclusion, sailors'/operators' severe lack of sleep hygiene as "drowsiness" which can be designed around, and expecting enforcement of better operational conditions as "stupid", which may feel true in a military context. But 7th Fleet went from 4 accidents in one year to none after a brief operational pause of a month; I don't think that result is because USN bureaucracy figured out a way to improve UX/UI on Arleigh Burke software. Also note that the other 6 fleets, with... more relaxed tasking relative to WESTPAC, weren't suffering from the same level of dysfunction. UX/UI is important, yes, but sometimes operations are run so badly that you should prioritize improving the way they're run instead of pretending it can be band-aided over with a better checkbox.
The Navy has the stupidest possible ideas of sleep hygiene, boiling down to "it sucked for me so it should suck for the next guy, too". I had friends in departments who worked 6 hours on, 6 hours off, 6 hours on, 6 hours off repeatedly. In that pattern you never get a solid 8 hours of uninterrupted sleep, ever. Yeah, it's possible that emergencies will arise where you have to work 24, 48, or more hours straight without relief because the ship is under attack, or on fire, or barely afloat. That's not a reason to try to kill sailors with sleep deprivation the rest of the time.
Six and six is fine, if you actually do it. In reality, junior officers do six on watch, six on a computer doing paperwork, six back on watch, a couple hours eating/bathing/cleaning, then perhaps get five of actual sleep each day.
Nor is doing 8-8-8 or 12-12 shifts. There are only so many people on a boat and they all need to be working about 50% of the time, plus all-hands stuff where nobody sleeps.
The risk of that logic is that 1-1 shifts would have 50% of the ship working at a time, although most of them would die after a week or so.
1-1 is clearly absurd. So is 2-2. And 3-3. I'm convinced 6-6 is also biologically unsustainable for most people, although 6 hours is long enough that 1) some people are perfectly fine with it, and 2) most other people can hack it for at least a while before incipient mental breakdown.
A full night's rest isn't 8 hours, and the sleep doesn't have to be accumulated in one block. This is a myth that persisted after the invention of street lighting.
For some people. Sleep monitors confirm that I consistently sleep from 7 to 8 hours straight through per night unless something external disturbs me. That's my own natural sleep pattern. I cannot function on fewer contiguous hours long-term. I know. I've been through it several times in life and it was universally horrible.
Some people function best with several shorter sleep periods. Other people function best with 1 longer one. The former can work just fine with a 12-12 schedule. The latter cannot maintain performance with a 6-6 schedule.
It's my understanding that our idea of an eight-hour continuous sleep window may be an artifact of the industrial revolution and electrification. Meaning it may not be ideal for us, but just the assumption we've become accustomed to in modern times. There's a lot of old writing (Ben Franklin etc.) that refers to a "second sleep", i.e., they sleep for a few hours, get up and do some work, and then go back to sleep for a few hours more. If anyone has more concrete information on this, I'd be interested.
That's a valid sleep mode for some people. My wife wakes up in the middle of the night, reads for a bit, then goes back to sleep. That works for her. I absolutely cannot do this. When I fall asleep, I stay asleep for 7-8 hours and only then wake up. In the times when I've been unable to get that uninterrupted sleep, it has gone badly.
In the case of the Navy, I can guarantee that 6+6 shifts didn't come about because of advanced sleep research, but because "it was good enough for me and now it's good enough for them".
I don't know its origin, but I suspect the 6+6 was probably a result of having the ability to split a 24-hour day of operations into even crews (not much different from the civilian 8-hour shift allowing factory work to be split into 3 shifts). How they rationalize it after the fact is a different story.
They need to be well-rested in case those emergency situations occur; to me that 6 hour schedule feels like it's intended to keep people in a constant state of being stressed / tired. Weird sadism in the military. That said, that schedule looks like they only have the people (or the facilities) to run two shifts, instead of three (8 hours on, 16 hours off) or more shifts.
My department did basically 12+12. The work day was long (with meal breaks), but then you had a few hours to hang out, write letters, read, and sleep for 8 hours. I’d infinitely rather do 12+12 than 6+6.
> Dick move by the author to reveal his level of ignorance of USN operational tempo around the McCain collision until the last few paragraphs. There were lol 4 fucking surface ship collisions and a grounding in WESTPAC in 2017 because sailors were run ragged
I read it as "mandatory sleep requirements don't actually mean people don't show up to a shift without enough sleep".
Basically acknowledging the difference between how the world is on paper and how the world is in reality. Even if there's rules about people getting enough sleep, designing a system that assumes everyone who works it will get enough will get people killed.
I didn't read it necessarily like that. It can also mean that even with fully rested sailors, the same confusion can still happen again, because the interface is inherently confusing.
In a sudden life-and-death situation combined with information overload, a bad interface can be what tips the scale into disaster.
Certainly true but I think he is saying that the author should have indicated his limited knowledge of the context of the collision early on in the article.
Yes, I'm being overly uncharitable, but it takes very inept research to study the McCain accident and not be exposed to the other 3 accidents, or to be generally aware of the state of 7th Fleet / the condition of sailors from any of the reports. 4 major accidents do not happen in that specific fleet (out of 7) because of UX/UI, when the other 6 fleets operate the same ships. Extra side eye at the commentary/conclusion reducing a cripplingly bad culture around sleep to "drowsiness", because elevating UX/UI / blaming the checkbox works less well when operators are mentally not there. You don't UX/UI truckers so they can drive safely on a few hours of sleep; you regulate how long they can drive to make sure they get enough sleep.
> UX/UI truckers so they can drive safely on a few hours of sleep
I think truckers have significant UX/UI AND regulation on how long they can drive.
The author does a good job of highlighting the UX/UI issues that haven't been analysed enough anywhere else, and also raises the other issues which have been reported on.
> "first and only public source of real design criticism"
> "Add inexperience, insufficient training, and lack of sleep to the situation and you have a recipe for disaster"
In the 3rd paragraph (of which the preceding 2 were very short) ...
"Before going any further, I want to make it clear that I am just a civilian piecing together this story from whatever information I can glean from the internet."
Having read some great articles on the spate of pacific fleet collisions contemporaneous with the McCain incident, this is when I stopped reading this pointless article.
And I came to the comment page to say: "I really want to memorize all those hilarious design problems" - like, e.g., the physical steering wheel not working and then suddenly turning itself in exactly the wrong direction... You know, for https://xkcd.com/742/ purposes.
But then I read a few comments on the Navy's systemic sleep deprivation...
And how the Air Force solved this 60-70 years ago.
And how some people say the UX problem is not a problem because sleep is a problem (and there is contemporaneous evidence of other problems), and obviously two or more of them can't possibly compound simultaneously! /s
Just want to add: if there is such a basic problem with working conditions, then there are obviously many more human (and not only human) problems in that navy. It seems quite identical to what Russia has been demonstrating in recent years, and in case of conflict the results will be identical: a lot of lost American lives for stupid reasons. Just pointing out the obvious.
I ran the nuclear power plant on the USS La Jolla (SSN-701) for five years. Per the Engineering Department Organization Manual, you're not allowed to do any operations in the power plant if you haven't gotten a certain number of hours of sleep in the past however many hours. This is the most laughably ignored rule on the ship. It's very normal to have people operating the power plant after being awake for 40 hours straight. (I still remember getting yelled at for falling asleep during training because I'd been awake for >24 hours, and the training was about the importance of being well rested and how the Department of Transportation developed its sleep requirements by studying railroad operations. Maximum irony.)
Naval Reactors, the organization that supervises the whole navy nuclear program, knows this is the norm and helps hide it. I remember, during the briefing prior to every reactor startup, the engineering officer would say loud and proud "If you don't think you can perform your duties for whatever reason, any reason at all, if you're too tired, raise your hand."
One time I had been awake for more than a day straight and I was suicidal. I had been scheduled for 12 hours on watch, 6 hours off watch for several days, so I said fuck it, and I raised my hand. "Per the EDOM, I am not allowed to stand watch because I've been awake for far too long." The engineer recommended me for Non-Judicial Punishment for 1) not being ready to stand watch and 2) having stood watch previously already too sleep deprived to stand watch. I wasn't actually punished because I threatened to call the DoD Inspector General. The whole system is rotten as fuck.
> I ran the nuclear power plant on the USS La Jolla (SSN-701) for five years. Per the Engineering Department Organization Manual, you're not allowed to do any operations in the power plant if you haven't gotten a certain number of hours of sleep in the past however many hours. This is the most laughably ignored rule on the ship. It's very normal to have people operating the power plant after being awake for 40 hours straight.
And this is why aviators mock nukes. Because we actually understand what gets people killed and avoid it.
This is also very important for lorry drivers, to the extent that there's all sorts of tracking and enforcement for how long they're driving. But in this case it sounds like poor staff management: this isn't a convenience store running on zero-hour contracts, they should have a shift plan in place that provides adequate cover before even leaving port.
Bingo. The problem is the US Navy has large and increasing at-sea staffing/billet shortages, but at the same time has to (or insists on) doing more missions with fewer sailors. You can build a better checkbox, but can you build a good enough checkbox to allow a lorry driver to drive 20 hours a day?
Imagine there's no such concept as a ganged/unganged state. You move the middle control (which has the larger area, since it's the most common tool) to move them together, and the side sliders if you want to control thrust individually.
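That proposal can be sketched in a few lines. This is purely my own illustration (the class and method names are invented, not anything from the ship's actual software): the large middle control always moves both shafts, and the side controls trim one shaft each, so there is no ganged/unganged mode to track or transfer between stations.

```python
class ThrottlePair:
    """Two throttles with no hidden gang mode: the middle control always
    moves both, the side controls always move one."""

    def __init__(self):
        self.port = 0.0
        self.stbd = 0.0

    def move_together(self, delta):
        """The large middle control: always moves both shafts."""
        self.port += delta
        self.stbd += delta

    def move_port(self, delta):
        """Side control: adjusts the port shaft only."""
        self.port += delta

    def move_stbd(self, delta):
        """Side control: adjusts the starboard shaft only."""
        self.stbd += delta
```

The point of the design is that the operator never has to remember which mode the console is in: what each control does is fixed, like physical linked levers.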
The shift (watch) plan is set by the ship's officers but they have to work with the personnel that they're assigned. There is a constant shortage, especially for experienced sailors on surface vessels. But they have to put to sea anyway to accomplish the mission, which causes personnel burnout due to overwork thus worsening the shortage in a vicious cycle.
The only real solutions would be for Congress to either significantly increase personnel funding, or trim back the mission set to make it sustainable at current personnel levels. There is no political support for either solution and so the problem will persist.
If they are putting people on shift who are too tired to competently do their job, I assume many of those jobs aren't actually important. Some of the jobs are important (and when done wrong lead to these kinds of incidents), but given how widespread sleep deprivation is from people's comments here, clearly a lot of the jobs can be done very poorly without affecting operations.
That sounds like a management issue. Congress doesn't manage how ships run. The Navy makes all those choices itself.
Unfortunately, what are you supposed to do when you have 80% of the sailors you're supposed to have, and that number was already 20% less than what you needed to actually fill a watch bill?
On the other hand, in a time of war, it's likely the exact same conditions of sleep depravation and poor training could exist. UX is the element that could be fixed permanently. The Navy absolutely should fix all of the issues in the NTSB report, and all the UX issues discussed in this blog post.
Something similar came up when Ukraine sank the Moskva. Some speculated that the Russian radar systems required an operator to focus on a screen for hours at a time, while most western systems would notify the operator when something unexpected happened. That design pretty much ensured that a soldier would fail during wartime.
It's not that the US Navy should operate by forcing sailors to forgo sleep, but their systems should be operable by someone who has had very little sleep during actual combat. Touch screens seem counter to that. Two physical throttles, which can be moved either individually or together, would be a much clearer indication of the current mode. Humans can feel extremely small misalignments, but to line up two lines on a monitor we need to hold up a piece of paper to check whether they are 100% aligned.
Yes I’ve heard from multiple sources that the Navy’s training is not what it once was. For officers, much of their training is done on the ship via self study in their spare time instead of in a classroom.
I think that in this case probably the UI could have been better, but it was functional, and with a well trained helmsman, it shouldn’t have presented a safety issue.
> For officers, much of their training is done on the ship via self study in their spare time instead of in a classroom
The problem here isn't "instead of in a classroom", it's "self study in their spare time" instead of "learning by doing the job under the supervision of more experienced people". The way I learned to drive ships was by driving ships under the supervision of more experienced ship drivers. Sure, there was some classroom preparation before that, but the biggest value add was the supervised hands-on time.
What's your source on this? Beyond age-based chauvinism/get-off-my-lawn type tropes, what about the fitness of a generation is different now from past decades?
Is that American youth on average, or specifically American youth in active service? I mean don't get me wrong, I believe you on both counts, but it's an important distinction to make.
The thing that bothers me most about the navy sleep deficit is that this is a peacetime sleep deficit. If sailors are at the ragged edge of capacity in peacetime, how are we possibly setting them up to function in a real war, when things will be 3x more chaotic and stressful?
Note: Japanese ships do not get into collisions. Why? Their trains run on time.
You have 4 control surfaces? And a steering command? Why not have all 5 people looking at each other and communicating? Add two people, one looking in each direction, to know where the ship is GOING?
That's what happens when you fix the organizational attitude that it's supposed to be hard...
You are, again, talking about a massive organization that ran an aircraft carrier into a sandbar, using Windows.
"In 1983, just such a moment was jarringly interrupted when the USS Enterprise ran aground a mere 1,000 yards out from the shoreline. In a photograph released by the Naval Institute, the massive carrier tilts slightly to its side, with its crew positioned on the deck."
There is a chapter in “Turn the Ship Around”[1] where the author mentions a case of a navy officer (I think) who practically abandoned his post because all the disorganized ways and routines left him actually sleep deprived.
There are other tidbits of information in the book, of course from one point of view, that are mentioned in other comments here that are still happening and it’s baffling to me, not because I believe 100% of the book, but because you’d think at some point they would get fixed or partially mitigated with a migration plan.
I hate that such malpractices are still acceptable, and even encouraged, by an institution as important as the Navy.
The normalization of deviance around lack of sleep and experience is definitely the number 1 issue, but come on. That design for maneuvering the ship was fucked up at too many levels.
> Thrust transfer during which different people can operate on each propeller.
> Thrust transfer that automatically disables ganging.
> The steering rudder that gets reset when transferred.
> The steering taking the current state of the steering wheel when transferred.
> The manual mode that became the default mode, allowing the transfer of controls to empty stations.
The problem with touchscreens is not the touchscreen, it's the abstraction that makes possible things that would make no sense with real controls.
Why would, at any point, the current position of controls be different depending on the station you're looking at?
A factor to note: Proportion of humans who know how to use touch UI but not the other UI.
---
I wonder if there exist systems that measure response times, error presses, etc. consistently over time for different mediums. There is a huge amount of underlying behaviour to model, from the fact that one mistake may cause more risk in different types of ship-environment-task scenarios, to the fact that certain variables probably need mapping to others, which complicates the analysis a little bit.
---
Empirical data from use is actually not sufficient for testing. For the designs, one essentially wants to subject them to high voltages, acid, sea water, high pressures, coffee, et cetera, in extreme amounts, systematically, in a lab.
For complex systems like ships, it may be reasonable to even simulate what'd happen if your good component was in a ship and someone put a shit replacement part there.
---
Extreme not-seen-in-the-field testing includes extremes of the human condition. Labeling buttons with icons instead of text makes things more understandable to those who don't speak English, but what if your crew spends a year and a half underwater waiting for a nuclear launch command, staring at those icons? One should design interfaces such that even extreme delusions, depressive tendencies, or anxiety will not reduce a crew member's ability to do their job. Or if someone loses a hand, they will still be able to work.
You're putting a lot of weight on the distinction between drowsy and sleep deprived in a casual comment at the end. How many people really know the distinction and use the correct word in the correct situation? He's clearly looking at it from a UI design perspective and other issues like you mentioned aren't the point of the article.
Accidents have multiple causes. Somebody else might blame sleep deprivation and stimulants then a smart-ass would complain that they just waved away "confusing controls" without understanding how they worked.
I mean, the real thrust here is that the controls were extremely confusing even when you weren't sleep deprived, so sleep hygiene is never going to be a fix for something that is frankly bewildering even when fully rested.
I mean it only takes five sentences for the author to make it clear he has absolutely no idea what he's talking about:
> Before going any further, I want to make it clear that I am just a civilian piecing together this story from whatever information I can glean from the internet.
The F1166 standard permits up to 250ms latency for touch screen response. That is ridiculously high. It practically guarantees people will think it's not working and tap again, undoing any toggle function they might have activated. They won't necessarily notice their mistake.
I once accidentally ran a washing machine at 90C temperature because of high-latency touch controls. I instinctively double-pressed the touch surface to reduce temperature because I thought the first touch hadn't registered, causing it to wrap around back to maximum. It did some minor damage to the clothes before I noticed. And that was with more like 200ms latency, not 250ms.
Latency on touchscreens is more important than with physical controls because there's no tactile feedback. Even 10ms is obviously imperfect for drag controls. Microsoft Research published a demonstration video:
In electrical engineering there is the concept of "bounce": when I/O (switches, sensors, whatever) is close to a state change, it can bounce back and forth between on and off. When writing controls for them you have to debounce, that is wait for the signal to switch from off to on for say 10ms before you act on the transition.
When dealing with users, and slow processes (that you can't make faster for some reason), it is often necessary to "debounce" user input. Whether that is graying out a button, displaying a loading popup, or something else is entirely situation dependent. Whatever you choose, you should be preventing people from annoyance clicking their way into trouble.
The amount of software that doesn't seem to get that is rather surprising.
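One minimal way to "debounce" a UI action when the backend is slow is to make the handler refuse re-entry while an operation is pending; this is the programmatic equivalent of graying the button out. A sketch (my own illustration, not any particular framework's API):

```python
class GuardedButton:
    """Wraps an action so that repeat clicks are ignored while a slow
    operation triggered by an earlier click is still in flight."""

    def __init__(self, action):
        self.action = action
        self.busy = False

    def click(self):
        if self.busy:
            return False       # annoyance clicks land here and do nothing
        self.busy = True       # "gray out" until the work completes
        self.action()
        return True

    def done(self):
        """Called when the slow process reports completion."""
        self.busy = False
```

The same idea applies whether the feedback is a disabled widget, a spinner, or a modal: the key is that the second tap cannot re-trigger (or un-toggle) the operation.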
>wait for the signal to switch from off to on for say 10ms before you act on the transition
That is the worst way to debounce, because it adds 10ms latency for no good reason. The better way is to act immediately but ignore further inputs for 10ms. You might also want a little bit of analog filtering to improve EMI tolerance, but that shouldn't add more than 1ms latency.
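The leading-edge approach described here (act immediately, then ignore further edges for a hold-off window) can be sketched as follows. Timestamps are injected explicitly so the logic is easy to test; on a microcontroller this would be a tick counter:

```python
class LeadingEdgeDebounce:
    """Acts on the first edge with zero added latency, then suppresses
    any further edges that arrive within the hold-off window."""

    def __init__(self, holdoff_ms=10):
        self.holdoff_ms = holdoff_ms
        self.last_edge = None

    def edge(self, now_ms):
        """Returns True if this edge should be acted on."""
        if self.last_edge is not None and now_ms - self.last_edge < self.holdoff_ms:
            return False       # still inside the hold-off: treat as bounce
        self.last_edge = now_ms
        return True
```

Note the first edge is never delayed, which is exactly the latency advantage over wait-then-act debouncing.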
It depends on the application. If you wanted to debounce, for example, user autocomplete inputs that will be fed as a search term into your service, generally you would not want to start returning results when a user types the first letter “a” and continues typing, but instead wait for the user to stop typing for a fraction of a second.
If you have exceptional resources you may choose to not debounce at all, but for applications with latency behind it, it can make more sense to just wait for the user to signal they are done.
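That "wait until the user stops typing" pattern is a trailing-edge debounce: every keystroke restarts the deadline, and the query only fires once the input has been quiet for the full window. A deterministic sketch (my own illustration; times are passed in explicitly, where real code would use a timer):

```python
class IdleDebounce:
    """Holds the latest input and releases it only after the input
    stream has been quiet for quiet_ms."""

    def __init__(self, quiet_ms=300):
        self.quiet_ms = quiet_ms
        self.last_input = None
        self.pending = None

    def keystroke(self, text, now_ms):
        self.pending = text        # newer input replaces older
        self.last_input = now_ms   # restart the quiet-period deadline

    def poll(self, now_ms):
        """Returns the query to run, or None if still waiting."""
        if self.pending is None or now_ms - self.last_input < self.quiet_ms:
            return None
        query, self.pending = self.pending, None
        return query
```

Unlike the leading-edge variant, this one deliberately adds latency, which is the right trade-off when each action (a search request, say) is expensive and only the final input matters.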
Yeah, I probably should have put it your way. It gels better with the next paragraph about ignoring user input for some amount of time after the initial signal.
> state of close to change, they can bounce back and forth between on and off.
I believe the general concept is called hysteresis. Many systems require some prior state knowledge before acting, to avoid cycling near the setpoint. As you said, the simplest solution is often a deadband around the setpoint, so that acting never immediately produces a state that requires acting again.
> I believe the general concept is called hysteresis
Hysteresis is what you add to the sensor in order to debounce. The bouncing effect itself is not called hysteresis, and does not require the system to have any memory - you get it as soon as there's noise in the measurement, which there always is.
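Hysteresis as applied here is usually a deadband: two thresholds instead of one, with the output holding its previous state in between, so measurement noise near the setpoint can't make it chatter. A thermostat-style sketch (my own illustration):

```python
class HysteresisSwitch:
    """Turns on below (setpoint - band), off above (setpoint + band),
    and holds the previous state inside the deadband."""

    def __init__(self, setpoint, band):
        self.low = setpoint - band
        self.high = setpoint + band
        self.on = False

    def update(self, value):
        if value < self.low:
            self.on = True     # e.g. heater on when clearly too cold
        elif value > self.high:
            self.on = False    # off when clearly warm enough
        # inside the deadband: keep the previous state
        return self.on
```

A noisy reading oscillating a fraction of a degree around the setpoint now produces zero state changes, because it never escapes the deadband.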
> When dealing with users, and slow processes (that you can't make faster for some reason)
The reason is that these systems all require realtime design techniques, and realtime design techniques are rarely taught in school and the overwhelming majority of software engineers don't know them, and the vast majority of software engineering managers are unaware that realtime design techniques even exist.
All computers that manage user interfaces today -- all of them -- are by many orders of magnitude fast enough to provide near-instantaneous responses to user inputs if programmed with realtime techniques.
As an electrical engineer who has written more than 5 different debouncing implementations: you are of course correct about debouncing, yet most buttons wouldn't need more than 25ms of debouncing, which is an order of magnitude less.
Debouncing user input in UIs is indeed something you should do, especially if it would be bad for the user input to end up triggering something twice.
I work in industrial automation, and the amount of polling and high-latency touchscreen issues I encounter is embarrassing.
The default latency of a Rockwell Panelview Plus screen with their FactoryTalk software (the dominant HMI system in North America for custom automation) is 1000 ms. Fixing that solves 80% of the complaints I encounter.
(Unfortunately, it's a dropdown selection list with a limited set of values of 0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10, 60, and a whopping 120 seconds).
> Maximum Tag Update Rate: Specify the maximum rate at which data servers will send data to the tags used in the display, including tags used in expressions and embedded variables. The default update rate is 1.0 second`.[sic] If the update rate is changed, the new rate will not take effect until the display is closed and re-opened.
And it's worse than that - often, response time is on the order of 3 seconds, as described here:
> Well, if you press the button just after the update occurs, you’ll have nearly a whole second go by until it’s read. And once it’s read and sent to the PLC, you’ll then have close to another second until the screen updates again to indicate its on.
I set most of my displays to 0.25 seconds, or 0.1 seconds if I can. 50ms is not achievable because Rockwell's EtherNet/IP fieldbus protocol is terribly inefficient, and the update rate is a polled refresh of every tag on the screen; you can't have fast buttons and slow production-history string arrays.
I have a Hario coffee scale that is absolutely maddening. It works just fine, but gives absolutely no feedback (no flash, no beep, nothing) when you press the 'tare' button, and it also waits about a second before updating its display. The button itself doesn't have any tactile feedback either - one of those membrane-type buttons that seems to have a pressure sensor, but no 'click'.
> permits up to 250ms latency for touch screen response. That is ridiculously high. It practically guarantees people will think it's not working and tap again
You're saying people evaluate and react to checkbox feedback faster than 4 times/sec? I'm not a UX person, so this claim seems laughable. I might believe twice that as beyond reasonable, but there is no way the average person waits less than a quarter of a second before assuming their input failed and duplicating the effort. One can hardly even observe and process that quickly, even when paying attention.
Imagine you have a system that ignores taps a substantial percentage of the time (either because it required more precise finger placement, or because that's life). Also most successful taps complete quickly, with only some having 250ms latency. Combine these two and most of time when nothing has happened after 250ms, it's because the tap failed to register and a new tap attempt is needed.
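The conditional-probability argument above can be made concrete with a quick sketch. All of the numbers here are illustrative assumptions, not measurements:

```python
# Hypothetical numbers: suppose 10% of taps silently fail to register,
# and of the taps that do register, only 20% take the full 250 ms to
# show feedback (the rest show feedback well before that).
p_fail = 0.10           # tap never registered
p_slow_given_ok = 0.20  # registered, but feedback arrives at ~250 ms

# Probability of seeing "no feedback yet at 250 ms" at all:
p_no_feedback = p_fail + (1 - p_fail) * p_slow_given_ok

# Probability the tap genuinely failed, given no feedback at 250 ms:
p_fail_given_no_feedback = p_fail / p_no_feedback
print(round(p_fail_given_no_feedback, 2))  # 0.36
```

Even with these mild assumptions, over a third of "nothing happened yet" moments really are failed taps, so re-tapping is a rational habit.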
There's been plenty of research for decades showing that 250ms response time in UX is a noticeable slowdown.
I had a similar comment so I'll just add on here - jitter definitely is a huge issue. If an interface is _consistently_ slow, you know to slow down, space out actions, and wait for visual confirmations; if an interface is generally responsive, though, you can get into a "rhythm" of actions, which can lead to chaos and confusion as you try to figure out both the new state of the system and what actions you need to take to get back on track.
This is exactly what triggered the Therac-25 radiation therapy defect: the UI was very slow to update, but experienced users knew what order to hit the buttons to go through the screens quickly.
Except that there was a bug where, if you went too quickly, some of the software safety interlocks didn't work right, and they had removed the hardware safeties in the upgrade from the Therac-20.
Your actions are pipelined. The waiting happens in parallel with doing the next action. The 250ms delay causes a timeout exception and flushes the pipeline.
As someone with long industry experience, I share your disbelief. I'd need to see the Fitts test results for a 250ms max delay (<150ms median?) on button state, as long press is also a thing. On touch/trackpads the typical double-tap rate is 4-5Hz and the maximum tap rate is about 10-15Hz, achievable by relatively few. It's notable that you're dealing with a touchscreen here and not a mouse (people are faster with the latter).
Most premium Android phones in 2016 had 120-130ms motion-to-photon response times (low to midrange probably hit 250), while iPhones were ~80ms. Haptics add another ~30ms of delay. There have been massive improvements since 2016, as it has become a competitive metric, but even with >120Hz screens, sub-60ms max true latency is uncommon.
As the MS video shows (or at least as my long-term memory of it does), faster refresh and lower latency greatly improve smoothness and are useful for moving objects, especially scrolling (though tearing is a real issue). The problem is much more pronounced in VR/MR, as eye tracking of diagonal motion is VERY discriminating. That's why they shoot for >240Hz refresh rates. Over 1kHz there doesn't seem to be a lot of improvement (2ms latency) due to a combination of cone/rod/neural filtering.
If you want a very generic worst case latency calculation based on the touch acquisition rate and display refresh rate assuming no position calculation delay it is:
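A sketch of one common form of that calculation (this is my assumption of the intended formula, not a quote from the parent): a touch landing just after a scan waits up to one full acquisition period, then up to two refresh periods before the updated frame reaches the screen.

```python
def worst_case_latency_ms(touch_hz, refresh_hz, frames_to_display=2):
    """Very generic worst-case latency: one full touch acquisition
    period plus the frames needed to display the result. Ignores
    position-calculation delay, as stated above."""
    return 1000.0 / touch_hz + frames_to_display * (1000.0 / refresh_hz)

# e.g. a 120 Hz touch scan driving a 60 Hz display:
print(round(worst_case_latency_ms(120, 60), 1))  # 41.7
```

With those assumed rates the budget is already ~42ms before any rendering or filtering delay is added, which is why real devices land well above it.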
"It practically guarantees people will think it's not working and tap again"
Please provide a link to your data and analysis.
Seriously, I hope you don't work for a major touchscreen supplier. A good UI can deal with relatively high latency input, a bad UI can make any latency unusable. Finger motion before/after contact and rapid high force contact bounces are common problems. Many high performance interfaces "gracefully" degrade report latency to similar levels under high interference conditions.
There is something called a FAR/FRR (False Accept Ratio)/(False Reject Ratio) where parameters on a UI system are tuned to optimize accurate interpretation of user intent (or user identification) for a given environment and action. Limiting response rate (eg debounce) to 2-3x the report rate is a common parameter change. There are always outliers in both user intent and experience, but a typical setting of <1% false reject and 1-100ppm false accept rate is achievable and usually acceptable.
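The debounce idea mentioned above can be sketched minimally. The timings here are illustrative, not taken from any particular controller:

```python
class Debouncer:
    """Reject touch reports that arrive sooner than `min_interval_s`
    after the last accepted one (e.g. 2-3x the report period), which
    filters rapid contact bounces at the cost of some false rejects."""

    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self.last_accepted = None

    def accept(self, t):
        if (self.last_accepted is not None
                and t - self.last_accepted < self.min_interval_s):
            return False  # likely a contact bounce: reject
        self.last_accepted = t
        return True

# A 10 ms report period with a ~25 ms (2.5x) debounce window:
d = Debouncer(0.025)
print([d.accept(t) for t in [0.000, 0.010, 0.030, 0.100]])
# [True, False, True, True]
```

Tuning `min_interval_s` is exactly the FAR/FRR trade described above: widen the window and you reject more bounces but also more genuine fast double-taps.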
I'd also note that under video review people are notoriously bad at actually knowing how they interact with a touchscreen. Interestingly, fast response rates (and tactile response) reduce user complaints about proper operation even when UI errors remain constant. The users often assume they made the error rather than the machine. There are also interesting cognitive load effects.
The user is either a web developer or has some neurological disorder that dulls his ability to react. There's no other explanation for not knowing how long 250 ms is.
> When a physical control is replaced by a digital interface, it is the job of a designer to translate the functions of the controls into pixels. If designed well the UI is intuitive. When designed poorly, the interface is difficult to understand and causes users to make mistakes.
Ugh. I don't want to quote all 25 of the things I disagree with about this article, so here's the first one that came up.
A well-designed UI is not necessarily intuitive. Think about vi, or the command line, or Blender, or Figma, or pretty much any UI designed for experts who spend every day using the system. These UIs trade intuitiveness and simplicity for power and speed. They do so knowingly.
It's not actually desirable to make a UI intuitive—which is to say, to make it behave the way a naive user would expect the first time they use it. Not if you can instead make it more powerful, and train the users how to use it instead.
An intuitive submarine control might have buttons to go up and down, left and right, and turn, and then a big red button that says "launch torpedo", like in video games. But I bet that's a bad UI for a real submarine. Instead you want it to have a lot of information on the screen, and hundreds of different functions it can perform if only you've gone through an intense training program to learn how to operate it, which is what the Navy does.
I don't think that being difficult to understand and prone to making mistakes is the opposite of intuitive, either. I think those are orthogonal.
None of this is to excuse the design of that system, which I've never used, and can't defend or attack. It's just I know that this is a hard problem that you can't get around by saying "just design it better".
I don't agree, from the perspective of someone who has driven ships, stood console watches on complex combat direction systems, and regularly flies flight simulators for fun.
The UI for driving a vehicle needs to be simple and intuitive. Full Stop.
There is absolutely a place for complex displays. There were 14 people on the bridge the day the McCain had its collision. Any number of them can monitor complex screens with tons of information.
The young Sailor driving the ship needs to know where his rudder sits and what RPM his engines are turning. In fact, there are two of them, and the traditional method is that one drives and monitors the rudder, and one runs the engines. They don't need to worry about complex situations. The helm needs to know that the conning officer ordered 5 degrees right rudder, that they put in 5 degrees right rudder with the wheel in front of them, and that the screen or indicator in front of them shows the rudder is right 5 degrees. Similarly, the Lee Helm needs to know that the conning officer ordered turns for all ahead full, that they shifted the great big throttle handles to the all ahead full position, and that both engines indicate all ahead full.
They don't need to know about engine temperatures, synchro and servo status, set and drift, winds, etc. They need to drive the damn ship in the manner they are directed to drive the ship. There were 12 other people on that bridge who could handle all of those things that matter to them. Helm and Lee Helm need to drive the ship.
It sounds like you are knowledgeable about this stuff, so if you don’t mind me asking:
Why is it a good idea to use a full human to turn a wheel? The way I understand what you are saying, the conning officer shouts “5 degrees right rudder” and some dude listens to that and puts the rudder 5 degrees right. I understand that this makes sense if turning the wheel takes physical force: there is someone who provides the smarts about which way to turn it, and someone who provides the raw power to do it. But if it requires no exertion, didn’t we just make a shitty voice interface for the conning officer? With all the associated manning requirements, potential for mishearing orders, and potential for confusion? Wouldn’t it make more sense for the conning officer to tap in their desired rudder on the interface themselves?
It's not just about turning the wheel, though, is it? I would assume that much like when driving a car, it's a longer process of being a human PID controller and making small adjustments to angular acceleration based on feedback from the system. It might be useful to remove this mental load from the decision-maker.
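For the curious, the "human PID controller" analogy above in toy form. The gains and time step are arbitrary, and real steering control is far more involved than this sketch:

```python
def pid_step(error, state, kp=0.8, ki=0.1, kd=0.3, dt=1.0):
    """One tick of a textbook PID loop: compare the ordered value to
    the actual one and produce a correction -- roughly the feedback
    task the helmsman performs continuously."""
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Ordered-minus-actual heading error, in degrees, over successive ticks:
state = {"integral": 0.0, "prev_error": 0.0}
corrections = [pid_step(err, state) for err in [5.0, 3.0, 1.5, 0.5]]
```

The point of the analogy: this loop runs constantly, so offloading it to a dedicated person (or a servo) frees the decision-maker's attention for the actual decisions.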
When my family drives places we have not been before, the driver focuses on the road and the passenger seat is navigating, planning stops, etc. Sure, the driver could read them off of a GPS unit and improvise stops, but it is safer and (in our experience) better to have a separate person focusing on thinking and planning.
But also it provides a secondary check on the sensibility of the order. If my wife says "take the first right in this roundabout" and the first right does not seem like the way to go, I will do another lap and ask her to double-check. I can do that because I've been surveying the alternatives well ahead of time, which I could do only because I was not focusing on navigating.
>But also it provides a secondary check on the sensibility of the order.
For much the same reason, whenever there is an outage, I like to ensure my teams are running at least two people on the problem. When things are broken and you're trying to un-break them, having one person whose entire concentration is on performing the physical acts required to un-break things, and one person whose concentration is on monitoring things, looking up data, or reading through the SOP, helps ensure things are done safely, gives you secondary checks on actions being taken, and also slows things down a little in the "slow is smooth, smooth is fast" sense.
I am totally with you here. It's amazing to me how as cars have gotten easier to drive with technology, we've continued to add distractions to the driver as "features".
I don't even listen to music when I'm driving. I mentally make note of my path before I drive and might have the GPS up just to be able to glance at my arrival time or traffic info here and there, but largely I don't even touch the fancy touch screen display while in motion.
Part of it is everybody knows exactly what's happening, but one dude has one job, and that is to put the rudder exactly where it's supposed to be and hold it there. It isn't actually like there's something that holds the rudder in position -- it is manual, like driving a car, and the rudder will move if not held in place. Also, the order might not be rudder left 5, it might be steer course xxx. Then they use their training to hold the ship on that course.
Another thing not always obvious is the conning officer is in training to be the Officer of the Deck. He/She is learning the exact characteristics of the ship while qualifying for the next higher position.
Yes, you are absolutely correct that this can be fully automated -- and it is in merchant ships. Warships are a bit different, you want instant responses, you want people who can think and issue the right orders and people who can evaluate those orders. SN Timmy isn't stupid -- if conning officer is in the middle of an unrep (alongside another ship) and says right instead of left, SN Timmy will repeat back the order, and ten other people heard that order and there's room to stop the disaster before it happens.
You've absolutely got it where merchant shipping is concerned. Warships are different.
> It isn't actually like there's something that holds the rudder in position -- it is manual, like driving a car, and the rudder will move if not held in place.
But i assume you don’t have to keep your finger on the touch screen to keep the rudder from moving?
As I recall, you still input rudder orders with a wheel on these. Maybe you can enter a rudder setting? The system actually can fully automatically drive the ship. You can just enter a navigation plan and it will do the whole thing. The Navy doesn't use that capability though.
So do airplane pilots and you don’t see them shouting yoke orders to a sub-pilot.
This is a practice which I think made sense with older tech, where one had to put some thinking into the execution of the steering. (How much do I twist the control to achieve 5 degrees? Oops, overshot a bit, twisting back. Now hold it steady! That kind of thing.) But now that you just peck at a touch screen, the actual task has been taken over from the sailor by a servo motor. Because the servo doesn’t have ears to hear the command, we keep a whole human employed to run the “hear command, shout, peck, shout” loop. That, and institutional inertia. If we were designing navy ships for the first time today, we would surely not do it like this.
I think you will definitely hear the pilot and copilot communicating, and they split up tasking between them -- one flies and one monitors equipment and makes system entries. They're also way way more qualified and educated than SN Timmy who got a few hours of training on how to do his job.
That loop is pretty valuable on the bridge, because it's not just one person driving in a bubble. Having those orders vocalized so everyone hears them and everyone knows what's happening is pretty valuable. UNREP is a very difficult and stressful kind of evolution, and so is driving during flight operations. Things need to be done perfectly and there are a LOT of variables.
We design new ships all the time, and they still generally stick to these ways. CRM isn't a thing only in aircraft.
Most complex and/or commercial aircraft do share the workload between a pilot-flying and a pilot-not-flying, in addition to the captain & FO hierarchy you are probably aware of.
The overall design of "controls" for something as complicated as a ship (or submarine) is of course going to be complex, and would not be expected to cater to the intuitions of a naive user.
But if you're talking about the basic ship handling controls--rudder and throttles--which were the controls involved in this case, those should be as simple and intuitive as possible, and should be designed in a way that makes it obvious what the rudder and engines are doing, obvious to the point that even a naive user would have a good shot at being able to figure it out. That's because those controls are not only the most basic ones, but the most important ones for ship safety. You don't want them to be complicated or hard to learn.
This incident was a good example of what happens when this basic principle is not followed. Of course that wasn't the only factor involved, crew training and proficiency comes into play too, but since even the Captain and the Officer of the Deck couldn't tell what the rudder and throttles were doing, there was obviously a huge problem with the way those controls were designed.
Also, given the possible context of wartime, the most basic "make the ship go" controls absolutely need to be understandable and usable by a concussed one-armed half-blind ensign who just got on board yesterday... because, sometimes, that's what happens.
It's an important distinction to make tbh. However, "intuitive" doesn't necessarily mean "intuitive to a lay person", but it should be intuitive to someone in that domain. The "gang" checkbox is not intuitive to me, but to a sailor it probably has a lot of meaning, just like "commit no-verify and push force" is intuitive to me, but gobbledygook for a layperson.
But yes, a UI should be designed for the end-user, and power-user UI/UX design is a rare skill.
The idea that the design should be dumbed down so that, what, a layperson who has never encountered the system can take over and drive the ship accurately? It's an appealing one: it makes the case that, all else being equal, simplicity should win out.
I would argue that there's an in-between: good design is not orthogonal to intuitive design. And I would also argue that the human factor component shouldn't be missed here, both in terms of sleep deprivation, but also in terms of being able to easily and quickly identify error states or misconfigurations at a glance, without too much cognitive overload.
Cognitive overload and complex interfaces are solved with training and repetition: we do this regularly in aviation, pilotage, and all sorts of complex interfaces. But I guess it's worth asking aloud, if you can reduce some of the cognitive overload, aren't you effectively defending in depth?
Sure, every complex system could be distilled to a memorized green-screen incantation, and peak performance can be reached by fusing the human brain to a well-mapped, ultra-dense information-fest. But the reality of the situation is that, when training and tiredness failed these sailors, the interface piled on further, _unnecessarily_. It could have been designed to be a bit more intuitive, and that might very well have saved the lives of 10 people and $223 million of taxpayer money.
"This is a hard problem that you can't get around by just saying 'just design it better'" sure downplays the amount of impact intuitive, clear design can have on, say, preventing the swiss-cheese holes from aligning in this particular case.
> if you can reduce some of the cognitive overload, aren't you effectively defending in depth?
Absolutely. But in practice, cognitive overload rears its head in many ways.
When most people think of an intuitive system, they think of a simple system as measured in the density of information on screen ("just show me what I need to know, nothing else") and the paucity of options you have to select from (Hick's law and so forth). But if you have to perform complex activities with such a system, the seeming simplicity results in actual complexity, since you have to do things like (for example) diving into a dozen nested layers of menus and screens to reach an option that was hidden there in order to (try to) reduce cognitive burden.
A different, equally valid measurement of simplicity (and cognitive burden) is how few transitions or state changes you have to suffer through to get to the state you want to be in. That is to say: how many steps you have to take to do what you want. And since there's a chance to make an error at each step, it's not even just a question of saving clicks, it can actually be more error prone.
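The per-step error point compounds quickly; a back-of-envelope sketch (the 2% per-step slip rate is an assumption for illustration):

```python
# If each step has an independent 2% chance of a slip, the chance of
# completing an n-step task cleanly is (1 - p) ** n.
p_slip = 0.02  # assumed per-step error rate
for n in [1, 5, 12]:
    print(n, round((1 - p_slip) ** n, 3))
# n=1 -> 0.98, n=5 -> 0.904, n=12 -> 0.785
```

So a "simple" screen that hides an option twelve steps deep can turn a 2% slip rate into roughly one botched attempt in five.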
I say these measurements are equally valid, but not that either of them are sufficiently robust ways of thinking about good design. I'm also not trying to downplay the impact of clear, intuitive design. The point I'm trying to make is that there is no formula for making designs clear and intuitive; it's actually quite complicated. In fact, in most cases, you can't even do it, you can only find a tradeoff between imperfect solutions.
Anecdotally, I have designed dense, complex systems for quantum computing experimentalists—people much smarter than me—and had them tell me "make it simpler!". I've also tried to make simple systems for biomedical researchers, and had them tell me, in so many words, to make the UI denser and more complex. There's no correct level of simplicity you can arrive at a priori, it's different for every case, and you have to reach the right balance through testing. If you don't do enough testing, your users become your testers. The consequences are not usually this grave.
It seems to me that an unspoken assumption in this thread is that "design" is purely theoretical work, performed by a designer.
If the behavior of a sleep-deprived sailor is an unknown variable, then the obvious solution is to get a sleep-deprived sailor involved in the design process. This is not particularly complex, and while _some_ money is involved, preventing this accident would basically give you a budget of $223 million, human lives not even taken into account.
The checkbox isn’t the “smoking gun” but rather part of a broader range of system issues in design and usability that likely contributed to the USS John McCain accident.
Having developed these types of systems, though: the UX goes through heavy HMI reviews by real users and engineers. Likely whoever built this ship held those reviews.
I would like to understand if this checkbox and its related design ever came up in those reviews and whether engineers were directed by their Navy customer to change it.
It’s possible it was just a small footnote on a PowerPoint slide and they moved on.
I think this is the actual wtf, along with the transfer ui:
> The transfer of thrust comes with an additional level of complexity because the propellors must be transferred one at a time. Half way through a transfer the boat is in a situation where one propellor is controlled by one station and the other is controlled by someone else. At this moment the checkbox labeled “Gang” is automatically unchecked.
Absolutely no surprise that in a moment of panic, the thrust controls end up spread across multiple stations. Why is this even a feature? Not a sailor so just armchair commenting, but in what situation would that be beneficial?
From what I was able to infer this is the use case of “Splitting Thrust Control During Station Transfer”
(btw I made that up just based on my reading of it. I don’t design Navy ship subsystems for a living - I work on specialized subsystems like these)
Temporarily assigning control of each propeller to a different station would require the automatic deselection of a synchronization (or "Gang") function during the transfer.
Why weren’t they properly trained on this? How did they get into this situation unknowingly?
I would assume a manual provided by the manufacturer would have trained the Sailors in how to properly perform this function.
This sounds like a more exceptional than routine use case.
I've experience of steering a twin-prop 50-tonne SAR vessel, so quite a bit smaller than this, but the same principles apply. We also have strict procedures for transferring command of the helm between different stations. There is never a situation where you would want the throttles for each prop split between command stations. There is almost never a situation where you would want the throttle controls at a different station than the helm (the person steering). The throttles and the rudder interact a lot, and in any close-quarters manoeuvring situation you want helm and throttles all under the control of one person.
I could definitely see value in war-time, if perhaps one was degraded or damaged, and required special attention that might justify having a person focused on it and communicating with the helm.
Right. This is also possible for us, but the engineer needs to go to the engine room and bypass the throttle system and take manual control of the engine management computers to give throttle inputs to each engine. If these computers fail but somehow the engines are still running I know we can mechanically take each engine in and out of forward and reverse gear. In either of these cases throttle orders would go through the intercom or would be manually relayed by shouting along a human chain if the intercom is out. We can also take manual control of the rudder if the steering hydraulics fail, by rigging two sets of block and tackle to an emergency steering tiller. We practice these fairly regularly in drills and I can say that close quarters manoeuvring with manual control of the rudder is not something you would want to do with an inexperienced crew for real!
That's a good argument for independent throttle control being possible, but it's not a good argument for that being the default. When working with safety critical systems (which throttle control absolutely is), it's important to have sensible, easy to understand defaults. The more steps you have to take to do the "correct" thing, the more likely it is that someone screws up one of those steps.
Seems to me a technically advanced enough system like this should probably combine the independent throttles with the steering to some extent? I'm talking out of my rear possibly, but like if a severe enough turn radius was used, maybe then automatically adjust the throttles to make it possible in combination with the rudder?
I think designing a system like that that gave the control necessary and worked predictably would be difficult. I'm not a navy person (the closest experience I have is driving fishing boats on a lake), but I do know that steering a big boat is complicated.
Steering via the rudder, via controlling the different throttles, or some combination has different characteristics. And which one is best would depend on the ocean and weather conditions, overall speed of the ship, as well as many other factors I don't know about.
A computer is unlikely to be able to properly account for all of this. And if it did, it would affect how the ship controls, something that you really don't want to do on a large ship.
Combining multiple controls into one computer managed control can be a good design. But I think it's the wrong design for large ships.
The article does address this. The system did have a computer-assisted driving mode, but it was always switched off in favor of manual driving mode, as the sailors either didn't understand or didn't trust the automatic mode.
This design dates to the 90s at least. I went to school on this system in 1997 and it wasn't new then, it was new to our class of ships but had been in service for a number of years prior.
If it went through an HMI review it wasn't by anyone qualified to do it.
The traditional engine order telegraph https://en.wikipedia.org/wiki/Engine_order_telegraph is a beautiful piece of mechanical design. A more modern version would be aircraft-style throttle handles, which are also placed close together so that you naturally move two or four of them together with one hand movement. Reducing that to a checkbox just seems .. inadequate.
As Wikipedia states, nuclear vessels still have EOTs, though they’ve evolved.
On Virginia-class subs, the EOT has been made into a linear button array. It’s arranged in logical order (All Ahead Flank at top, descending to All Stop, then down to All Back Emergency; there are small gaps in between Ahead, Stop, and Back groups), and the button flashes when an order is received. There’s also an audible alert, of course.
The throttles themselves are small hand wheels, with the astern wheel being smaller and offset from the ahead wheel. These send redundant signals to the main engine controllers (which are also redundant), which ultimately control the main engines. In the event of an emergency, there is a manual override station between the main engines – this is outside of maneuvering, and would require a different watchstander to man it, since the propulsion plant operator is also running the reactor.
To your main point though, yes, a checkbox is inadequate. Physical controls are generally superior, which is why car manufacturers who moved away from them are starting to move back.
This (seemingly elegant) Virginia-class implementation feels like an inheritance of the Rickover and SUBSAFE philosophy. Not sure how the surface Navy would accept anything less.
Similar controls are used on modern ships. They usually though have an option to control both engines with one throttle to avoid having to manually synchronise the speed of both. Not sure if they have a reversion to separate control if someone grabs the currently redundant lever.
"Conspicuously absent from their recommendations was any discussion about user interface design. Of their seven safety recommendations, six involve training."
Yes. That's because if your ship is going left and you can't figure out why, when there's an interface clearly showing why (i.e. one engine input is higher than the other), it is a training and/or procedure issue.
The author then goes on to state that during steering control transfer, the checkbox is automatically unchecked. So the engine input UI is 100% not at fault here.
> 100% not at fault
When something significant goes wrong, it’s almost never due to a single cause; multiple factors and confounding elements are usually at play. We build better-performing systems when we examine various contributing factors and proximal causes.
The OP's argument is that the interface does not "clearly" indicate that one engine input is higher than the other. After reading the article and reviewing the interfaces, I have to say: the OP’s argument is quite compelling. While training is undoubtedly important and should be included in the safety recommendations, the OP makes a strong case that design improvements should also be considered as part of the safety recommendations.
Finding something or someone to blame -- whether it is an operator, the UI, training, whatever -- typically halts all insight-gaining processes, and is therefore not recommended. Especially if that something is "100% to blame".
The box is only unchecked because of the curious decision to allow transferring of a single prop / engine which I'm not convinced has any real value.
One thing that isn't stated is what the state of the redundant thrust slider is when the inputs are ganged together. In the physical implementations I've seen the redundant lever is usually moved to neutral and it's not inconceivable that this was carried into the digital UI. In that case failing to spot the fact that there was a differential thrust involved because of the lack of a checkbox might be much more understandable.
I have no experience in ship-handling, but my naive question for those who do is: what is the rationale for being able to independently transfer control of different aspects of ‘steering’ (prop1, prop2, rudder) to different stations? The coordination required between two or more stations inherently makes the task (control of heading and thrust) more difficult than if a single station were coordinating the process inside one brain. Is there some benefit I am not seeing that justifies the increased complexity?
Battle Damage or an Engineering Casualty (Engine Troubles). Arleigh Burke speed is controlled by two factors: the RPM of the propellers AND the pitch of the propeller blades. https://en.wikipedia.org/wiki/Variable-pitch_propeller_(mari... Obviously this affects steering as well if the system isn't functioning properly.
In normal situations, the computer mostly handles this and a single station (Helm) has control over all three. If the computer fails, the pitch control fails, or a propeller engine fails, this will need to be controlled manually by the crew, and the workload is too much for a single person. Also, in certain situations, like underway replenishment, all three control stations may be manned because, if there is a sudden failure, you need to be able to respond quickly.
Also, the problem is the touch screen, period. This is a case where putting an iPad in place of levers/switches/buttons is NOT an upgrade. See your car's climate control as a good example everyone can relate to.
I'm not a mariner, but from what I've seen on boats and motoryachts, the controls are typically transferred with the engines in neutral. This step ensures that the person who takes control positively knows the state of the engines, and avoids having to do some kind of synchronisation with the current state. Perhaps this is a problem on a destroyer, or they imagined the controls being routinely moved around the bridge. On most vessels it's in the same place for the whole passage, and for docking or departure you choose which station to use.
Idling the engines for a minute while you transfer control isn't an imposition, as anyone prudent would be doing propulsion and steering tests before entering a confined area at that point anyway.
I can only think they imagined the controls being passed about on a regular basis, and because they didn't need to synchronise a physical lever position with the current state, they skipped that step.
Why transferring one engine at a time made sense I have no idea.
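The neutral-before-transfer interlock described above can be sketched as a tiny state check. This is a toy model with made-up names, not the actual IBNS logic:

```python
class ThrottleStation:
    """Toy model of a control station. Transfer is refused unless both
    throttles are in neutral, so the receiving station always starts
    from a known state. All names here are illustrative only."""
    def __init__(self, name):
        self.name = name
        self.port = 0.0   # -1.0 full astern .. +1.0 full ahead
        self.stbd = 0.0

    def in_neutral(self):
        return self.port == 0.0 and self.stbd == 0.0

def transfer_control(active, requesting):
    """Hand control from `active` to `requesting`, but only from neutral."""
    if not active.in_neutral():
        raise RuntimeError("bring both engines to neutral before transfer")
    return requesting  # the new active station starts at a known (zero) state

helm = ThrottleStation("helm")
aft = ThrottleStation("aft steering")
helm.port = 0.4
try:
    transfer_control(helm, aft)   # refused: still making way on the port screw
except RuntimeError as e:
    print(e)
helm.port = 0.0
active = transfer_control(helm, aft)  # now allowed; no hidden throttle state
```

The point of the interlock is exactly what the comment describes: nobody ever inherits a throttle setting they didn't see made.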
It seems to me that the checkbox, and even the UI, is the least of the issues. Even with the most intuitive manual controls, this problem still could have easily happened simply because the transfer of control from one station to another is fraught with potential misunderstanding of what state the different parts of the ship are in (rudder, engines, etc.) and who has control of each at any given time. Lack of sleep will exacerbate it tremendously. More training will only help so much; I'm guessing, they probably already train on this a lot.
My armchair assessment is that these ships should be controlled 99% by computer, which decides the best combination of thrust, rudder, etc. to move the ship where the navigator directs it (whether by joystick to indicate direction, or touch-screen to indicate where on the map you want the ship to go, as well as what speed), and the individual controls (engines, rudder, etc.) should only be overridden in the most dire of circumstances.
I'm sure there's much more to it than that, but the general idea remains the same: this is too complicated to be left to humans, whose time is better spent thinking about other parts of the mission.
Having rudder and propeller on different stations can be useful to organize work on the bridge, especially on a warship. The possibility of having the two propellers on different stations is imho insane.
The only reason that I can think of is a runaway "safety" requirement ~"if one prop control fails you still must be able to take control of the other prop on a different station". That would fit what the article says about them essentially running in manual backup mode all the time and not in the intended mode of operation.
I like this other article on the topic, here's what it says about splitting rudder from speed:
> Sanchez quickly noticed that his new helmsman seemed flustered by the difficulty of having to control the ship’s steering and speed at the same time. He decided to split the helm, giving Bordeaux control over the ship’s wheel. While Bordeaux remained at his station, Dontrius Mitchell, a second sailor on the bridge, was assigned to take control of the speed of the McCain at a neighboring station known as the lee helm.
> Sanchez’s order was unexpected — he had not discussed the possibility in meetings with the crew before entering the straits. Nor had the crew practiced the maneuver much. Bordeaux could only remember doing it once or twice before.
The traditional method of control splits the helm (rudder control) and lee helm (throttle control). Both are under the direct command of the conning officer, who is responsible for all orders to the helm and lee helm and for setting and maintaining the course ordered by the officer of the deck.
You need to be able to shift control of steering to aft steering in case of battle damage or steering failure on the bridge, and you need to be able to shift throttle control to the engine room for the same reason.
The concept of splitting throttle control of engine 1 and engine 2 between stations as a part of the normal transfer of control is absurd and I have no idea why that was even a thing. I never heard of it when I worked on this system.
I tried for a long time to use this keyboard effectively (https://en.wikipedia.org/wiki/FingerWorks), but I never really could. I like the touchscreen idea in principle: your user interface is the user interface you were trained on, Star Trek style.
I think light and sound engineers probably have the best setup. Everything's hooked up to a big board with sliders. You can physically move the sliders, and the position of each slider represents what's actually happening. But you can also set up a scene on an attached computer: click the button, or tap the touch screen, and all the sliders move to the predesignated state.
I don't know a damn thing about battleships or destroyers. But a big-ass console, with physical knobs and sliders that represent the current state, seems like a really good idea. Sure, you can adjust whatever you want from whatever glass station you want. But everyone can tell at a glance the state of the world. You can walk over and turn a knob, overriding everything else. And everybody can see you do it, and hopefully understand what that action means.
The closest I get to that sort of stuff is my car. The hazard lights are one big dedicated button. There may be a way to turn them on with the touchscreen; I don't know. Volume has a physical dial on the console. There are buttons on the steering wheel, but I find myself reaching for the dial when I really need to turn the volume down.
Physical controls are expensive. For stuff that's common and essential, it's probably a good idea to spend the money and have the control. I have strong feelings about this. There are little fiddly settings that you can tuck away under five layers of menus. But there's other stuff that you touch all the time, or that you need in an emergency, where IMHO you should be allowed to build up muscle memory.
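The lighting-board pattern described above (state lives in one authoritative place, and motorized sliders servo to match it) can be sketched roughly like this. Purely illustrative; real consoles do this over DMX or MIDI:

```python
class Fader:
    """One motorized slider; its physical position always mirrors state."""
    def __init__(self):
        self.position = 0.0

    def servo_to(self, value):
        self.position = value  # the motor drives the slider to match state

class Desk:
    """Toy model of a console: one authoritative scene, many faders."""
    def __init__(self, n):
        self.faders = [Fader() for _ in range(n)]

    def recall_scene(self, scene):
        # `scene` is a list of target levels. All sliders physically move,
        # so the board shows the true state at a glance, no matter which
        # screen or station made the change.
        for fader, level in zip(self.faders, scene):
            fader.servo_to(level)

desk = Desk(3)
desk.recall_scene([0.8, 0.2, 0.5])
print([f.position for f in desk.faders])  # [0.8, 0.2, 0.5]
```

The property the comment is after falls out of the model: digital input and physical display can never disagree, because the sliders are driven from the same state the software reads.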
To summarize, the checkbox in question was designed such that it's lit up when checked and not when it isn't. I'm surprised they did this because it's terrible design. You see this all over the place. It confuses the user who thinks there should be a check there and can't tell if it's checked or not checked merely by glancing at it.
I was happy to see this:
> The Navy recently announced they are abandoning touchscreens in their fleet in favor of physical controls.
The author defends touch controls and the decision being based on really old standards and one survey (apparently). I'm going to respectfully disagree. I'm no ship captain but I believe the propeller controls on a ship are traditionally like the levers on the right of this image [1]. This is from a multi-propeller cruise ship, I believe.
Now if you had those controls, anyone could tell at a glance that the propellers weren't synchronized.
Touchscreens allow for UI updates more easily. This leads to designers being lazy. "We can fix it in an update". It necessitates training on checkboxes.
I don't understand the difference. With a touch-screen, if the propellers aren't synchronized, you would see it immediately on the green bars: if you slide one and the other doesn't slide similarly, they aren't synchronized.
It is not only the ship's UI that lacks review in the post-mortem analysis, but also the process that led to it.
"Conspicuously absent from their recommendations was any discussion about user interface design."
_Why_ did the NTSB not review the user interface? What is their motivation, and the forces that influence their actions? Is the NTSB sufficiently staffed with experts on user interfaces? If not, why not?
> These specifications come from a document written in 1988.
Is the process to update Standard F1166 sufficient? (Probably not.) Why? Why is a document that is clearly outdated used to guide UI design in a newly designed ship? What are the incentives and forces that lead to this outdated standard being used, and no update fixing it?
This looks like an article written by a designer obsessed by how important design is, and how anyone could pilot a warship without training if designers made something perfect.
It's a good reminder for software people sometimes overemphasizing the importance of software in a given situation, and how the processes encompassing it interact with it and control its outcomes.
This seems to me like a case of "hindsight is 20/20". There were thousands of design decisions made in the ship's controls, and likely several of them can, in certain as-yet-untested scenarios, cause a disaster. Has he identified those as well? And isn't more training and experience also an effective way to handle more of these possible scenarios?
If making the primary propulsion some ad-hoc touchscreen control instead of a physical lever is only something you can get to with hindsight then you shouldn't even be anywhere near to a position where you can make that mistake.
The point is that this was likely only one of many poor design decisions that were and will keep being made. I get touch screens are universally bad for operating vehicles or any other machine that requires attention, muscle memory and quick response times, but, if another disaster happens tomorrow for some other design issue not related to touch screens, will he then write another post about how that was the main issue?
Because it's not the main issue. The crew were unaware of the physics of the controls. Would this not have happened with physical controls?
It seems it is not immediately obvious at any time how much steering is being caused by the propellers or the rudder. Having two physical levers positioned differently would not make it more obvious to someone already unaware of this. They would likely not look for the gang button even if it were a giant red button.
Now if the gang feature were designed (physically or digitally) to always snap back to locked unless some constant pressure is applied (like a spring), maybe this would be imprinted into every crew member.
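That spring-loaded gang idea can be modeled as a default that re-asserts itself the moment the operator lets go. A hypothetical sketch, not how the actual system works:

```python
class GangControl:
    """Toy model: throttles are ganged by default. Splitting them requires
    a continuous, explicit hold (like holding a spring-loaded switch), and
    the control snaps back to ganged the moment the hold is released."""
    def __init__(self):
        self.ganged = True
        self._held = False

    def hold_split(self):
        self._held = True
        self.ganged = False   # only split while actively held

    def release(self):
        self._held = False
        self.ganged = True    # the spring returns to the safe default

g = GangControl()
g.hold_split()   # operator deliberately splits the throttles
g.release()      # operator lets go: throttles re-gang automatically
```

The design choice being modeled is that the dangerous state (split throttles) costs constant effort to maintain, so it can't silently persist after a handover.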
> Having 2 physical levers positioned differently would not make it more obvious to someone already unaware of this.
Yes, it would, because it's so much easier to see. Remember that everybody on that bridge, which includes the Captain and the Officer of the Deck, was unaware of how the throttles were set. Two physical levers would have made it obvious to those people, and they could then have issued the right commands to fix it.
I think making essential controls touch screen based is stupid for a variety of reasons, but I don't think physical levers would've changed the situation.
Airplanes have crashed because pilots misinterpreted physical levers, switches, displays, and lights. There have been accidents where a (co)pilot was confused by the lack of power they themselves were physically applying with their hands.
You can add synchronised force-feedback levers to every station, but that doesn't matter if your crew is too sleep-deprived to understand what those levers mean anyway. Not understanding the UI was only a small part in a huge shit show of problems that plagued the ship, and it was hardly the first time this sort of thing happened on that particular ship either.
Training is good, but it's not like design (I mean in how it should be used, not looks) is something which should be ignored. If you have a bad design, you need an awful lot of training to compensate for that.
The incident report almost didn't mention design at all, so what are the chances of the problems ever getting addressed?
>The incident report almost didn't mention design at all, so what are the chances of the problems ever getting addressed?
The fact the Touch Screen shouldn't have been there? Almost zero. EDIT: Apparently the Navy is going to put all mechanical controls back. I was totally wrong.
My guess is a bunch of retired flag officers at some defense contractor pitched it and close to retirement flag officers approved it. Maybe in 10 years, they will quietly change it behind closed doors but more than likely, someone has great idea of using Vision Pro to control everything because Heads Up Display or some insane thing they can sell to Navy for big money.
The problem was that these guys didn't have any sleep. Lots of people act like they're up against Sicilians when death is on the line, as if slowly poisoning their folks will make them immune to poison. Well, just like giving your children small amounts of mercury every day doesn't turn them into mercury-immune kids, not letting people sleep doesn't produce sleep-immune people. It just transforms people into idiots. So they transformed all their people into idiots. And then the idiots crashed the boat.
If you did the same to me, I assure you I would be even stupider and my performance even more lacking.
I was on that ship but left before the accident, I'm pretty positive it was sleep deprivation. The schedule changed at sea frequently so you couldn't get used to what little sleep pattern you were getting. We also had surprise drills that can happen literally any time of day or night. Longest I stayed awake was 72 hours straight.
One thing that bums me out about this is the idea that figuring this kind of thing out is what "designers" do. As if there is some special skillset and art to design that a special group of people, designers, do, which we should take seriously.
No, this is just what non-idiots do. Any non-idiot would look at this UI and think: this is terrible.
Looking at the UI, I see consistently applied patterns in a pretty normalised layout. I don't know how to use any of the controls, but that's because I don't know how to steer a warship, not because the UI is bad. Had I known what "gang" means in this context, the problem would've been clear from the very first screenshot. Yes, the UI is ugly as sin, but it doesn't need to be pretty.
A barely trained 18 year old on months of sleep deprivation couldn't figure this out, but that's an impossible standard to design for. The entire ship was designed to use computer aided navigation anyway, but apparently that's being disabled by default.
If no, then that contradicts your argument. Non-designers obviously do get it wrong.
If yes, your argument has become a "no true scotsman" -- it no longer makes a statement about the capability of anyone, but simply defines "idiot" as people who get the UI wrong.
Besides that, hindsight is 20/20. Was the UI obviously terrible before the accident happened?
Yes they are idiots.
Well more likely they were cogs in a dysfunctional bureaucracy that is not allowed to do good work because it doesn't allow common sense to drive decision making. So the 'idiot' here is an organization, maybe, and not a person.
Yes, the UI was obviously terrible before the accident happened. My first thought when I saw the picture was, wow, that's terrible.
You seem to be one of those people who doesn't believe that things can have inherent quality, or that anyone can perceive that quality. It's a sort of nihilism about the idea of excellence. But people do this everyday. It's easy. It's a natural impulse that most humans have when you interact with anything: this is shitty, this isn't shitty. So bad design is when something is made that's shitty and doesn't have to be.
And if a large well-funded corporation makes something shitty, that's pathetic. But it happens all the time because organisations and leaders are incompetent.
> Yes they are idiots. Well more likely they were cogs in a dysfunctional bureaucracy that is not allowed to do good work because it doesn't allow common sense to drive decision making. So the 'idiot' here is an organization, maybe, and not a person.
As I said, that turns your argument around: They did not make a bad UI because they are idiots, but rather they are idiots because they made a bad UI. This gives you zero insight to prevent future accidents.
> Yes, the UI was obviously terrible before the accident happened. My first thought when I saw the picture was, wow, that's terrible.
You saw that picture after you already knew that an accident happened. The interesting question is whether your conclusion would have been the same if you saw it without knowing about the accident.
I really suggest that you have a look at the CAST handbook, because every single of your arguments gets taken apart there: http://sunnyday.mit.edu/CAST-Handbook.pdf (also posted in another sub-thread, but this whole HN discussion shows that still too few people know about it)
Jesus christ, do you think I'm so small-minded that I'm saying the UI is bad because I'm biased by knowing the accident already happened? And that I didn't realize I was doing it?
No I thought it was shit instantly because I've played video games my whole life and if I played a game that had that UI for steering I'd uninstall it.
You could at least assume other people are intelligent when you're talking to them.
One nice thing about designers is their tie-breaking ability for non-designers. Design, being something we all think we’re right about, is easy to lose lots of time going back and forth on with non-specialists, while it’s much faster to send to a pro.
But I generally agree with you, I think the world would be better if everyone was more thoughtful about this sort of thing.
While I agree with most of the assessment, I think that checkbox is fine. Removing the set of motorized haptic control levers and the motorized steering wheel from the bridge is the problem. Had the designers needed to integrate the feedback forces, they could have visualized them on the GUI-only stations too.
It's not that touchscreens didn't exist when F1166 was written. At that time it was widely understood that a touch on a touchscreen would be a click, and a drag would be a mouse-down + move + mouse-up sequence. The sliders seem to be designed to be draggable, with the two buttons for precise incremental adjustments.
What I find disturbing is that there doesn't seem to be a single The-Most-Important-Thing screen visible to everyone. I'd expect things like heading, steering, and engine power would be immediately available and readily visible, most important when a possible collision is identified.
And I hope people realise emergency manual control is for resolving emergencies rather than creating them.
The article says "If you don’t have the vocabulary to describe a problem it is unlikely that you will be able to fix it."
I fundamentally disagree with this. You don't need to know design jargon to identify bad design or to make good design. What you need is to understand what the user is likely to know and what you need to tell them. This is different than just providing controls, which is what most design by engineers looks like.
For example, I would characterize the overall design problems with the UX as:
1. Users need to understand at a glance how each control input is currently influencing the course of the ship. Separately the users need to understand at a glance what the resultant course and position of the ship will be given the current control inputs and external factors like wind and current.
2. Users need to understand at a glance which station is controlling the ship, with intuitive controls to offer to pass control to, or accepting control from, a different station.
So I can describe the problem without using any UI jargon. Similarly I can refer to these to indicate why the current UI is broken, again without UI jargon, simply by saying "it doesn't solve problem 2 because the state is unclear" or whatever.
Supposedly the control system has been fixed by adding physical levers, with retrofits to existing ships.[1] Unclear if they're all coupled together, so that moving a lever at one location moves all the other levers. That's a standard feature available for commercial ships.
For two stations, it can be done with mechanical cables. Beyond that, there are electrical systems.
This is a known cause of accidents with ship controls with multiple control stations. There was a New York City Seastreak ferry crash in 2013 due to that.
It seems that the operator is expected to sometimes make very small adjustments to the throttle levels, going by what look like clickable arrows. I wonder why they didn't locate the UI elements corresponding to the throttle indicators right up against each other, with a Vernier scale on them, to make misalignments quantifiable and more obvious.
Not that it would have helped in this case, but maybe the scale can be magnified and change colour when there is misalignment, which would have been possibly slightly more obvious? I don't know.
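For what it's worth, the misalignment cue suggested here is cheap to compute. A sketch with a made-up warning threshold, not anything from the actual system spec:

```python
def throttle_mismatch(port_rpm, stbd_rpm, warn_at=5.0):
    """Return the RPM differential between the two screws and whether it
    should be visually flagged (magnified scale, colour change, etc.).
    The 5 RPM threshold is an illustrative guess, not a real figure."""
    diff = port_rpm - stbd_rpm
    return diff, abs(diff) >= warn_at

diff, flag = throttle_mismatch(68.0, 10.0)
print(diff, flag)  # 58.0 True -- a large split, so highlight the scale
```

A display driven this way would at least make "the throttles disagree" a first-class, attention-grabbing state rather than something inferred from two separate sliders.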
The article mentions "ASTM International Standard F1166" and shows a few pictures and recommendations concerning checkboxes. I'm very much looking for this kind of thing for React-like interfaces, so that (1) we don't have to spend too much time thinking about this sort of detail, and (2) we respect some good design principles without having been trained in them.
Unfortunately, the standard in question is quite expensive and probably most of it will not be useful for web design. Is there an equivalent document/standard for websites?
Despite all other factors involved in the accident, it seems fundamentally hard to make this kind of control obvious with a touch screen. With a physical throttle control, like used in aviation [1], it's more obvious when you're pushing one side or both, as you have both vision and tactile senses to confirm position.
With a craft like that, there is no position hold mode? It's common in almost all autonomous or semi-autonomous systems. Basically, without input from the pilot, the craft will hold its current position. That is, let go of the input or sticks, and the thing will automatically hold its current position.
Crazy to hear that such an expensive boat is basically driven by rate only controllers.
Virginia-class has lots of screens, but they aren’t touchscreens. The control panels have physical buttons and switches which are clearly labeled, logically laid out, and are extremely tactile. There is no confirmation for anything, because it’s assumed that you received verbal confirmation of your intended action before performing it.
Conversely, they also have computers for some maintenance items, and those (intelligently) have confirmation screens everywhere, because it’s just bog-standard Windows UI. The software UI team really should talk to the control panel UI team.
It's hard to believe they did user testing and this design came out on top.
Take a person at the end of their shift, sit them in front of a 'steering a ship' game with this UI: "With the controls set this way, which way will the ship move?" "We showed you the UI for 1s; were the props ganged? What was the differential in engine output?"
Presumably the US Navy has the user testing sessions recorded and archived somewhere?
I'll be honest, I am kind of confused. Why is there sleep deprivation in the most powerful navy in the world? I thought it wasn't run with a "least hands for the job" type of corporate culture, as surely there is in essence an unlimited budget. Or is this naive? Does every organisation/organism always minimise expenditure as much as possible until something sort of breaks?
The Navy could look to aircraft for some design guidance. There are significant market competitive forces pushing for user centric design when it comes to the interfaces of modern aircraft. They also have to deal with making new things look and work kind of like older systems, and provide interfaces that simplify information.
When the throttles are ganged, there should be only one slider.
When they are not ganged, there should be a visual indication of the propeller speed differential. Perhaps a horizontal bar above the throttle sliders that grows left or right according to the consequent turning moment.
Ganging is a term usually associated with faders on light control desks or audio mixing consoles. A typical way to indicate ganging in digital audio interfaces is by coloring the knobs the same color (usually very distinct from the rest of the interface) or by displaying a horizontal line that literally connects them.
For me the astounding part here is the complete lack of visual feedback of the end result. Sure I have limited nautical experience, but from sailing I know ships are a lot about force vectors. I would have imagined a ship of that class has a clever and intuitive way of displaying the forces acting on it (and the forces applied by its various parts).
This could literally be three horizontal bars with a 0 in the center. If a bar is filled to the left, that input makes you go left; if it is filled to the right, that is where it makes you go; and if it is centered, you go straight.
One bar describes the resulting turning direction of both engines, one the turning direction as indicated by the rudder and one the sum of both.
Sure there is then some ambiguity about what that means when you go backwards etc, but hey, one look at that would have told you "engine makes us go left".
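A hedged sketch of what could drive those three bars. The gains and sign conventions are made up (positive = turns the ship right; a faster port screw pushes the bow to starboard), so treat this as an illustration of the idea, not real hydrodynamics:

```python
def turning_bars(port_thrust, stbd_thrust, rudder_deg, rudder_gain=0.02):
    """Compute the three indicator bars, each in [-1, 1]:
    negative = turning moment to the left, positive = to the right.
    `rudder_gain` is an invented scaling constant for illustration."""
    clamp = lambda x: max(-1.0, min(1.0, x))
    engine = (port_thrust - stbd_thrust) / 2.0   # differential-thrust moment
    rudder = clamp(rudder_deg * rudder_gain)     # rudder moment
    total = clamp(engine + rudder)               # net turning tendency
    return engine, rudder, total

# Port screw ahead 0.8, starboard near idle, rudder amidships:
bars = turning_bars(0.8, 0.0, 0.0)
print(bars)  # (0.4, 0.0, 0.4) -- the engine bar alone says "we are turning right"
```

Even this crude a summary would have answered the question the McCain's bridge crew couldn't: whether the turn was coming from the rudder or the screws.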
Sounds like if it were two throttle handles with a "lock" lever below them they could fail exactly the same. It's a training/attention/fatigue issue, not a UI one.
If they were independent levers capable of being physically locked together, you'd probably notice when they aren't moving together. Of course, you could easily have similar visual feedback on a touch screen. Seems like the real issue is described later in the article; throttle control can be split between control stations, which is actually insane, I can't imagine a scenario where you'd want that.
The fact that control settings were not synchronized between stations and cleared when turning over control is sure a blunder. The checkbox is not to be blamed however, it's a sane design choice.
Fun fact: on some airplanes if you pull on the captain's yoke and push on the first officer's, the elevator will split, the left part doing the nose up, the right one doing nose down.
>> the previous sailor may have set the rudder 5 degrees right in order to compensate for wind or water current.
>> In older ships, speed is basically controlled by a forward/backward joystick. Push it forward and the ship accelerates. Pull it back and the boat slows or goes in reverse.
>> Most of the flaws of the touchscreens could be just one software update away.
Sorry, they lost me with this. Don't lecture professionals about interfaces if you clearly do not understand the systems controlled by those interfaces. Ships are not cars. Rudders are not steering wheels. This is not a video game.
I know there is a HN guideline about not commenting on stuff like this, but in this case the text was so large it prevented me from wanting to read the article at all. Just uncomfortably large.
Meta: the irony of talking about bad UI on a webpage where the typeface is just too damn big for a 1080p desktop screen, and where the browser's zoom-out function doesn't do anything because of some "clever" CSS for the font-size.
This website is completely unreadable on an ultrawide monitor, it seems to make the font size dependent on the window's width, which makes it absolutely massive (I can maybe fit half a paragraph on the screen?), and the browser zoom does not seem to change the font size because of it...
> the browser zoom does not seem to change the font size because of it
This is one of the worst cases of this I have seen. It is literally unreadable for me. Normally I can use browser zoom to sort it out, but this beauty even defeats that workaround.
Yup, but while making the window smaller to write this I noticed I can make it a small window, about the size of my phone, to get text that is reasonable.
The site, giving UI advice, is written assuming your display is always ~7" ?
I thought that the site must be designed for mainly mobile, so I switched mobile mode on in the dev tools in firefox. However then there is some menu header text overlapping the main body.
It seems the design of this site is very much style over function.
One solution you could use to get around this weird design decision, is to use the reader mode in Firefox (I'm not sure what is the alternative for other browsers).
All they have to do is remove "p {max-width: 28rem;}" and the website becomes beautiful. The text fills the entire width of the screen, and the font scales with the screen size (though they should dial it back a bit, or have a different scaling ramp). It's like reading an actual document, and how the web should be. Bonus points, it's ridiculously "responsive" for small screen sizes.
Yes, I know they got it kinda "wrong", but they're arguably going in the right direction here. I'd give the author points for trying something out of the box, and also for "sticking it" to the monstrosity that Bootstrap started many years ago.
The font somehow re-scales if you zoom the web page, at least with mousewheel zoom on a desktop browser. Instant accessibility failure. I've also never seen this before and couldn't work out how it had been achieved?
Hm. Someone whose entire article is about bad UI design makes it impossible for me to make the way-too-large fonts on the page smaller by using the controls my browser has specifically for that purpose.
I haven't even read the article yet and I'm already skeptical about this author.
So whisper is 6 years old and this supposed UI expert never noticed this issue?
Or, for that matter, never considered that maybe trying to dynamically calculate font size--instead of just using standard CSS sizes like, oh, I dunno, "100%" or "medium", which are specifically intended to present the best font size for the user taking into account their device's configuration--wasn't a good idea?
> Two years ago a Navy destroyer was ripped open by the nose of a Liberian tanker. Ten sailors were crushed or drowned as their sleeping quarters filled with water after the collision. At the heart of the tragedy is a single checkbox on a touchscreen.
No, this is crew incompetence, the checkbox is just covering the fact the crew has no idea how the vehicle works or how to operate it.
> The crew believed they had lost control of the ship because they were relying on the main steering controls, the rudder, without realizing that the ship was turning because of the secondary steering method, propellors set at different speeds
Let that sink in (no pun intended) - the crew is not aware the propellors can spin at different speeds... for three minutes... with what appears to be two giant sliders on the screen.
> Let that sink in (no pun intended) - the crew is not aware the propellors can spin at different speeds
This isn't a very good take. Every mariner is aware of using differential throttle to manoeuvre. It is a primary means of control below the speed where the rudder is effective.
You've completely missed the human factors element. When what you are seeing doesn't match your expectation then it can quickly lead to issues. It's practically the exact same as AF447. The urgency of being about to crash into something can make it very hard to step back and understand what is actually happening.
The air industry is very good at blame-less post-mortems and finding root causes. I think the Navy could benefit from some things from their playbook. I think one major contributing factor is that Navy ships are so bespoke compared to airliners.
The Nuclear Navy is also excellent at post-mortems (though they’re called critiques – it is still quite possible and reasonable to blame someone if indeed it was personnel failure) and finding root causes. It wouldn’t surprise me to learn that the conventional fleet did things completely differently.
An engine order telegraph wouldn't have this problem. So IMO it is very much a design problem, a problem that was already solved but needed to be created again so we can learn those lessons again.
The design looks pretty terrible. If the same UI can do very different things based on checkboxes and controls can be spread out over multiple stations non-obviously, that's about recipe for causing the problems above.
It sounds like they had a button for reverting all controls to a single station, if that wasn't used to recover obvious control that's on the sailors, imo. The fact that such a button is necessary to recover from a run of the mill situation is however a serious design failure.
This is nonsense. It's the same argument that people use against memory safe languages "it's possible to write memory safe code in C, you just have to not be incompetent". The truth is, all people are various levels of incompetence, and varies according to many different parameters that are all perfectly expected (like sleep deprivation).
You design a system to be as robust as possible to all operating conditions, which includes human fallibility.
I'm very curious about that. It sounds like the kind of thing which is done just because it can be done. But in doing so it adds a vast amount of complication and makes everything very error-prone (as you say).
What is the purpose of passing separate parts of the control to separate stations? When is that useful?