User interface design timeframes: from 0.1 seconds to 10 years (2009) (nngroup.com)
109 points by hazelnut-tree on March 7, 2023 | hide | past | favorite | 56 comments



I remember reading this a long time ago - maybe even when it was first published? - and formulating some UX rules of thumb based on it.

0.1 seconds or less: the illusion of direct manipulation is maintained. “You feel like you directly made the thing happen”. There should be no loading indicators of any sort. Animation should be sparse and economical (at 60fps you have 6 frames to tell the whole story), or absent if possible (`duration: 0.3s` instantly destroys direct manipulation).

1 second or less: 2-way interaction with flow. “You feel like the computer made the thing happen”. There should be a loading indicator of the indeterminate activity-is-happening type (spinner, pulsing gray text skeleton, etc). A vague reassurance the computer is working on it is enough.

10 seconds or less: Loss of control. The computer has put you on hold, and you’re waiting for it to get back to you. There should be a progress bar, spinners won’t cut it. Users don’t like losing control - take this period seriously.

More than 10 seconds: Loss of user. Persist session data to localStorage so at least it’ll be there if they ever come back.
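
The thresholds above can be sketched as a tiny dispatch helper. This is just an illustrative sketch; `indicatorFor` and the `Indicator` type are names of my own, with the cutoffs taken from the rules in this comment:

```typescript
// Map an operation's expected duration to the feedback it deserves,
// per the 0.1s / 1s / 10s rules of thumb above.
type Indicator = "none" | "spinner" | "progress-bar";

function indicatorFor(expectedMs: number): Indicator {
  if (expectedMs <= 100) return "none"; // direct manipulation: any indicator breaks the illusion
  if (expectedMs <= 1000) return "spinner"; // flow preserved: vague "working on it" reassurance suffices
  return "progress-bar"; // user on hold: show determinate progress, take it seriously
}
```

Past the 10-second mark the indicator choice matters less than persisting the user's work so it survives their departure.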


>0.1 seconds or less: the illusion of direct manipulation is maintained.

100 milliseconds is still an atrocious amount of latency in many contexts (typing for example). There were some Microsoft usability studies of tablets, people dragging their fingers on the touchscreen to draw things. They needed on the order of single-digit millisecond latency to make it feel immediate, because otherwise you could see the drawn brushstroke noticeably lagging your finger.

It's true that human reaction time is at roughly the 100ms timescale, but that got thoughtlessly transmuted into "latency below 100ms doesn't matter" (or 200ms or whatever they read), which doesn't follow at all. The hand-eye-brain system is a very complex prediction-feedback system, used to working in real life where there's usually zero real latency between what your fingers do and how the world reacts. When primitive man threw a spear, its ballistic trajectory started the instant it lost contact with his hand, not 100ms later.

Your brain is already doing the biological equivalent of lag compensation and rollback to account for how long nerve signals take to travel and how long muscles take to move and how fast its own neurons can process information. What you consciously perceive is a gestalt model, continuously smeared over what the brain thought the world was 100ms ago, and what it predicts it will be 100ms in the future. Adding more latency and jitter on top of that can only make the error rate worse.

And doubly so when there are different amounts of latency for the different sense modalities, e.g. visual vs audio vs tactile. I remember reading about a prototype for a sinister riot control weapon that echoed people's voices back to them at a slight delay, it totally fucked up the auditory feedback that underlies the ability to speak fluently.

See: https://danluu.com/keyboard-latency/#appendix-counter-argume...


Yeah, these rules are scoped to “doing stuff by clicking on a web page”. They are rules for buttons, basically.

Typing absolutely needs to be “instantaneous”, i.e. it happens by the very next frame, so “< 16 milliseconds”. Same goes for mouse movement, the cursor has to be perfectly in sync with your hand movement. These are a step beyond “it feels like you made the thing happen”, they are more primitive, like maybe “it feels like you are the thing”. When you put touchscreen gestures in the same category as mouse cursor movement and typing, it seems obvious you would need single digit millisecond feedback - but it did take us some time to empirically discover that was the natural category for gestures.


>These are a step beyond “it feels like you made the thing happen”, they are more primitive, like maybe “it feels like you are the thing”.

That's a great way of putting it. I imagine VR developers have to deal with this.


I vividly recall Carmack describing some of the lengths they had to go to to get VR to respond to body movements. IIRC it was crucial to get the scene to move around exactly when your head moves around, or else your body would think you’re poisoned/dying or something. The PC would be polling head position and rendering at like 90fps, and even that was too much latency, so they had this insane hack where the rendered frame was sent down the wire to the headset, and the headset itself had some onboard processor that would skew and shift the frame according to the last data from the accelerometer literally right before putting the frame on screen.

https://developer.oculus.com/documentation/native/android/mo...


> the headset itself had some onboard processor that would skew and shift the frame according to the last data from the accelerometer literally right before putting the frame on screen

FWIW that's only needed when you don't have enough bandwidth to send an uncompressed video stream to the headset and need a decompression step that introduces additional latency (e.g. you're sending the video stream over 6 GHz Wi-Fi or USB instead of DisplayPort).

Tethered headsets do use a similar technique in situations where the game can't keep up with rendering at 90fps but that's done on the computer's GPU, not on the headset.


Yeah, on re-reading I don’t know where I got the idea the headset was doing on-board processing. I recalled that rendering was done using positional data and they had to add a compositor post processing step afterwards using newer position data, but I hallucinated that the need for low latency was so extreme that it had to be done on the headset.


Again, it does happen on the headset, but that's only necessary in situations where there's not enough bandwidth to send an uncompressed video stream.

See Oculus Link over USB, for example: https://developer.oculus.com/blog/how-does-oculus-link-work-... The diagram there shows Timewarp running on both the PC and the headset side.

In headsets like the Rift or Valve Index that send video streams over HDMI or DisplayPort that extra step isn't necessary; everything happens on the PC's GPU.


Ah, ok, that’s how I ended up with the original mistaken belief. I remembered “Timewarp is necessary” and “Timewarp can happen on the headset” and conflated them into “Timewarp necessarily has to happen on the headset”.


Yep, it's even worse in gaming, where some users are likely capable of noticing 1-10ms latency differences. Not consciously, perhaps, but enough to let a reviewer describe a game as sluggish, as noted in today's PC Gamer article [1].

https://www.pcgamer.com/we-played-the-fps-with-unlimited-des...


When building tools for developers and data scientists to iterate (any tool used in an iterative creative process, like writing code, building ML models, etc.), I have similar breakdowns. This might be a build process (i.e., Makefiles and such), or processing large amounts of data for analysis, simulation, model training, etc.

4+ hours: At most one iteration per work day. Because of the wait times, users will try to fit as many changes into each iteration as possible. Users just want to find something that works because trying alternatives can cost days, making results less optimal. Users will typically run jobs overnight so they can work on the next iteration all day.

1 - 3 hours: 1 - 3 iterations per day. Users will do some exploration on the most important components. Context switching is a large cost because they have to do something else while it's running.

10 - 60 minutes: Still limited to a handful of iterations per day, largely because users don't want to do more context switches than that in a day. The run will finish sooner, but the user won't be ready to switch back. This is especially true later in the day as people get tired.

30 seconds - 10 minutes: Context switches can be simpler because they can be small tasks like catching up on email. Exploring different options is starting to pay real benefit here.

< 30 seconds: Context switches are now minimized, keeping the user in flow. Feels more like interactive exploration rather than idea -> do something else -> remember idea -> evaluate. Users will try lots of different options, often yielding much more optimal results.

Doing a bunch of work to get things from 5 hours to 4 hours is not of much use. Getting things from 10 minutes to 10 seconds greatly multiplies the value that others can create.


Yes! 5 hours -> 4 hours is a quantitative performance increase; 10 minutes -> 10 seconds is a qualitative performance increase. Crossing from one time regime to another is a huge victory.


Back in 2000, there was the "3 second rule": web pages should be fully loaded within 3 seconds. At the time that mostly meant optimizing image sizes, as loading was largely serial, with no async HTTP requests.

It saddens me that we have become so complacent; sites like Jira take 5 to 8 seconds to load a page that feels like it should be near instantaneous, and we're supposed to be OK with that. Under the hood, these sites are a spaghetti of nested API calls that probably make a lot of sense to the people owning the products but severely affect user experience.


I would have killed for 3 second load times in those days. I suspect that was a highly aspirational guideline.


You're not supposed to be OK with that. You're supposed to make noise and complain. You're supposed to file bugs and create issues in their repos. You're supposed to switch companies when something isn't working anymore and they refuse to prioritize.

But people don't.


How many people even work at companies where the people purchasing enterprise software listen to their intended users at the company?


And now we got to the source of the problem. Wasn't that easy?


"10 seconds" needs an addition: even if a site opens in 0.1 seconds but shows cookie management options, in my mind it will take 10 seconds to get rid of them, so I close the site. It's often not about actual speed, but perceived speed "from my experience".


As I make sure to uncheck the options including “legitimate interest”¹, that can be far longer than 10s. Though likely not, because unless I really want to read the information on that particular page my time is probably better spent looking for the same information elsewhere².

--

[1] which really means “we see your preference not to be stalked, but fuck you and your silly preferences”

[2] if I see the admiral³ logo in a consent pop-over my time on the site is ~0.1s as I know my choice is “uncheck literally hundreds of boxes as there is no say-no-to-all single click, accept hundreds of 3rd parties might track me, or leave”.

[3] not an issue with this site, their opt-out seems relatively sane, though I did need to click a few times to make sure “legitimate interest” wasn't hidden in nested minimised content.


Do you really care this much about some bits in a computer that might try to sell you laundry detergent? I always hit Accept All because I find GDPR unconscionable and want to support the sites I use for free.


For me personally, I haven’t figured out whether I care or not about cookie privacy, because I’m too traumatized by the absolute fucking UX catastrophe that is “legally mandated modal/large banner on first visit to any site”. A modal is a pop-up window for the SPA age. We hated pop-ups so much that we took it to court trying to make it illegal, with judges in most jurisdictions settling on something close to “no, pop ups aren’t illegal, unless they’re coercive or misleading”.

And now it’s illegal to not have a popup. Every site is legally obligated to hit every user with one of the all-time most hated UX experiences ever.


> And now it’s illegal to not have a popup.

It absolutely isn't. You, along with many others, are falling for the advertising industry's attempts to turn you against privacy regulations by claiming the regulations force them to inconvenience you.

In fact a great many of the pop-ups (the vast majority) are actually in breach, deliberately so, because they make it far more work to opt out than to opt in.

If all you track by default is tokens required for correct functioning of the site (session tokens and such) then you do not need a pop-up at all.


I’m not “falling for the advertising industry's attempts”. You’re missing my point, which is that the law specifies you must use a particularly atrocious UX pattern.


I am not missing your point. I am asserting that your point is incorrect. Show me where in a law/regulation where anything like that is stipulated.

IIRC all that is stipulated in the EU regs, for instance, WRT cookies and other tracking tech is that opting out should be as easy as opting in, that there should be no auto-opt-in, that you can opt out later if you do opt in, and that you should be properly informed about what you are opting into.

The bad UX patterns making it time-consuming, confusing, or otherwise unpleasant, to opt-out are actually against the spirit of the law (perhaps even the letter of the law) but unfortunately it is not proving really practical to enforce.


They aren't legally obligated to make opting out as painful of a process as possible by presenting you a gazillion individual options and then some kind of loading spinner while they "Process your request" for half a minute as an extra "screw you". When that happens I just bail out as that shows just how little respect the site owner has for their visitors.


I come from a time when commercial interests couldn't easily stalk me and collect piles of data on my behaviour in order to eke out a few pennies more profit, and yes, I do care that things have changed in a direction quite away from that.

The multitude of information stored about us is used for far more than just selling, per reports like https://news.ycombinator.com/item?id=35028107 so I also object on principle as well as for selfish comfort reasons. I have nothing to hide in that regard now, so nothing to fear, but I know people who would if a similar law covered where we live, and people who did have things to hide that this sort of data collection would have endangered when other crappy laws were in force (people who were homosexual when it was still effectively illegal to be, for instance). And who is to say some other law might not pop up later which makes me want to hide something I now can't, because every advertiser on the planet knows it and can very easily be made to reveal it?

> because I find GDPR unconscionable

From this I surmise that you do not understand GDPR and related legislation, and have fallen for the advertising industry's attempts to turn you against such regulations by making you believe they are forced by them to inconvenience you.


> I surmise that you do not understand GDPR

I'm familiar with it. I fundamentally disagree with its assumptions on rights.

If you send me a letter, you shouldn't be able to compel me to shred it. If you come into my shop with a clear expectation of security surveillance, the video should be mine entirely.

If you send my server your IP, that's my information now, and you shouldn't be able to compel me to delete it. But somehow this backwards concept of ownership has gotten popular where every individual is the perpetual tyrant of any information they leave in the world as they go through it. They can tell me to forget something they told me and now various governments will try to punish me if I don't agree to the façade. 1984 comparisons might be a cliché but this fits the memory hole analogy all too well.


Yes, "Keep all cookies" and "Only essential" should be right there, if anything at all. I don't want to "manage cookie settings" and face a bunch of switches - I'm often gone at that point (which BTW means I will not be sharing a link).


That’s not accidental. They want it to be extremely annoying and confusing so that you are forced to click accept all. The EU needs to clarify their legislation but for now they’ve made the internet objectively worse for everyone.


Mostly agree, but:

> The EU needs to clarify their legislation

No, from what I understand, the law is already clear that “reject all” must be as easy as “accept all”. Those who don’t show this are already breaking the law.

> but for now they’ve made the internet objectively worse for everyone.

No, the companies with cookie banner web sites did that.


The law should be opt-out by default, not “throw up an interstitial”

> No, the companies with cookie banner web sites did that.

The companies are doing exactly what they need to do to stay in compliance while making sure their business is impacted as minimally as possible.


> The law should be opt-out by default, not “throw up an interstitial”

That would be a bad law, with too many details that would be wrong in many contexts.

Stop making excuses to people that are trying to fuck you.


> The EU needs to clarify their legislation but for now they’ve made the internet objectively worse for everyone.

The legislation is already clear: it should be as easy to opt out as to opt in. People/businesses didn't seem to get it, but more and more they are starting to realize it. Also, they don't want to be caught once fines start being handed out.

"Objectively" should mean objectively; just because it doesn't fit with how you (or your employer) think the internet should work doesn't mean that's what everyone thinks. I'm quite happy that websites have to disclose what they are doing and ask for permission. They could also not track me and not have to ask for any permission, but not many websites choose to act like that, so it's nice to get the heads up.


What I’m saying is that rejecting all non-essential cookies should be the default, with no interstitial. There should be a non-intrusive banner or other kind of notification asking you to opt in.


I think they do now? At least, on Google, I now get two similarly-looking buttons: "Reject all" and "Accept all" next to each other, with a small "More options" link underneath. Pretty close to what's shown here:

https://www.theverge.com/2022/4/21/23035289/google-reject-al...


Recently I've seen many sites put a "reject all" button in the same size and color as the "accept all" button, Stack Overflow for example. I thought some legislation must have changed somewhere.


Legislation hadn't changed, but some big company (Google?) realized the EU was serious and they would get fined big time if they continued to pretend not to understand, and then a few more took the hint afterwards.


I thought the legislation always said something like "rejecting must take the same number of clicks as accepting" and the ones without a "reject all" just weren't in compliance.


And yet somehow they’ll remember my choice to accept while re-asking every visit if I reject.

The legislation needs to be opt-out by default with a non-obtrusive request to opt in. These aren’t good-faith players, so why indulge them in legislation?


> And yet somehow they’ll remember my choice to accept while reasking every visit if I reject.

Guess what, the GDPR doesn't allow that either.


See also “10 Timeframes” by Paul Ford:

https://contentsmagazine.com/articles/10-timeframes/

This is Paul Ford’s insightful and poignant keynote talk for the 2012 graduating class of the Interaction Design MFA program at the School of Visual Arts.

I’ve been haunted by these lines:

“I can never remember if we are supposed to live each day as it were our last, or if it’s the first day of the rest of our lives. It’s hard to tell sometimes. We make movies about it over and over again. … Of course these movies are made by people who are totally dedicated to making films. They give up their lives and neglect their children to make movies about the value of family.”


Very interesting stuff, both this article and the one originally posted.

This stood out to me:

> This is also from The Soul of a New Machine: One of the engineers in the book burned out and quit and he left a note that read: “I am going to a commune in Vermont and will deal with no unit of time shorter than a season.”

The idea of slowing things down is a very interesting idea to me. I'm fairly certain that all the fluff in the world about moving fast actually slows us down in the long run. I also really need to read The Soul of a New Machine.


Even on the desktop. We had a Solvespace user complain that the program "locked up" on a change to his design. It was really taking about 3 minutes on my machine, so not locked up, but unacceptable. Over the next months we made a number of improvements that brought that test model change down to 6 seconds - a 30x improvement. That made it usable, but still way past the 0.1 and 1 second marks. The real problem is we've got some O(n^2) algorithms that need to scale better, but everything helps.


6 seconds for a change is something you need to throw a kitchen sink of UX techniques at for sure, especially if it’s the kind of change the user wants to iterate on.

If there are multiple dials to change, debounce and throttle aggressively to batch together as many design changes as possible.
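
As a minimal sketch of that batching (names my own): a debounce wrapper that lets a burst of rapid dial changes collapse into one recomputation after the burst settles.

```typescript
// Debounce: reset the timer on every call, so fn fires only after
// waitMs of quiet - one recomputation per burst of changes.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // a newer change supersedes the pending one
    timer = setTimeout(() => fn(...args), waitMs); // fire with the latest arguments
  };
}
```

Routing every dial's change handler through something like `debounce(recalculate, 200)` means a user dragging a slider triggers one 6-second recalculation, not dozens.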

Show a progress bar - O(n^2) algorithms suck in most regards, but one advantage is that they are ideal candidates for progress bars. Initialize the bar with n steps and put a call to tick() in the outer loop, and you get a very truthful and satisfying progress bar.
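
For example, here's a hypothetical O(n^2) pairwise pass (the function and callback names are mine) that ticks once per outer iteration, so the bar advances in n honest, roughly even steps:

```typescript
// Compute all pairwise distances, reporting progress once per outer
// iteration: n ticks total, each representing a comparable slice of work.
function pairwiseDistancesWithProgress(
  points: number[],
  onTick: (done: number, total: number) => void
): number[] {
  const out: number[] = [];
  const total = points.length;
  for (let i = 0; i < total; i++) {
    for (let j = i + 1; j < total; j++) {
      out.push(Math.abs(points[i] - points[j]));
    }
    onTick(i + 1, total); // tick in the outer loop, as described above
  }
  return out;
}
```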

Take a page from web dev: don’t block the main thread. Let the user move the view around while it’s recalculating. Let the user make another design change while it’s recalculating (just start the recalculation over if they do).
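
The "start the recalculation over if they make another change" part can be sketched with a generation counter that marks any in-flight run stale the moment a newer one starts. This is a sketch under my own naming; the chunked solver is assumed to check `isStale()` between work chunks and bail out early.

```typescript
// Each start() bumps the generation; older runs see isStale() === true
// and can abandon their work, so the newest design change always wins.
class Recalculator {
  private generation = 0;

  start(work: (isStale: () => boolean) => void): void {
    const myGeneration = ++this.generation;
    const isStale = () => myGeneration !== this.generation;
    work(isStale); // solver yields periodically and checks isStale()
  }
}
```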

For inspiration on hardcore stuff you can do to handle this “user wants to make iterative changes but each change involves heavy computation” problem, you can check out Affinity Photo. They go to extreme lengths in the pursuit of “feels like real-time feedback”, like taking a heavily scaled down copy of your image and applying the filter to that so they can show you an approximate preview of the final result while it’s computing. This is probably more effort than just making a better algorithm though.


In the world of JS heavy web frontends, Electron and apps like Slack and MS Teams, this article is like a cold shower.


Not a front-end dev, but this is why I love using vanilla JS and pre-rendered HTML in side projects. Easier to maintain long term as I don’t have to deal with ridiculous framework updates, and it feels snappier.


> in side projects

Why not for the main projects too?

Sharing a vanilla codebase can be complicated, but I am finding that it is possible to convert others to this religion with a little bit of pair programming.

Once someone experiences this path, you may find they get hooked on it pretty quickly.


I never experience slow frontends. It's always either very slow requests, probably to some Java monstrosity, or newsletter/cookie modals, that make me leave a website


I don't understand why it is implied that Java is slow. A running JVM is not slow.

A "monstrosity" as you put it could have been written in any language.


> Awkward sites that require much more than a minute for basic tasks — such as transferring money from a savings to a checking account — will be abandoned.

An impatient user indeed, if already-written checks or automatic drafts from the account are pending.

Seriously though, this points to a special case which we might call "end-of-flow". If all required info for the transaction has been keyed in, short term memory needs only to retain awareness that a confirmation notice remains needed.

Likely next user actions are:

1. Switch to an unrelated task,

2. Check back after a few minutes for the transfer verification,

3. Close the page of the now timed-out session.


I remember that post (I have been a follower of NNGroup for a long time. I've taken a number of their classes).

The 1 second -> 10 second gulf is the most difficult one. I try to keep many interactions and animations to 0.25 seconds. That seems the slowest "acceptable" time for interruptions.

Many transitions and reaction animations can take over a second. In my experience, this can look very cool, the first couple of times, but rapidly becomes an annoyance.


>I try to keep many interactions and animations to 0.25 seconds. That seems the slowest "acceptable" time for interruptions.

I like the way old desktop UIs used to work: the action happened immediately, but there was also a fast animation that acknowledged it. Like, minimizing caused the window to disappear instantly, but there was also the wireframe rectangle that quickly shrank into the taskbar over the following few frames. The animation didn't interrupt user flow at all!
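
That pattern can be sketched as: commit the state change on the current frame, then kick off a purely cosmetic animation that nothing waits on. The names here (`minimize`, `Win`) are hypothetical, just to illustrate the ordering.

```typescript
interface Win {
  minimized: boolean;
}

// The real state change is instant; the shrink-to-taskbar effect is
// fire-and-forget decoration that acknowledges it after the fact.
function minimize(win: Win, playAnimation: (w: Win) => void): void {
  win.minimized = true; // committed this frame - no waiting on the animation
  playAnimation(win); // cosmetic only; user flow is never blocked by it
}
```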


That's a great point. After-the-fact animations.


These are very good guidelines that aren't respected well enough. Clicking between the tabs on Windows Task Manager (processes, application history, and so on) takes about 0.8 seconds on a modern desktop.


Powers of 10... goes from 10 sec to 1 minute (and 10 min -> 1 hour). Sorry, had to say it.

Ah, downvote me, I deserve it.


This is all very obvious.



