How annoying is too annoying? (input-delay.glitch.me)
115 points by mnem on Aug 13, 2019 | 73 comments



Apparently typing latency doesn't annoy me at all. I think I just queue up what I'm going to type "in my fingers"; it's a one-directional process, and in general I don't have to see the letters before I keep typing whatever is in my queue.

What does annoy me about user interface latency: when a page loads slowly, I try to click something, the layout changes at the last minute, and I end up clicking something else. That's an example of a full feedback loop where my actions aren't coming directly from me. They're dependent on what I'm seeing on the page.


I share your sentiment. Quoting:

Now, finally, why might input lag for keyboards be less frustrating than on tablets?

My theory is: because the keyboards themselves provide an important source of feedback. Namely: the tactile feedback when a key is pressed tells you that you did what you set out to do. At the very least it’s possible to feel whether something was pressed or not; experienced typists will also easily be able to tell whether it was the correct key or not. And thus you yourself are taken out of the equation as a potential source of error, removing any self-doubt while waiting for the screen to catch up with your action.

When you’re working on a tablet, you’re never provided with such tactile feedback, and are thus forced to wait for the lag to complete whether a mistake was made or not.

http://www.remarkablyrestrained.com/keyboard-lag-and-frustra...


The search page on docs.python.org was like that until recently. You typed a search term, results started to appear, and then, when the search completed, a line "Search finished, found xx pages" appeared above the results, pushing them all down at some difficult-to-predict moment.

That is fixed now, and it is so much better.


I feel you: once a control is displayed, it should never move. To me the main offenders are autocomplete suggestions that keep loading on top after other fields have already been displayed.


I thought the buttons at the top ("Fine", "Could be worse", ...) were survey answers. Nope, they are how you choose the input latency.

The "We're done here" button is only 200 ms, and "Surprise me" only goes up to 500 ms. I have experienced far worse, and more variable, latencies over Windows Remote Desktop and SSH due to bad Internet connections - like multi-second delays. This demo pales in comparison to the real world.


Random delay between 0 and 5s on SSH is nothing new... thank goodness for mosh.


The delay is editable, you can set it to 3000 if you want.


I have heard about people putting in the work today so they can read/inspect it tomorrow.


I think the experience of annoyance would be very different if you were forced to "break your flow" - e.g. if you need to correct some characters at the end, or insert something in the middle of what you have already typed - because then the feedback is much more important.

If I just keep typing, typing, typing without needing to worry about corrections, the annoyance is probably minimal. I could just ignore the latency, close my eyes, and keep typing, assuming I have really high touch-typing accuracy. (I know that's something I sometimes do when I face high latency in real usage.)

Maybe a better and more meaningful simulation is to show some random sentences that you have to read and type, and then randomly force you to make some corrections as you go along (e.g. suddenly change the last word from "their" to "they're"), to simulate you having mistyped something.


On my iPhone I find everything up to maybe 0.6 seconds to be just fine. Part of this may be being used to using SSH over VERY poor network connections.

I think the biggest frustration with latency is not the latency itself, but rather latency combined with the possibility that what you entered didn't register at all, plus the time pressure of having only 15 seconds of signal to submit your command - situations that often occur when using Internet-based sessions in the wild.

The other source of annoyance is trying to "not crash the system". Often when a system presents high latency like this, I tend to consciously pay attention to how many "buffered" inputs I'm sending for fear of crashing something, and that self-throttling can get tiring.


Don't forget the classic "while you tried to click on something, something else moved in the way of it" problem.


Bonus points if this results in mis-clicking something dangerous or irreversible, like "delete", "cancel", "post", "send", "close", "exit without saving" or the like.


In the same way that browsers refuse to do certain things unless coupled with human input (like open a pop-up window, or submit auto-filled form data), they should refuse to send a click event to an element that hasn't been stationary for a short while.

It would need to be defeatable for "game mode" though, for games involving clicking moving targets. Any other edge cases to consider?
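Something like this guard logic, as a rough sketch (the `shouldAcceptClick` helper and the 500 ms settle window are made up for illustration, not an existing browser API):

```javascript
// Sketch of a "settled element" click guard. The pure decision logic:
// only accept a click if the target element has been stationary for at
// least `settleMs` milliseconds.
const SETTLE_MS = 500; // assumed grace period, tune to taste

function shouldAcceptClick(lastMovedAt, now, settleMs = SETTLE_MS) {
  // Reject clicks on elements that moved too recently.
  return now - lastMovedAt >= settleMs;
}
```

In a real page you would update `lastMovedAt` whenever the element's layout box changes (e.g. from a ResizeObserver or a rAF loop comparing bounding rects) and consult `shouldAcceptClick` in a capturing click listener.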


This is why most platforms provide both raw and semantic events: mousedown and mouseup are instantaneous and reflect the raw input, but click is meant to describe user intent to initiate an action. Leaving the raw events for games but requiring semantic events for dangerous actions seems like a decent compromise.
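As a toy illustration of how a semantic click can be derived from the raw events (simplified - real browsers actually fire `click` on the nearest common ancestor of the down and up targets):

```javascript
// Simplified model of click synthesis: a click is only emitted when
// mousedown and mouseup land on the same element, which is part of what
// makes `click` a statement of intent rather than raw input.
function makeClickSynthesizer(onClick) {
  let downTarget = null;
  return {
    mousedown(target) { downTarget = target; },
    mouseup(target) {
      if (target !== null && target === downTarget) onClick(target);
      downTarget = null; // a drag off the element yields no click
    },
  };
}
```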


My pet peeve, way more annoying than latency.


This page is super misleading at least in Safari on Mac. The latency number appears to actually be a minimum delay between each letter. 200ms latency should mean that each letter appears 200ms after you press the key. But if you type reasonably fast on the 200ms setting, the screen gets farther and farther behind the keyboard, to the point where I can end up waiting many seconds for the text to finish showing up after I'm done typing.

As a result, this is going to make moderate to high latencies look way, way worse than they really are.
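If that reading is right, the difference between true added latency and a minimum inter-character delay can be modeled like this (a sketch of the observed behavior, not the site's actual code):

```javascript
// Two models of a "200 ms" text box, given keypress times in ms.
// fixedLatency: each character appears exactly L ms after its keypress.
// minSpacing: characters are also forced at least L ms apart, so typing
// faster than one key per L ms builds an ever-growing backlog.
function fixedLatency(pressTimes, L) {
  return pressTimes.map((t) => t + L);
}

function minSpacing(pressTimes, L) {
  const shown = [];
  let prev = -Infinity;
  for (const t of pressTimes) {
    prev = Math.max(t + L, prev + L);
    shown.push(prev);
  }
  return shown;
}

// Typing 10 characters at 100 ms intervals with L = 200:
const presses = Array.from({ length: 10 }, (_, i) => i * 100);
fixedLatency(presses, 200)[9]; // last char shown at 1100 ms
minSpacing(presses, 200)[9];   // last char shown at 2000 ms, and growing
```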


Yep. What you describe would be the expected behavior over a high-latency, but otherwise unloaded, connection. However, I'm wondering now: In the case of an ssh session over someone's fully-loaded typical home Internet connection, might it be a mixture of both? Especially with buffer bloat and lacking any traffic shaping, can I expect that each individual packet (which can be reasonably mapped to a key press in interactive sessions) is queued right after the previous one? Assuming many many packets waiting for egress without any prioritization, when each keypress packet actually makes it out vs. any other will essentially be random, no?


Latency on a loaded connection can be really bad but it won’t be related to how fast you type unless the connection is really slow (like, early dial up modem slow).

When you press a key, it will take a while to see the result. Depending on how things are set up, your computer will either send all subsequent keypresses immediately while you’re waiting, or it will buffer them and then send them all together once you get the reply. Either way, you won’t get this long-term buildup.


Nagle's Algorithm is also involved; it's why newer ssh-alikes (mosh, etc.) use UDP.


You need much larger variable latency range to be truly annoying.

I'm talking high enough that someone backspacing over a mistake will think that the interface didn't receive the input. So they do another backspace.

To get it just right you'd probably trigger the initial backspace callback immediately after the user types a 2nd backspace, then wait a long time before triggering the 2nd backspace callback. Since the original problem character has been removed, the user will most likely try to continue typing. Finally, flush the queue of characters all at once, starting with the 2nd backspace. Now the latency has caused the character right before the queue to get eaten.

This is how you cause people to throw their device against a wall.
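The character-eating effect reduces to a trivial fold over the reordered event stream (illustrative only; the timing trickery above is what hides it from the user):

```javascript
// Apply a stream of events to a text buffer: "BS" means backspace,
// anything else is a typed character.
function applyEvents(text, events) {
  for (const e of events) {
    text = e === "BS" ? text.slice(0, -1) : text + e;
  }
  return text;
}

// The user meant to fix "helx" -> "help" with one backspace, but the
// lag tricked them into a second backspace before they resumed typing:
applyEvents("helx", ["BS", "BS", "p"]); // "hep" - a good char got eaten
```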


Overzealous autocorrect has a similar effect. Apple has apparently decided that any editing in the vicinity of a word it considers misspelled is meant to replace that word. The other day, I tried to refer to a HN user by username and I must have accidentally overwritten it half a dozen times before managing to get my comment through the OS to be submitted.


For me nothing much changed up to "really not great" - only then did I start to notice the delay.

But "variable latency mode"? That made me more nostalgic than annoyed. It was an instant flashback into the 90s, running Irssi in screen over SSH on a slow dial-up connection while downloading MP3s. Happily chatting away.

Individual perceptions are weird.


Wow, it’s like a hyper-optimised Visual Studio+ReSharper installation.


Funny you mention it. I posted a question about this on superuser. It's currently the second result when doing a Google search on "visual studio input lag". The issue disappeared when running in single-monitor mode, but I couldn't determine the root cause.

https://superuser.com/questions/715607/improving-resolving-k...


I used to work at a company that gave us two machines: a beefy desktop with lots of RAM, many cores, two GPUs, a fast disk, etc., and a mid-range Apple laptop. For consistency reasons, my main development environment ended up being a tmux session running on the desktop, and I would ssh into the machine from the laptop.

The latency hovered around 100ms (it's not great), but it went up and down throughout the day and as I moved in and out of the office. Honestly, it isn't so bad. I find after years of working like this I keep enough of a mental model of what I've typed that I'm not bothered by even the "we're done here" setting.


While we're doing asides: in days gone by I worked with a person who was regularly 2 or more full window switches ahead of Emacs. Meaning, they could type a switch-window command and, using their memory of the code and the point's position there, type a search/delete/change, switch to a 2nd window, do the same, etc., all before the 1st window rendered.

Of course this was on slow computers, so it was not so much a feat of speed as a matter of how much of the code they were holding in their head at any time.


Was the desktop thousands of miles away? I’d expect under an ms from the network for this sort of setup. Maybe ~2ms if it was WiFi.

Unless it was 2.4 GHz, I suppose; all bets are off then.


Have you considered mosh? https://mosh.org/


I must be more patient than whoever wrote this. Only the last one or two options really bothered me at all. Maybe I just type slow.


I type relatively quickly (95 wpm) and I didn't even notice the delay until "really not great" (100 ms), and it didn't bother me until "we're done here" (200 ms). I think it has more to do with expectations than anything else... A "slow" text box just isn't that annoying. I'm usually waiting for my fingers to catch up with my brain anyway, so an extra few ms doesn't change much. It's a little disconcerting to see the text all catch up at the same time, and backspacing at the slowest setting was annoying. But I'm still not seeing what the big problem is.


I suspect delay impacts slow (or error-prone) typists more, since they rely more on visual feedback, whereas fast typists presumably trust physical feedback more.


I'm probably still easily in the 80s WPM, and could practice back up into the 100s in a day or two. Input lag doesn't so much throw off my typing as it makes computing feel kinda remote and gummy rather than real and crisp. You lose the feeling of a direct connection between your input and the computer. Like you're poking the keys with a stick underwater, no matter how fast you type.

Even this text box, on HN, is noticeably disconnected compared with, say, DOS on an IBM PC-XT, or your average text input area on early Apple computers, BeOS on a first-gen Pentium, QNX on same, that sort of thing, and that's without applying the linked site's extra latency. Everything, just about, is a bit muddy on a "modern" computer. iOS is the closest thing to an exception and even that's gotten worse over the years.


I'm a fast touch typist, but input latency really throws me off. If there's more than about 150ms of key-to-screen latency, I'm faster with my eyes closed. I normally scan the screen as I'm typing so I can quickly backspace over a typo, but I find it really difficult to plough ahead if the screen isn't keeping up.

I've had a similar experience with audio systems - I can play guitar tolerably well even if I can't hear what I'm playing, but >40ms of latency will turn me into a ham-fisted mess.


I'd tend to agree that it matters the more you rely on visual feedback. I type on unfamiliar keyboards a lot and use visual feedback almost exclusively; 10ms latency was imperceptible, 20ms was very noticeable, and the 50ms latency was at the point of being unbearable (I'd go find another machine to work on or something before typing with that for more than a couple minutes).


Also it impacts cursor editing (for example using vi) more than just typing text, for the same reason.


Wasn't vi invented specifically for dealing with high latency networks?


I was able to notice it at the 30ms mark, and it gave me a flashback to trying out Eclipse and NetBeans when working on some big unfamiliar project years ago. Couldn't make myself use either, no matter how useful all of the magic tools were...certainly in part because they both had noticeable input lag, which I've now learned I find super annoying.


100ms reminds me of visual studio code.

Funny this comes up today, because I was thinking of going back to vim fully instead of VSCode+vim extension, and using VSCode as a debugger only (love the UI!).


Glad to know I'm not the only one experiencing this! VSCode+Vim has been driving me insane lately with the excruciating typing lag.

I've never learned vim proper (started with Atom and vim-mode-plus), but I keep thinking it might be time to dive in.


Try Neovim, you'll love it. https://neovim.io/


Fun. But I don’t feel it’s entirely accurate. I don’t mind typing with a delay as I can watch my keyboard, but clicking on moving objects with a variable delay is quite dreadful.


Working in Ireland on computers that mostly live in AWS us-west-2 really makes you almost immune to this. ~150ms best case.

Of course, dialup was worse. You can never really get used to 300ms.


So fun story:

When I started using Ubuntu in 2008, I used gnome-terminal as was default, and had no issues for it for quite some time.

Around 2009-2010, I decided to start experimenting with a whole bunch of alternate window managers and terminals. Eventually settled on wmii and xterm.

Since then I've gotten so used to xterm, that I can sense the extra latency in gnome-terminal that isn't in xterm, and it bothers me. This isn't even over ssh or anything, just local.

Toying with this page, gnome-terminal feels about the same as 10ms. And apparently I can still just barely sense the delay at 5ms now. Somewhere below that, probably around 2ms, I can't sense it anymore and it feels like xterm.


When I type it sometimes stalls for a while and then dumps a whole bunch of characters at once (much faster than I actually typed them). Anyone else having that issue?


I grew up connecting to 300 bps BBSes; this is just water off a duck's back for me!


Worth pointing out that the number presented is presumably additional delay on top of the inherent delay of your setup. That means that, especially at the lower end, the values are not comparable between people on different systems.


I recently ran into the input delay problem when I foolishly installed a firmware update on my HP 608 G1 tablet. The firmware update introduced an unfixable bug that "sleeps" the touch panel if it hasn't been used for a period of four seconds and introduces a full second of latency on the first tap thereafter. HP swapped units out while they were still in warranty but late adopters like me are SOL since no further firmware patches will be developed. I don't know what my limit is but that surpasses it and my tablet sits collecting dust.


Eh, it's not really that bad. The variable latency needs to be much more extreme to be noticeable. 250ms is about the threshold where I start to feel hampered. So going from 34ms->78ms->150ms->56ms->14ms really isn't an issue. If I were seeing 100-200ms jumps, I think that would have a much bigger feedback effect, similar to hearing your own voice on delay. Especially if it changes on a `space` rather than every character - just long enough for you to get used to, and anticipate, the delay. Then a wild change.
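A per-word jump like that is easy to generate: hold the latency constant within a word and re-roll it on each space (a sketch; the min/max range and helper name are made up):

```javascript
// Returns a function mapping each typed character to a latency in ms.
// The latency is re-rolled only on spaces, so it stays stable within a
// word and can jump wildly between words.
function makeWordLatency(minMs, maxMs, rng = Math.random) {
  let current = minMs + rng() * (maxMs - minMs);
  return (ch) => {
    if (ch === " ") current = minMs + rng() * (maxMs - minMs);
    return current;
  };
}
```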


Given that I've spent a lot of time developing remotely using Vim, 100ms feels pretty normal to me. After a bit, you develop a buffer in your head and you think in slightly larger chunks of text (a word or two or a short statement), and you express that and then review it for any fixes needed at that time. Error correcting is less immediate, but it's not always a bad thing, as it lets you focus a bit more on what you are expressing instead of whether it's got typos in it.


It would be interesting to study how annoyance with different settings changes based on your typing style. I touch type and was trained to be able to type without looking at the screen and none of them seem super annoying. However, I could see a touchscreen keyboard quickly becoming difficult to use since you can't physically feel when you hit the wrong "key".


Reminds me of operating servers via ssh when on 3g on a moving train.

(Yes, I know mosh exists for this reason)

It actually made me realise that my normal typing in my browser is quite high latency, since the 5-10ms settings exacerbated things to the point where I notice it even though there's no added latency now... one of those "cannot unsee" moments, and it's going to cause me to go insane.


This reminds me so much of macOS Finder. Back in 10.9 there was no delay when creating a new folder and typing a filename. This evening I accidentally caused Finder to navigate away from the "untitled folder" by starting to type the filename too quickly after pressing Command-N. It's a really frustrating UX, and feels very similar to a 200+ ms delay.


Apple needs another of their performance-focused releases, like the one where one of the biggest features was reducing memory use - for macOS and for iOS, focused on reducing system latency in both cases. Both are a lot laggier than they used to be, and the power of the hardware underneath barely seems to matter.


This takes me back fifteen years to when I "upgraded" from a 1Mb/s high-bandwidth cable internet service to a more expensive 128Kb/s symmetric DSL connection because I was working from home over ssh. I went from "really not great" to "fine"/"could be worse", and it made my life so much better.


> This project has received too many requests, please try again later.

Is it possible to describe what it is without spoiling anything?


It's a text box, plus a series of buttons to change the latency between your typing and it appearing in the text box. 10ms, 30ms, up to 200ms.


Two things I would change:

1. The latency selector should be plain numbers, to avoid coloring the perception by the emotional terms used

2. Currently the letters appear with a delay, but the cursor advances immediately (could be my browser) - there is instant visual feedback, which probably reduces perceived latency


This is missing the particularly annoying 'halts for 5 seconds then catches up' option.


I actually found myself getting motion sick at 50ms and above. I'm not sure why, I was a dialup kid from 300 baud on up for like 15 years.

I think something about the drawing is the issue.


When it finally loaded, I found it horrific fun. My sentiments lined up very closely with the labels: FINE, COULD BE WORSE, and then IT'S WORSE and worse.

The variable latency mode was truly annoying.


There's been a lot of research done on latency in audio systems and how it impacts musical performance, so it's probably worth cross-referencing this with those results.


I know this is quite obscure, but there's a great video of Alex Lacamoire (music director on Hamilton) discussing their live digital sound setup and he discusses the topic of latency: https://www.youtube.com/watch?v=jHs0NVvTxHY&t=8m39s


Anything over 100ms is pretty much extremely annoying.

That being said, somehow the variable latency seems way less annoying than it would be if averaged out and set to that constant value.


300 baud tty input was like this. Ah, the memories (ASR-33 teletype in the house, followed by an Olivetti, preceded by a remote-input cardpunch at 75 baud).


"This project has received too many requests, please try again later."

Can't figure out if the site is down or if this is part of the test.


Given how often I remote into my iMac over a vpn that has both cellular and WiFi networks to deal with, none of these bothered me.


If you've ever administered a server via SSH over Tor, you're forced to get used to this sort of stuff.


Reminds me of administering an EC2 instance hosted in Australia from a remote WiMAX connection in the Rocky Mountains.


I don't find it annoying. I can just keep typing knowing everything is queued up.


If you think this is fun, try ssh through a 495ms latency satellite connection.


Reminds me of using Google Drive in Firefox :)


I can withstand them all.


Sort of badly coded. The cursor moves when it shouldn't.



