The good news is that we can get to even better results than the Apple 2 by choosing pieces carefully. This is not my final result, but I'm seeing about 19ms from USB to light on a 2017 MacBook Pro 13" connected to a 144Hz monitor.
Given the unexpectedly large contribution of the keyboard, I think there is a _tremendous_ business opportunity in empirically validated actual low latency keyboards. Gamers will eat these up, but I would buy one in an instant for coding.
For a better test, in my opinion, the key actuation point should be determined and the timer started at that point. Of course this depends on what you want to test. But to say that a gaming keyboard is slow, just because there is more key travel, is inaccurate.
This would be like saying a physical kick drum or hi-hat is laggy because there's a delay between when your foot starts moving and when the sound happens. (Which would be silly!)
Or maybe there is some optimal separation a key should have between the actuation point and the tactile click point to account for the latency of the human nervous system, which would do an even better job of reducing the effects of latency than if the two events were at the same point?
What I can't stand is typing "f" then pressing "<Enter>" in the url area of a browser, and the browser interpreting the url as "f" because "f" won the race against autocomplete's return value of "https://foo.com". Just hand "f" off to autocomplete and navigate to whatever the return value happened to be.
Of course it's a browser, so I'm sure someone could exploit my desired behavior to somehow mine cryptocurrencies in a hidden iframe.
UI determinism to me is vital, no matter how ugly it may look to designers.
Frankly, the URL bar, once I kill search suggestions, is the one thing Mozilla has gotten right in recent years.
Type something in it and it will do a sweep of history and bookmarks, trying to match the entered string against both the URL and the page title.
Chromium-derived browsers limit themselves to matching the URL, and only as if you were typing it fresh.
If it has been fixed in the newfangled Firefox I'd love to know. (Not sure how to install Nightly on arm or I'd try it myself.)
He implemented a zero latency mode which bypasses many layers in the rendering stack, which apparently makes IntelliJ one of the fastest text editors.
It's probably faster than Vim if you're using a slow terminal emulator.
A couple of years back I had no experience with online games and was introduced to a popular one by one of my friends. As we were playing some introductory matches, a 400ms connection felt instantaneous to me, while he was super annoyed and advised me to change my ISP. Some months after changing my ISP and playing the game regularly, I could instinctively feel the difference between a 60ms connection and a 120ms connection.
Most reaction-time tests and sites seem to put our maximum click response somewhere between 200ms and 300ms, with most people scoring in the 250ms+ range. And yet we can still feel the difference. This latency problem sometimes even throws off my breathing and heart rate when things don't respond in the time I expected. Sometimes doing things too quickly isn't the best option either.
It is the same with Apple's Retina display, the 20/20 eyesight and viewing-distance arguments, etc.: even given those conditions you can still feel the difference between a 300ppi and a 400ppi screen.
And the iPad's ProMotion display is buttery smooth, though it still isn't perfect; I am not sure what frame rate we would need before we can't tell the difference.
I wish latency were taken more seriously in UX and design. Hopefully Apple will lead the pack again.
I think when you interact with a UI (or a physical object, for that matter), you're projecting an expectation into the future relative to your sense perception of the action. So the ~250ms delay applies to both the action and reaction, thereby effectively canceling out.
Like when they made a calculator that fails to calculate correctly if you push buttons too fast?
Definitely applicable because it's talking about UI latency... and Apple.
And why I feel that KDE did the Unix world a disservice when they went from emulating Windows to trying to be their own thing in KDE 4. As long as KDE behaved closely to Windows out of the box, one could more easily get people to give desktop Unix a try over the longer term, because the initial experience would not be as jarring.
Similarly observe the backlash Microsoft got for Windows 8.
Sadly the FOSS world seems overrun with designers and busybodies that want to pad their resume these days.
The most annoying latencies I've seen so far are scrolling on Android and trackpad pointer movement on Linux (X11).
This reminds me of the current state of the web. There's been such a push to add more and more to pages, largely without regard for performance, that many sites take 10+ seconds to load the first time on 4G or wifi on a new, top spec Android phone. This is one area where Apple still has one of Steve Jobs's best contributions - an attention to user feel and perception. As much as I prefer an Android, iPhone just _feels_ faster.
For minimal latency I would go for Xfce or GNOME 2 without compositing.
I recently switched from whatever Ubuntu 17.10 is using (Gnome on Wayland?) to i3 on X.org, and there was no real performance increase that I could see.
Do you have any suggestions on how I can get a tty-level experience (with the occasional ability to use a web browser), but with low latency? I obviously can't use a tty at the screen's native resolution (the text is minuscule).
Add "vga=normal nomodeset" to your kernel parameters, and revel in the speed (and blockiness of the font). :)
Just now I checked the input response in a tty and in Terminator, and I don't feel any difference at all.
Hardware and software info:
- Measurement at 90 fps (Moto G5 slow-motion camera), so it could have been up to 55 fps due to granularity. Filmed keyboard and screen together.
- Ubuntu 16.10 with a self-compiled KDE stack on the master branch.
- Fujitsu KBPC-PX keyboard with PS/2 connection (it can also do USB).
- AMD R9 280X GPU with the AMDGPU kernel driver and Mesa master / LLVM trunk.
- kwin (X11) window manager with compositing enabled.
- Editor: kwrite (kate should be the same; it's a more featureful UI around the same internals).
- AMD Ryzen 7 1800X CPU, 16 GB RAM.
Possibly one of the reasons why I like fat Linux desktop machines with AMD graphics so much is the great latency results.
A quite valuable and IME underused Wayland feature is clients sending an opaque region and the compositor using it to draw windows ("surfaces") without alpha blending where it isn't necessary. Alpha blending uses significantly more fill rate than plain overdrawing. Opaque regions can also be used to determine that a whole window or a region of it doesn't need to be drawn at all. I had implemented that as an experimental optimization in a customer project once. In the end it didn't work out because the 2D acceleration in Vivante GC2000 is broken garbage - they even pulled the documentation of it, but still advertise the feature in marketing material.
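For reference, the client side of this is tiny; here is a minimal C sketch using the core wayland-client API, assuming `compositor` and `surface` were already obtained through the usual wl_registry binding and surface creation:

    #include <wayland-client.h>

    /* Tell the compositor that the whole width x height area of this surface
     * is fully opaque, so it can skip alpha blending there (and potentially
     * skip drawing whatever lies underneath). */
    static void mark_surface_opaque(struct wl_compositor *compositor,
                                    struct wl_surface *surface,
                                    int32_t width, int32_t height)
    {
        struct wl_region *opaque = wl_compositor_create_region(compositor);
        wl_region_add(opaque, 0, 0, width, height);
        wl_surface_set_opaque_region(surface, opaque);
        wl_region_destroy(opaque);       /* the surface keeps its own copy          */
        wl_surface_commit(surface);      /* opaque region is double-buffered state  */
    }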
This of course has all the expected downsides, like tearing, and modern GPUs are fast enough that there's rarely any point--but it's certainly possible.
Note that Kristian Høgsberg, for years the main developer of Wayland, is among the authors. So I guess it is actually not so obscure in the Wayland context.
It tells you how many frames old the current back buffer is, so you can repaint as necessary to take it from the state n frames ago to the state now.
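Assuming this is the EGL_EXT_buffer_age extension being described, the usage looks roughly like the C fragment below; `dpy` and `surface` come from the usual EGL setup, and `repaint_full` / `repaint_damage_since` are hypothetical helpers standing in for the application's own damage tracking:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLint age = 0;
    /* Ask how many frames old the contents of the current back buffer are.
     * Requires the EGL_EXT_buffer_age extension to be advertised. */
    eglQuerySurface(dpy, surface, EGL_BUFFER_AGE_EXT, &age);

    if (age == 0)
        repaint_full();              /* contents undefined: redraw everything            */
    else
        repaint_damage_since(age);   /* replay only the damage of the last `age` frames  */

    eglSwapBuffers(dpy, surface);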
>There are a number of interesting results, but the point relevant to this question is that there was a fairly significant variance between keyboards, and all the USB keyboards tested had a longer effective scan interval (18.77 ms - 32.75 ms) than the PS/2 keyboards (2.83 ms - 10.88 ms).
The dreaded standard HN automobile analogy: latency is like turbo lag. Stomp on the gas pedal of my wife's Prius and it spins the tires instantly (well, anti-spin kicks in instantly anyway), whereas my uncle had a turbo Firebird in the mid 80s: stomp the gas pedal and about 3 seconds later the turbo spins up like a sci-fi warp drive and kicks you in the butt, but for the first 3 seconds it almost feels like the car stalled. Kinda weird. Obviously it's been over three decades, so possibly the latency on a turbo Firebird was 2 seconds or 5 seconds; it doesn't really matter beyond the point that it dramatically affects drivability.
These are tests of the latency between a keypress and the display of a character in a terminal.
What about a mouse pointer move on a current laptop or desktop, a scroll gesture on a trackpad or a current touch device? Those feel absolutely instant to me - it seems like the optimization for latency has just moved to the more common modes of interaction. To do anything at all on an Apple //e, you had to type. To do most things with a current computer, you point.
No, with an Apple IIe Mouse Card, you could use e.g. MousePaint:
“If there is one comment to make about MousePaint, it’s that for a graphically intensive Apple II application, even on a 64K Apple IIe (which ran at 1 MHz), the user interface was surprisingly responsive. The mouse was remarkably smooth, and scrolling around the pixel canvas was seemingly effortless. Menus also appeared instantaneously, with no noticeable drawing lag.”
For measuring, if you have an iPhone, there’s an app “very snappy” that works well for it, you can scrub frame by frame and mark the events to get the difference (no affiliation, just a happy user).
http://www.gdcvault.com/play/1021825/Automated-Testing-and-I... Automated testing and instant replays
http://www.gdcvault.com/play/1022195/Physics-for-Game-Progra... Networking for physics programmers
http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-I... Procedural animation for indies.
All are an hour long but great watches.
Older 2D accelerators had hardware support for beam-avoiding blits that allowed for single-buffered compositing. You could still see unsynchronized blits between distinct buffers, though. We could easily build 2D accelerators that work like this today, and they would use less power than GPUs.
Terminal use at this level of focus is inseparable from thinking. You do not consciously compose a command or deliberate about moving your fingers to type and execute it; it just happens.
Therefore I think latency is also critically important at the terminal level.
That's not the whole reason. More pixels pushed would only increase latency if the CPU, memory, etc. weren't correspondingly faster than in the older computers.
My conclusion is that if you can consistently re-render the content in under one frame time, it's a win over layers. Fortunately, this is possible if you're willing to write fast code to render on the GPU.
There's a lot more complexity to this based on whether compositor latency is optional. Generally if you go fullscreen you can bypass the compositor. This is feasible for games but much less so for terminals and editors. I think that in some cases on Windows a swapchain can be promoted to a hardware overlay, and there you also get the opportunity to save one frame of latency. I think we'll see more of this in the future.
I'll write about this in more detail (with measurements) soon.
Is Unicode support in xterm really adding the latency?
But in practice, I'd expect non-ASCII to be associated with more expensive features, which has more of an effect.
This seems wrong. https://www.submarinecablemap.com/ has ~30,000 mi (48,000 km) from NYC -> London -> Japan -> Seattle.
The author gave "as the crow flies" numbers - you might want to contact them to point out their 30% is an underestimate.
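For rough scale (assuming long-haul fiber, where light propagates at about two thirds of c, roughly 200,000 km/s): 48,000 km / 200,000 km/s ≈ 240 ms one way, so on the order of half a second round trip along that route, before any routing or switching overhead.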
How about the input latency of the PARC Alto? Or a PDP-7? :-)
I was highly skeptical at first, to put it mildly, but won over by the accuracy and consistency of the results. Caveats: it’s manual effort and you probably should have the same person measuring specific areas for repeatability.
The reason I thought it couldn’t work was that I knew about human reaction times, or at least thought I knew. The thing is: most of those reported times are for unanticipated events. If you are prepared, you can get much more precision than you’d think at high levels of accuracy.
Another trick: side by side comparisons (though that requires two identical machines, so not always feasible).
Let's say the clock flashes an LED every second, and you are trying to press a mechanical button that will make an audible click on the 10th LED flash.
To decide whether or not to accept a trial, the human has to judge whether or not the audible click and the flash of the LED were simultaneous. I have no idea how good we are at that, but I'd expect that we are better at that than at reacting to things.
Even better would be to make the button also flash an LED, so that there is no error from differences between aural and visual processing speed and latency.
The most interesting thing I learned about was the three modes of saccadic latency. I wonder if there are aspects of computer GUIs that work for or against the three modes, or whether, decades later in 2017, the three-mode idea is still current thinking in the field. The variation across saccadic modes does exceed your prediction of 40 ms.
I would assume it would be easier to test the latency of FPS video game players than to simulate skeet shooting with an FPS-like computer system. In fact, that would be a novel scoring, ranking, or handicapping system for an FPS.
10.00 (on the dot)
How did you measure accuracy?
Same way I'm feeling right now. Any more reading on this (or easy experiments) you might be able to point us to? :-)
In most games, the joypad input is scanned once per game loop, which is generally the same thing as a graphical refresh, which is set by the hardware at 60Hz. So depending on at what point in the cycle you press the button, the worst case latency for the software to detect the change is ~16ms.
Supposing this button press resulted in some in-game action, it would then take visible effect at the next screen refresh.
Now, it's common that a game loop looks like this:
Update graphics from current game state
Update game state
Wait for next frame
As a result, it wouldn't be surprising if our input was delayed an extra 16ms while we wait for the frame AFTER the one where our input is detected.
So I would have expected a latency around 16-32ms - to get 80ms, either the button hardware would have to have added 50ms of latency, or the game they were using was very strangely coded, or the action the game took simply did not have any visible change on screen despite internal state changing - for example, an animation whose first few frames are the same as the standing still animation.
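A minimal sketch of that loop shape in C (all the helper names here are placeholders, not any particular console's API):

    #include <stdint.h>

    struct game_state;                                 /* opaque for this sketch        */
    extern void    draw(const struct game_state *s);   /* "update graphics from state"  */
    extern uint8_t poll_joypad(void);                  /* input sampled once per frame  */
    extern void    update_state(struct game_state *s, uint8_t input);
    extern void    wait_for_vblank(void);              /* blocks until the 60 Hz tick   */

    void game_loop(struct game_state *state)
    {
        for (;;) {
            draw(state);                    /* shows what the previous update produced     */
            uint8_t input = poll_joypad();  /* worst case the press is already ~16 ms old  */
            update_state(state, input);     /* its effect is only drawn next iteration...  */
            wait_for_vblank();              /* ...and visible one refresh after that       */
        }
    }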
As an interesting additional note: If they were testing the Gameboy the same way as other mobile devices - scrolling the screen - it would be even faster. In the Gameboy scrolling is implemented in hardware - you draw tiles to a 32x32 tile memory region, but the screen is only 18x20 tiles. You can update a scroll X and Y register to pan the visible screen over the 32x32 area (this is how everything from sidescrollers to Pokemon do smooth scrolling, since tiles can only be placed on an 8x8 pixel grid). This register, if updated immediately after detecting the scroll input, would take effect immediately, even if it was midway through drawing a frame (resulting in a tearing effect which in most cases wouldn't be noticed but could be abused to produce interesting special effects). So the latency then would be 0-16ms + whatever the hardware adds.
Of course, you could pare this down further by writing specialised code that checks the joypad input rapidly instead of once per frame, or use the joypad interrupt to be immediately notified upon a button press (note this interrupt suffered from significant limitations, so no-one used it, and as far as I can tell the Gameboy Color doesn't have it at all, much to my chagrin since I'd figured out how to use it in a good way). That would likely violate the spirit of the comparison though, since you could equally program any of the other contenders to be a dedicated "fast scroll" machine.
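To make the hardware-scrolling point concrete, here is a rough C sketch against the raw memory-mapped registers (addresses per the Pan Docs; the joypad read is simplified and skips the usual settle/debounce reads):

    #include <stdint.h>

    #define REG_JOYP (*(volatile uint8_t *)0xFF00)  /* P1/JOYP: joypad select + read */
    #define REG_SCX  (*(volatile uint8_t *)0xFF43)  /* background scroll X           */

    void pan_on_input(void)
    {
        for (;;) {
            REG_JOYP = 0x20;                    /* select the d-pad row               */
            uint8_t pad = ~REG_JOYP & 0x0F;     /* active low: 1 bits = pressed       */
            if (pad & 0x01) REG_SCX++;          /* Right: pan immediately, even       */
            if (pad & 0x02) REG_SCX--;          /* Left:  mid-frame (may tear)        */
        }
    }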
$ brew search alacritty
Closed pull requests:
alacritty HEAD (new formula) (https://github.com/Homebrew/homebrew-core/pull/8727)
That was on the front page of HN.
IIRC there was some criticism there. He didn't include the mechanical switch type and its actuation point, and IIRC the text was very unprofessional (capitalization errors).
Edit: Someone disagrees? I have definitely had text entry into a JIRA box slow to a crawl — when everything else ran fine — because the AJAX calls had to close out before it could accept the input. Not being snarky; you can definitely be in a position where you have to wait for your packets to finish round trips before you get your keystrokes on the screen.
Where can I find more articles, such as this, with objective and practical perspectives on computing fundamentals?
I speculate that it's possible to measure a different value. First, it may include key travel. Second, a given program may choose to sit on an input for more than one frame (perhaps he picked a "bad" game for this test). Anyway, I highly suggest reaching out to him, because I agree this is an interesting discrepancy.
Beyond a lot of wasteful programming, we also understand better today what we are engineering for than we did in 1977.
It is a long-haired black cat. It doesn't like us. But it does like taking a shortcut through our front yard. If it sees us, it freezes. It waits, mid-stride. Then when it's satisfied we're not a threat, it starts moving again. If I move my arm or shift my body, it freezes again, sizing me up, sure that this is the day -- after all these years -- that I'm finally going in for the kill.
What's weird is I am convinced that this cat knows when I'm going to start moving. It halts before I move. By the time my head is moving or my arm is waving, it is already frozen and on high alert.
My working theory is not that this cat is telepathic or that it has defocused temporal perception. Rather, I think that because the cat is smaller than I am, its eyes are closer to its brain, its nerve impulses don't have to travel as far, and it really and truly knows before I do that I'm moving. Otherwise I can't explain this cat's superhuman reflexes or its blatant disregard of general relativity.
When your eyes focus on something new, your brain deletes the blur from your memory and fills it in with what you start looking at. The cat could notice and stop during your refocus, and because you can't see it during that period, all you see is it already stopped and perceive it as having been stopped the whole time.
Nonsense. You can test this yourself. Open a fast terminal, e.g. xterm (libvte-based terminals have a low framerate and are unsuitable), and compare:
while true; do echo "test"; sleep 0; done
while true; do echo "test"; sleep 0.05; done
This test shows your visual system can detect a 50ms difference between animations. Indeed, the human visual system can also tell the difference between 60 FPS (16ms) and 120 FPS (8ms), and even higher! But that doesn't mean you can tell the difference between striking a key and seeing the character drawn on the screen after 8ms instead of 16ms.
> 0.1 second: Limit for users feeling that they are directly manipulating objects in the UI. For example, this is the limit from the time the user selects a column in a table until that column should highlight or otherwise give feedback that it's selected. Ideally, this would also be the response time for sorting the column — if so, users would feel that they are sorting the table. (As opposed to feeling that they are ordering the computer to do the sorting for them.)