
Latency matters - majewsky
https://blog.bethselamin.de/posts/latency-matters.html
======
emerongi
Latency is the reason I just cannot use some editors, most notably Atom. You
get used to it after a while, but it's not fun.

I remember when I used IntelliJ's product line for a good while and then
opened up Vim. I was dumbfounded that I somehow had not noticed that IntelliJ
was slow. It's fast enough - unlike Atom - but Vim was amazing next to it.

~~~
al452
IntelliJ often shows latency, but it's not "slow": it's doing a stupendous
amount of work behind the scenes, building an incredibly rich model of your
code and your application.

If you're not getting more out of IntelliJ than vim, then sure, vim will make
you happier. But then you're also not getting from IntelliJ what it's trying
to provide.

~~~
hvidgaard
I haven't used IntelliJ for the last 5 years or so, so I don't know how it is
today. My remark here is not about IntelliJ, but about GUIs in general.

A GUI can be responsive and fast even when doing a lot of work behind the
scenes. I'm perfectly fine with it taking its time figuring out what arguments
I can use, or something else. But do not drop perceived responsiveness - I
want to be able to type and/or click menus at all times.

~~~
dlsniper
I recommend reading this: [https://pavelfatin.com/typing-with-pleasure/](https://pavelfatin.com/typing-with-pleasure/)

------
Animats
Alan Kay once said something like "There should no more be a delay between
pressing a key and something happening on a computer than on a piano."

~~~
edejong
Beautiful back-of-the-envelope exposition on the latency of playing the piano:
[http://northernsounds.com/forum/showthread.php/37769-Acousti...](http://northernsounds.com/forum/showthread.php/37769-Acoustic-piano-latency?s=da097aee6ac23ded20434b03d6f1d748&p=333833&viewfull=1#post333833)

------
richdougherty
I can't help but think that since he's measuring his own perception at the ms
level - for a new program he's written himself - it would be easy for
unconscious bias to creep in. To really know the truth about whether latency
matters in this case, a better experiment is probably needed.

Not that I think he should bother doing this, I'm just being the fun police.
:)

~~~
majewsky
I don't really know how to eliminate my own biases here, but a quick test
shows that the new prompt feels very much like the old one if I add a little
latency manually:

      PROMPT="\$(prettyprompt \$?)"                 # the new prompt as-is
      PROMPT="\$(sleep 0.04; prettyprompt \$?)"     # the same, plus 40ms of artificial latency

With the second one, I can always see a frame or two where the cursor is
already on the next line where the prompt will go, before the prompt actually
shows up.

~~~
darawk
Presumably the test would be whether he can, in a blind experiment, reliably
discriminate between them.

~~~
alecbenzer
Yeah -- seems like it'd be a bit tricky to set up, but would be super
interesting to see the results.

edit: Tried something out: [https://github.com/AlecBenzer/terminal-latency](https://github.com/AlecBenzer/terminal-latency)
Had 92% accuracy differentiating 10ms from 100ms out of 28 tries.
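
The basic shape of the test, for anyone who wants to try it without cloning
anything, fits in a few lines of zsh. (This is a simplified sketch of the
idea, not the actual script from the repo.) Each round responds to a keypress
after a randomly chosen 10ms or 100ms delay, and you guess which one you got:

      #!/usr/bin/env zsh
      # Blind A/B latency test: respond after a random delay, ask which it was.
      correct=0 rounds=20
      for i in {1..$rounds}; do
        (( RANDOM % 2 )) && delay=0.01 || delay=0.1
        read -s -k 1 "key?Press any key: "
        sleep $delay
        print "*"                      # the response whose latency you judge
        read -k 1 "guess?(f)ast or (s)low? "; print
        if [[ ($guess == f && $delay == 0.01) || ($guess == s && $delay == 0.1) ]]; then
          (( correct++ ))
        fi
      done
      print "$correct/$rounds correct"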

~~~
richdougherty
Nice!

------
pasbesoin
300 baud --> 1200 baud. Now there was a noticeable difference.

And yet, many years later, I had an impossible time explaining to the more
pointy-haired end of a large development division why IBM Rational products
might not work so well when accessed from halfway around the world. (They
are/were very "chatty" between client and server, and the cumulative latency
of all those round-trips simply killed you.)

Latency's still a sticking point, and a lot of people don't understand/believe
it until they see it for themselves.

P.S. It may sound like I'm talking about something else -- apples and oranges.
But no. What I mean is, in _many_ domains, people don't get latency until they
experience it.

I suppose that decades ago, that typewriter was just fine. Until you got fast
enough. Then, as stuck keys and other physical limits started to cap your
performance, you realized what people meant about "better" mechanisms.

I wonder whether in some cases people like musicians have a better natural
perspective on it - domains where latency is more readily apparent and dealt
with.

Sorry, my mind's a bit all over the place, today...

~~~
aidenn0
Ugh; one site I visited had the requirement that all developers work from
ClearCase dynamic views. The server (yes, singular server) was in a different
building.

The developers had turned build-parallelism up to 99 simultaneous jobs to try
to keep the CPU busy while waiting for the network, but a build was still only
using about 10% of a single CPU. An incremental build would take over a minute
for dependency checking, and a full rebuild would take hours.

------
jlgaddis
From "Russians Engineer a Brilliant Slot Machine Cheat—And Casinos Have No
Fix" [0]:

> _Finally, the St. Petersburg team transmits a list of timing markers to a
> custom app on the operative’s phone; those markers cause the handset to
> vibrate roughly 0.25 seconds before the operative should press the spin
> button._

> _“The normal reaction time for a human is about a quarter of a second,
> which is why they do that,” says Allison, who is also the founder of the
> annual World Game Protection Conference._

[0]: [https://www.wired.com/2017/02/russians-engineer-brilliant-sl...](https://www.wired.com/2017/02/russians-engineer-brilliant-slot-machine-cheat-casinos-no-fix/)

~~~
kevmo314
Sometimes I wonder if it would be called cheating if you could do the math to
figure out the PRNG parameters in your head, then calculate the next known
timestamp. It seems similar in premise to Ethereum contracts, where an
"exploit" is technically just following the contract.

------
valarauca1
The reason they sell monitors at 144Hz and now 240Hz is that people can see
and react that fast.

Sure, those are generally for video games. But pretending <Xms is the be-all
and end-all of human perception is arrogant.

~~~
stanlarroque
I'm working on an AR headset and we ran some experiments with the human eye.
I can barely notice the difference for motion above 120 fps, but my associate
is a retired fighter pilot and he can react to motion at ~280 fps.

~~~
mattkrause
There's got to be something else going on there.

280 frames/sec is about 3.5 ms/frame. Signals don't even leave the retina that
fast, let alone propagate to brain areas linked to perception and action.

Perhaps the higher frame rate makes the motion smoother, which allows the
pilot to estimate and extrapolate an object's motion more accurately, or
something like that.

~~~
hvidgaard
The visual system does not work in frames (well, on a fundamental level I
suppose it does, with individual photons and nerve impulses being the smallest
units, but that is not a limit in practice). The information the retina
captures is streamed continuously to the brain, so higher FPS provides a
higher temporal resolution for the visual experience.

I don't think a fighter pilot inherently reacts faster because of the
resolution, but he is trained to detect motion in his peripheral vision.
Perhaps he notices the smooth motion more than people who rarely train that
part of their visual system. It would be interesting to see how a trained and
experienced hunter would compare, as they often use peripheral vision as well.

~~~
mattkrause
My point was that 3.5ms frames _are_ close to that limit.

The nervous system is pretty slow. Each action potential takes about a
millisecond, and is followed by a refractory period of another 1-2
milliseconds or so. This limits the speed of transmission in the nervous
system generally.

The visual system, specifically, is even slower. There's a fairly complicated
electrochemical cascade that turns photons into electrical impulses in rods
and cones, and it's not fast. Drum (1982) describes several attempts at
measuring the latencies of cones. The exact value depends on several factors
like adaptation level and the visual stimulus, but they're all in the tens of
milliseconds.

~~~
hvidgaard
I have no doubt there is a latency, but I was trying to say that the stream
of information is not discrete. The stream is continuous, so a higher
framerate means that more information will flow to the brain, because the
temporal resolution is higher.

Are you saying that the rods and cones only "sample" every so often?

~~~
mattkrause
No, in that there is no master clock, shutter, or anything like that which
separates the visual input into discrete frames.

Yes, in that phototransduction is a sluggish chemical process. The impulse-
response functions of photoreceptors are pretty well-known (e.g., Figure 1
here:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1189952/?page=5](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1189952/?page=5)).
The tiny square wave at the bottom is the visual stimulation; the other traces
are the responses of individual photoreceptors.

Figure 4 of the paper (p. 689) shows the response to a pair of flashes. In the
bottom-most trace, you can see that the photoreceptor "misses" the second
visual stimulus because the responses evoked by the first and second stimuli
overlap in time and interfere.

I'd say that the visual system does get a continuous stream of inputs, but its
output is low-pass filtered (and clipped, rectified, etc) so that faster
inputs are not always reflected in its output.

~~~
hvidgaard
That is interesting. When the stimulus is long enough, you get full temporal
resolution, but since the receptors are not synchronized, I would expect the
eye to react to framerates faster than the response time of the receptors, at
the cost of resolution.

It is indeed an interesting topic, thanks for the pointers.

------
jstimpfle
Latency is one of the reasons why I still use xterm and bitmap fonts. The
other being font crispness on standard ~100dpi displays.

~~~
kristianp
The crisp shape of the text often makes me nostalgic for systems pre-ClearType
on Windows and pre-GNOME 3 on Linux.

------
secure
I agree. A while ago, I was so fed up with how long my prompt took to render
that I went through my zshrc and cleaned up a bunch of stuff to make it faster:

[https://github.com/stapelberg/configfiles/compare/5ccd3cbd38...](https://github.com/stapelberg/configfiles/compare/5ccd3cbd38b67e5ec91671b6fe07d5dded0e96fa...9ee8b13de00c2c84463681fc91f3fd64ae2e4342)

The general techniques used, in order of preference, are: eliminating or
skipping work, lazily doing work later, and replacing external commands with
zsh builtins.
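
To illustrate the last of those, here are the kinds of substitutions involved
(a hypothetical sketch of the pattern, not lines from the diff above): every
command substitution forks a process on each prompt, while the equivalent zsh
expansion costs essentially nothing.

      #   external command     zsh builtin equivalent
      #   $(basename "$PWD")   ${PWD:t}      # :t takes the path's tail
      #   $(hostname -s)       ${HOST%%.*}   # zsh sets $HOST itself
      #   $(whoami)            $USERNAME     # likewise $USERNAME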

~~~
majewsky
Viewing your diff made me wonder if there is a market for custom-compiled
shells, where things like the custom prompt or termtitle are baked in at
compilation time.

------
stinos
How do some applications practically get to latencies smaller than 16.6ms?
That can only be possible if the character typed is being rendered directly
into the GPU buffer _which is currently being drawn_, right? Or does the
screen used for testing run at a higher refresh rate than 60Hz? Or does
'latency' not mean the time between hardware keypress and the character
appearing on screen?

~~~
Qwertious
16.6ms is the _maximum_ lag between the monitor receiving the framebuffer and
drawing it - the average would therefore be half of that, and the minimum
would be essentially zero.

 _But_, that's ignoring all the other little delays in the system. For
instance, monitors themselves add a delay of a few milliseconds.

In short, this is a massive oversimplification.

------
zde
Python startup latency is horrible, and unfixable. It got much worse in
Python 3 because it loads many more modules (think of all the unicode crap).

~~~
Skunkleton
Yeah, I did a little test on my system where I wrote a program in C, Python,
Perl, and Bash that just prints hello to the console, then timed how long each
took to fork/exec from bash. C takes ~0.5ms, Perl ~1.5ms, Bash ~2.5ms, and
Python 3 ~35ms.

Kind of sad, because there are lots of places where a Python script would be a
nice way to solve a problem, but I can't use it because it adds an annoying
delay.
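
For anyone wanting to reproduce this kind of measurement, here is one possible
zsh sketch (assuming each test program is available as ./hello; this is an
illustration, not necessarily how the numbers above were produced):

      #!/usr/bin/env zsh
      # Average the cost of N fork/execs of ./hello.
      zmodload zsh/datetime        # provides $EPOCHREALTIME (float seconds)
      N=100
      start=$EPOCHREALTIME
      for i in {1..$N}; do ./hello >/dev/null; done
      end=$EPOCHREALTIME
      print "avg: $(( (end - start) / N * 1000 )) ms per invocation"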

~~~
hvidgaard
In what use case is a 35ms startup time an issue?

~~~
majewsky
In TFA.

~~~
hvidgaard
I am well aware of the article, but it's nothing but "it feels magical".

Perhaps, since I haven't had any workflow lately that blocks 30-50ms on my
input, I'm not the right person to comment on this. That said, the article
uses Python for a script that will be run from scratch many times. Perhaps an
interpreted language with a known slow startup was the wrong tool right from
the start?

~~~
majewsky
Well yeah, that's the point. Plenty of people argue that Python is good enough
for any job (except maybe kernel modules, but hey, people even wrote kernel
modules in Java, with the obvious consequences). Plenty of people also argue
that a runtime of 50ms is not worth optimizing anymore. And I disagree.

------
mjevans
This guy must not have noticed as he added a thousand papercuts to his
terminal prompt. Premature optimization is one thing, but sometimes it is a
good idea to compare against a baseline.

~~~
majewsky
It's not like I'm the only one adding papercuts to the stack. ;)

Off the top of my head, each keypress goes at least through the keyboard
driver, X11, possibly the window manager, the terminal, the line discipline,
the terminal again, font rendering, compositing, and then display. It only
takes one of these steps to mess up for latency to go up.

~~~
mjevans
The latency was in constructing the 'prompt' string (via his custom program).
The actual input latency in the stack is both 'constant' and rather difficult
for an end user to directly affect. Thankfully the millions of human-hours put
into the hardware and software stack have made that quite efficient.

~~~
majewsky
> The actual input latency in the stack is both 'constant' and rather
> difficult for an end user to directly affect.

The article starts with a link to [https://danluu.com/term-latency/](https://danluu.com/term-latency/), which disproves both assertions.

------
majewsky
As a semi-related followup, since my blog made the frontpage for the first
time, I looked at whether the HN crowd is visible on the monitoring charts:
[https://blog.bethselamin.de/posts/latency-matters-aftermath....](https://blog.bethselamin.de/posts/latency-matters-aftermath.html)

~~~
dredmorbius
And a semi-related question: is there some particular reason you've replaced a
sensible, directly-proportional, and _useful_ vertical scrollbar with an (IMO)
insensible, _literally orthogonal_, top-placed _horizontal_ scrollbar ...
which cannot actually be used to control location within the document?

I've noted the demise of the vertical scrollbar before. This is the first
chance I've had to ask someone who's apparently implemented a horizontal one
directly.

I'm quite curious as to the rationale.

Previously:
[https://ello.co/dredmorbius/post/0hgfswmoti3fi5zgftjecq](https://ello.co/dredmorbius/post/0hgfswmoti3fi5zgftjecq)

------
KirinDave
For those who are fans of Powerline but fed up with the latency of Python, you
can use powerline-server, OR you can take a similar cue and use:

[https://github.com/rdnetto/powerline-hs](https://github.com/rdnetto/powerline-hs)

~~~
jlgaddis
I like powerline but it really is slow as molasses -- even _after_ I disabled
a bunch of the (useless) default stuff. powerline-daemon was supposed to help
but, if it did, it wasn't really noticeable.

~~~
KirinDave
Powerline-hs is pretty quick.

------
twhb
Along similar lines, I noticed some latency in Apple Terminal's prompt and
made this to fix it:

[https://github.com/twhb/terminal-prompt-perf](https://github.com/twhb/terminal-prompt-perf)

------
dlsniper
See here for how to actually check for latency: [https://pavelfatin.com/typing-with-pleasure/](https://pavelfatin.com/typing-with-pleasure/) :)

------
diegorbaquero
Check out mosh! It's an amazing tool for dealing with that lag.

~~~
majewsky
mosh is definitely on my list of "stuff that I need to check out", but the
things that I showed in the article were all on localhost.

------
Cieplak
With an Arduino and an LED, it's simple to test the limits of one's
perception. Copy one of the hello-world LED blinking programs (basically a
square wave to a GPIO pin) and modify the period of the blink. I see a
rapidly blinking light at around a 15ms period, and a solid light at around a
10ms period. To me that implies that my human sample rate is about 12ms,
although there could be other factors at play, such as capacitance in the
circuit.

Edit: s/frequency/period/

~~~
zokier
That test does not really measure the ability to notice latency. The simplest
latency test I can imagine would be a system where you press a button and an
LED lights up (or a sound is played). Do that in pairs where randomly either
the first or second trial has some latency added in. Then ask the subject
which one seemed to respond faster. Rinse and repeat about a gazillion times
with different amounts of added latency.

The problem with such a setup is that it relies on consciously noticing the
latency. It is very likely that the latency at which performance begins to be
impaired is far lower than the latency you can actually notice.

A more sophisticated test would be to measure e.g. typing speed vs latency.
You'd probably need some customized hardware to get low baseline latency so
that you can then add latency in a controlled way. Even then I imagine that
quite a lot of trials would be needed to draw any conclusions.

~~~
Moru
Another way would be moving the mouse and seeing the cursor lag behind. Old
wireless mice could have some really bad lag. It feels somehow like holding a
garden hose and trying to hit different spots on the lawn with the water.

There were people who just didn't notice anything, but everyone who played
Quake halfway decently had real problems with those. Some went so far as to
only use wired ball mice, not even ones with an optical sensor.

------
skybrian
This is a good point, but remember that if you're in the habit of using
computers too much, you shouldn't want lower latency. Higher latency is less
addictive if you're trying to cut back.

~~~
majewsky
> in the habit of using computers too much

If you have any advice how to use computers less in my job as a developer and
Devops engineer, I'll gladly take it.

~~~
skybrian
Yeah, I'm thinking more about social networks and games. I've never heard of
anyone addicted to the command line.

~~~
reificator
Might as well face it, you're addicted to bash

EDIT: My Unicode musical notes at the start and end of the line didn't make
it. :(

