Terminal latency (2017) (danluu.com)
181 points by andyjpb on March 20, 2019 | 109 comments



As a keyboard player, I can tell you that anything over 10ms latency noticeably degrades the experience. 100ms of keypress-to-sound latency makes the instrument unplayable.

Your ears will detect very small latencies; offset something just by 1-2ms, and you get an echo feel; 50ms will be off-beat.

(Organ players have to deal with that kind of latency on their instrument[1], but it's a hard skill to learn, and limits the music that can be performed on the instrument. Say, following a drummer is out of the question.)

Why people tolerate huge amounts of unnecessary latency is beyond me. And everything is laggy these days.

Except MIDI/audio setups, that is. And even in that scene, Android got a low-latency API only last year.

I say the 2020s will be about not increasing performance, but decreasing the goddamn latency to unnoticeable levels. Or so I really hope.

[1]https://music.stackexchange.com/questions/30454/how-does-a-p...


Life can be rough for bass players, given the wavelength and speed of sound - you have to play ahead of the beat just for a note to show up on time! ;-)

10ms of added latency is very irritating - even adding 4ms feels clunky on guitar/headphones and that's typical for playing through a DAW and digital effects. As much as I like the iPad ... drawing on it is still like drawing underwater, and touch-to-sound latency is still too high in many cases. On the other hand... I can almost tolerate a bit of latency playing a weighted keyboard because I imagine that a hammer is hitting a string so I adjust my playing accordingly. The keyboard also seems to trigger before the note bottoms out, which helps a bit (except when you're playing very quietly.)

There was a terrible idea in the CRT era that when a user clicked on a menu you could do all sorts of computation to update menu contents and still have time to draw the menu before the next ~80Hz refresh (a budget of about 12.5ms). Even if you missed a frame, "humans couldn't notice it."

Unfortunately this failed horribly in practice because it frequently pushed things over the edge, resulting in two frames of latency (or many more if latency wasn't managed well) which made things feel sluggish and unresponsive.

Even in 2019, though we have 120+ Hz user interfaces, asynchronous refresh, multi-gigahertz processors, and insane GPUs, menus are still clunky. Every time I click on a menu in macOS (Safari, for example, which I am using as I write this) I can almost hear it go "kachunk." Not even 10ms. ;-( Maybe the delay is an intentional fade-in or ... something, but I still hate it.

Moral of the story: 10ms is terrible, and every bit of delay you add to a UI can push it (and your users) closer to the edge.


> Every time I click on a menu in macOS (Safari, for example, which I am using as I write this) I can almost hear it go "kachunk." Not even 10ms. ;-( Maybe the delay is an intentional fade-in or ... something, but I still hate it.

Not to be too down on one player, but Apple is especially bad for this. GTK+ is considered slow by many, but compared to Cocoa, it's an SR-71. Cocoa rarely ever completes a window resize fast enough to hit a 60Hz VSync; GTK+ at least usually keeps up with 144Hz (for me) if you have a good GPU, and 60Hz if you have anything faster than a mid-grade Sandy Bridge IGP. Depending on the application, Qt is also sometimes dramatically faster than GTK+ (though on Linux the controls are somewhat clunky, since they seem to be emulating some Windows control behaviours which did not stand the test of time).


Cooperative multitasking may have been a massive hack, but boy was it ever responsive when you selected a menu back in System 7.


I think some of the benefit of this could be replicated in a preemptive system if your process could yield to a specific other process, granting your remaining scheduler time (and maybe some of the target's) to the target of your IPC, and preempt the preemption.


> Organ players have to deal with that kind of latency on their instrument[1], but it's a hard skill to learn, and limits the music that can be performed on the instrument. Say, following a drummer is out of the question.

Field musicians (marching band, drum & bugle corps, etc.) have a similar issue because of the sheer size of the "stage"; the distances between performers are large enough that the speed of sound actually causes notes to be off-beat to the audience if they're "on-beat" to the performer.

Instead, each performer has to basically ignore one's own ears (at least for rhythm; one should by all means use one's ears for matching pitch, though in theory a sufficiently-fast-moving performer might need to pitch up/down relative to what one hears to account for Doppler shifts) and watch someone up front (e.g. a drum major). Even in non-marching settings (e.g. your typical orchestra or symphonic band or even rock/jazz/etc. band), the percussion section is typically positioned rear-most on the stage for this exact reason (and in this case, it is okay to listen for the beat, since the sound of said beat would be coming from behind you).

So for an organ player (organist?), your best bet would be to place the drummer behind the organ, then give the organist a mirror to watch the drummer's sticks and keep time with the visual beat. That way, by the time the organ's sound actually comes out, the drummer's sound will have arrived and they'll be in sync (assuming you have the distance right between the drums and organ; this is something you'll probably have to tune for every stage, given air-pressure-related differences in the speed of sound).


Organs are even harder because it isn't just the speed of sound. There is often mechanical delay before the pipe even gets the air after the key is pressed. In some organs the pipes are "slow" to speak, meaning that a rank of pipes won't make a sound for some time after the air arrives. Skilled pipe manufacturers can minimize this (called voicing, though voicing covers other areas of pipe fine-tuning), but in some cases that involves other compromises that aren't desired.

Organs have gotten better and worse in this regard several times over the hundreds of years they have existed.


My point is that if you put the drummer far enough behind the organist and have the organist track the drummer by eye instead of by ear, the speed-of-sound-induced delay will match the organ's mechanically-induced delay and everything will sound right.

Another option would be to place the organist next to the drummer, but keep the organ itself far enough in front of the drummer such that by the time the sound comes out it's in time with the drummer. I don't know if there are any organs that work this way, though (I'd imagine it'd have to be either mechanically complex or electronically controlled; the latter, though, gives an option for a synthesized "monitor" that the organist can hear, which should hopefully make the delayed sound of the actual organ less of a mindscrew).

This all assumes you have a large enough stage to be able to pull this off, and (in the case of fixed organs) have actually built the organ such that the sound does emerge from in front of the drums.


As a Dance Dance Revolution player, I can tell you that it affects rhythm games too. I don't have the exact values, but DDR is unplayable with a PS2 on most hi-def flatscreens because of the scale-up lag, even when you choose the "gaming mode" that optimizes for speed.

It's related to most software engineering not being engineering: http://tonsky.me/blog/disenchantment/


"As a keyboard player, I can tell you that anything over 10ms latency noticeably degrades the experience."

Which explains why, unless I missed some new development, there's no way professional music can be played live on any Android-based product. By music I mean music, not triggering prerecorded loops, a thing which could be done on a 50+ year old Mellotron as well. I'm 100% sure Java is to blame: low latency and virtual machines don't mix well.


> I'm 100% sure Java is to blame

I shouldn't but I will anyway: People who are 100% sure of things are generally wrong and this case is no exception.

I'm being facetious here but what if I told you that nearly all machines are virtual machines? Your CPU doesn't really execute instructions, it interprets them by first translating them to microcode.

There's nothing inherent to the idea of not coding directly on the lowest level possible which forces your programs to have bad performance. Zero-cost abstractions really do exist.
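A classic C++ illustration of that claim (a minimal sketch, nothing Android-specific): the "high level" loop below costs nothing extra, because any modern optimizer inlines the lambda and emits the same machine code as a hand-rolled loop.

    #include <algorithm>
    #include <vector>

    // The abstraction exists only at the source level: std::for_each plus a
    // lambda compiles to the same loop you would write by hand.
    int sum_doubled(const std::vector<int> &v) {
        int total = 0;
        std::for_each(v.begin(), v.end(), [&](int x) { total += 2 * x; });
        return total;
    }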

I mean, Erlang uses a virtual machine called BEAM, Erlang programs sit at a remarkably high level of abstraction, and it's not like Erlang systems are known for their chronically high latency.

The Android Runtime also compiles applications before running them [1]. The instructions being run are native code in ELF files [2].

Certainly Android has had latency issues, and I'm no fan of Java, but that was more of an architectural thing than a Java-Is-Such-A-Bad-Tool-Lol thing.

[1] https://source.android.com/devices/tech/dalvik [2] https://stackoverflow.com/questions/26254538/difference-betw...


> I'm being facetious here but what if I told you that nearly all machines are virtual machines? Your CPU doesn't really execute instructions, it interprets them by first translating them to microcode.

Well that’s not true.



You can develop apps in C/C++ with the NDK, cutting out Java almost entirely and avoiding its associated latencies. This is what all serious games do.

However, the lack of a widely-supported low-latency audio stack is still a killer.


Android Oboe[1] might fix that (an API that goes through OpenSL ES or AAudio, depending on what is available).

Then, in a future far, far away, we might get a touch input method that isn't too laggy for interaction with audio. Last year, the best we could get was around 30ms[2] on iSomething; and I hope that 120Hz refresh rates become the standard (but even that is huuuuge compared to a MIDI controller).

[1]https://github.com/google/oboe

[2]https://www.reddit.com/r/Android/comments/7teiwe/lets_discus...
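For anyone curious, a condensed sketch of what asking Oboe for a low-latency stream looks like (C++; based on the project's documented builder API, with a toy 440Hz sine callback for illustration):

    #include <oboe/Oboe.h>
    #include <cmath>

    // Render callback: Oboe invokes this on a high-priority audio thread.
    class SineCallback : public oboe::AudioStreamCallback {
        float phase = 0.0f;
    public:
        oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                              void *audioData, int32_t numFrames) override {
            float *out = static_cast<float *>(audioData);
            const float step = 2.0f * M_PI * 440.0f / stream->getSampleRate();
            for (int i = 0; i < numFrames; ++i) {
                out[i] = 0.2f * std::sin(phase);  // mono float samples
                phase += step;
            }
            return oboe::DataCallbackResult::Continue;
        }
    };

    // Request the low-latency path; Oboe routes to AAudio or OpenSL ES underneath.
    void startTone(SineCallback *cb, oboe::AudioStream **stream) {
        oboe::AudioStreamBuilder builder;
        builder.setDirection(oboe::Direction::Output)
               ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
               ->setSharingMode(oboe::SharingMode::Exclusive)
               ->setFormat(oboe::AudioFormat::Float)
               ->setChannelCount(1)
               ->setCallback(cb);
        if (builder.openStream(stream) == oboe::Result::OK)
            (*stream)->requestStart();
    }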


>There's no way professional music can be played live on any Android based product.

There are some very recent developments on that frontier[1]. Let's hope that the new Oboe API will result in usable music software on Android.

I have a DX7 emulator on Android that is pretty good latency-wise when used with a MIDI controller.

Sadly, touchscreen latency is still way too big for anything musical (and the new Oboe API doesn't address that).

[1]https://android-developers.googleblog.com/2018/10/introducin...


This isn't just a high-tech problem.

I'm a pianist who sometimes plays Pipe Organ, and you always have to "play ahead." It can take 250ms or more for some 16' or 32' pipes to start sounding.


I think Carillonists have the same problem.


> As a keyboard player, I can tell you that anything over 10ms latency noticeably degrades the experience. 100ms of keypress-to-sound latency makes the instrument unplayable.

When I've used Audacity to record multi-track, here's my experience: over time, very good players will notice 5ms or more of de-sync.


For reference, sound travels 0.343 meters per millisecond, so 10ms latency is the equivalent of sitting 3.4 meters from a speaker and playing your keyboard.
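(For anyone who wants to play with the equivalence, the arithmetic is just distance = speed × time; a throwaway C++ check:)

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double metersPerMs = 0.343;  // speed of sound in air at ~20°C
        for (double ms : {1.0, 4.0, 10.0, 50.0, 100.0})
            std::printf("%5.1f ms latency ~ %5.2f m from the speaker\n",
                        ms, ms * metersPerMs);
    }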


Sure, but not constantly, or it wouldn't be an issue (which I guess is what you're implying).

It's like instantaneously jumping 3.4m away or toward the speaker; with each note, the timing is off by 10ms.


Not sure what you are saying; the time it takes after you hit a key for the sound to reach your ears is 10ms when sitting at that distance. It's the same time it takes if you use headphones and a plugin with 10ms latency. If anything, the speaker situation could be worse, since you may add room reverberation to the sound as well. Whether this is an issue or not depends on the player and how sensitive they are to this.


That's orthogonal to the article. There aren't that many situations in programming where one urgently needs to adjust the speed (or velocity!) of what one is typing based on the feedback one sees on the screen.

You, however, must constantly make such adjustments or the entire musical endeavor may fall apart. :)


You do need rapid feedback to know if you've made a mistake typing at 110wpm. Latency also makes a terminal feel like molasses; just sticky and unresponsive. The feel of a musical instrument and the feel of a technical instrument are similarly affected by latency.


I agree with both you and the parent comment :)

Yes, I don't even always need to look at the screen when I am typing fast. However, when I do, molasses UIs make me sad.

As an interesting aside, I notice that my brain tends to make interesting typos when I am typing without looking at the words that I type, and I get into a flow state when I am not thinking about the spelling of words, especially at high WPM.

By "interesting" I mean "I would have never consciously chose to spell that word like that" interesting. It shows that typing is happening on a different level than spelling; it's a more direct pipeline than writing with a pen where one has to think about shapes of letters. It sort of happens as an independent process, where <send a word to keyboard> happens without active thinking, and can have interesting "optimizations" along the way. I am looking at the monitor to make sure that these "optimizations" do not, in fact, occur.

To give an example of this, here's a snippet of what I type with my eyes closed:

"I'm a rocket ship on my way to mars, on a collision course. I'm a sattelite, I'm out of contrrol. "

So, I need to occasionally look at the screen to catch "contrrol" errors. And it's annoying to do so when the letters lag behind key presses :)


Music is extremely sensitive to timing, but when it comes to visuals we seem to adapt quickly and tolerate lag of up to 1/10s. How is that? And what about "hidden" messages in videos (running as low as 30 FPS!) that only the subconscious can process? It seems that our human image processing only runs at around 30-60 FPS when idle, but can increase in stressful situations. I recall a study where people bungee jumping were able to read something off a clock during the jump, but not when safely on the ground.


>Music is extremely sensitive to timing, but when it comes to visuals we seem to adapt quickly and tolerate lag of up to 1/10s. How is that?

Our visual systems are not optimized to give an accurate, realtime representation of the geometry around us. They evolved to give us a somewhat abstract representation that is most useful.

There is a lot of post-processing involved between photons hitting your eye and your brain "seeing" things. In that post-processing, both geometry and time information are "corrected".

The most striking example of this is saccadic masking[1]. Basically, your eyeballs move all the time, but you will never see that when you are looking at yourself in the mirror. However, look at your eyeballs on your selfie camera, and ta-dah! They move!

That's because your eyes essentially block your visual processing pipeline while the eye is moving, and then backfill the missing frames.

That's why if you move your eyes around rapidly while looking at an analog clock, you will perceive some seconds to last longer than others. The eyes are lying about how long things take.

>Our human image processing only runs at around 30-60 FPS when idle, but can increase in stressful situations

In general, our vision doesn't really have a "frame rate", and depending on how you define that concept, our "frame rate" goes well into the hundreds of FPS[0].

But something like what you said is happening. Visual perception of things gets on-the-fly adjustments as necessary.

For example, baseball players see balls as being larger than you would see the same ball while being stationary at the same distance[2].

That's why, yes, we can get away with a lot of visual trickery, taking advantage of the visual post-processing pipeline (monitors giving the illusion of smooth motion is one example).

However, this all doesn't mean we won't feel latency when it's there.

As a good example, compare drawing a line with a pencil vs. drawing a line on a touchscreen device with a stylus. You will notice even the smallest latency. In fact, there is no device on the market where something as simple as drawing a line following a stylus happens fast enough to fool you.

As long as there is any visible distance between the tip of the stylus and the end of the line appearing on the screen, you will see it (or feel it), even if it takes a very small time to fill that gap.

Same with scroll latency: it's something you will not be able to see on a video recording of someone scrolling on a touchscreen, but you will absolutely feel the difference when you do it.

And all that said, our aural perception is more sensitive to timing, and we also do Fourier transforms on the fly with our ears (that's how we can tell chords apart!), among other things.

Our perception is an interesting thing.

[0]https://www.pcgamer.com/how-many-frames-per-second-can-the-h...

[1]https://en.wikipedia.org/wiki/Saccadic_masking

[2]https://www.thecut.com/2016/10/the-science-of-how-baseball-p...


> (Organ players have to deal with that kind of latency on their instrument[1], but it's a hard skill to learn, and limits the music that can be performed on the instrument. Say, following a drummer is out of the question.)

I suppose following a metronome instead of the drum isn't practical/possible?


No, they have to lead. In some of the worst cases there is more than a full second between pressing a key and hearing a sound. For fast music that can be several beats.

Learning an organ means you need to learn to trust the instrument (not easy when it is a mechanical beast with many parts that can fail) and feel what the music will be, not react to what it is.


I hooked a ukulele up to my friend's Tesla coil the other day, using my laptop as a buffer by just piping arecord into aplay (hooking it up directly didn't work), and the half-second latency made it totally impossible to focus, even though I could still hear the notes immediately after playing them.


> 100ms of keypress-to-sound latency makes the instrument unplayable.

What about organ players?


Read on :)


libvte is a major offender here. It's used in several popular terminal emulators, and it's deliberately capped at approximately 40fps[0], which means on a 60Hz display you'll sometimes get an additional frame of latency, and not even predictably because the timing is so uneven.

Set your keyboard autorepeat to 60Hz, hold down a key in a libvte-based terminal, and watch how bad it is. Now try again with xterm and see what it's supposed to be like. I can only assume the slow update is some misguided attempt at power saving.

[0] https://github.com/GNOME/vte/blob/master/src/vte.cc#L10547
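A toy model (ignoring vte's actual scheduling) shows why a ~40Hz redraw cap on a 60Hz display feels uneven: each redraw lands at a different offset relative to vsync, so the added wait wobbles between roughly zero and most of an extra frame.

    #include <cstdio>
    #include <cmath>

    int main() {
        const double update = 1000.0 / 40.0;  // capped redraw period, ms
        const double vsync  = 1000.0 / 60.0;  // display refresh period, ms
        for (int i = 1; i <= 6; ++i) {
            double drawn = i * update;                       // when the terminal redraws
            double shown = std::ceil(drawn / vsync) * vsync; // next vsync after that
            std::printf("redraw %6.2f ms -> on screen %6.2f ms (+%5.2f ms)\n",
                        drawn, shown, shown - drawn);
        }
    }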


People keep knocking xterm, but honestly it's still probably the best and most complete terminal emulator.


You know, just today I had to install Xterm on my Lubuntu box because the DOSBox debugger is completely mangled by LXTerminal.

LXTerminal doesn't get all the blame though, DOSBox devs should know better than to use a terminal emulator (with all its in-band signaling fail) and instead create a new non-terminal console window. It's not like they don't already have the code for one just lying around.


It also has the awesome Secure Keyboard feature, which I haven’t yet seen anywhere else.


MacOS Terminal.app


It is absolutely my favorite terminal. I'm happy to see it's still being actively developed.


I didn’t know what libvte is. In case it helps anyone:

Terminal emulator widget for GTK+ 3.0

The VTE library provides a terminal emulator widget VteTerminal for applications using the GTK+ toolkit. It also provides the VtePTY object containing functions for starting a new process on a new pseudo-terminal and for manipulating pseudo-terminals.


Too bad they didn't measure xterm. That one has always felt the best to me on a native Linux system by a large margin.

Also Microsoft's default WSL terminal (ubuntu.exe, if you happen to be running Ubuntu WSL) is a lot faster than anything else I tried on Windows. I suppose that's what you get for using a low-level UI API that's optimized for performance. Its input latency is lower than any other app on Windows.

Back in December I did a Windows terminal review at: https://nickjanetakis.com/blog/conemu-vs-hyper-vs-terminus-v...

There are no scientific measurements of input latency, but input latency is something I pay a lot of attention to, especially considering all you do is type into a terminal. It is the most important thing, IMO. The TL;DR is that the default WSL terminal is a lot better than anything else for low latency, but WSLtty is what I use personally due to it being more configurable while still being OK'ish for latency. Hyper and Terminus were the two worst-performing terminals in both input latency and UI bugs that made the terminal unusable for day-to-day usage (both are Electron apps).


I use xterm for the same reason. It's really fast and lightweight in terms of memory usage and dependencies. With a tiling window manager, I don't need many features other terminals provide. urxvt feels closer in terms of latency, and offers some extra features.

A slight annoyance is that since xterm is quite old, it doesn't stick to some conventional X variables and renders colors in a slightly inconsistent way. So configuration of e.g. fonts requires an extra line or two in .Xresources. And if you run redshift, it looks a bit different than other applications. Since xterm is seeing active development again, I hope some of this gets fixed.

Only eshell feels slightly faster, something that this post kind of confirms. Eshell is a very interesting alternative to classical shells. Too bad it's a bit unfinished and sparsely documented.


Did you do any modifications to the WSL default? It's by far the slowest program on my work computer, like multiple seconds of typing latency at times. It's disgusting to use. Hyper is a bit better but it's obvious Windows wasn't designed by people who care about minor details like typing latency.


Nope. I just launched ubuntu.exe running Ubuntu 18.04 WSL on Windows 10 Pro latest stable release and it was the fastest terminal I ever used besides xterm on native Linux.

I do have Windows Defender disabled. I'm not sure if it scans every key press, but it will bring WSL's output to a grinding halt.

I use WSL / wsltty / tmux / terminal Vim (in WSL) as my primary dev environment. It's really solid, even with 1.5MB to 5MB markdown files. Vim uses about 8MB of RAM and a few % of CPU when typing. VSCode used 800MB of RAM and 50%+ of my CPU to open and idle in the same markdown file, with 65%+ CPU load when typing.


> Did you do any modifications to the WSL default? It's by far the slowest program on my work computer...

There’s a interesting comment by nickjj further down this post:

https://news.ycombinator.com/item?id=19444804

> A few months ago I asked the MS devs why their default WSL terminal felt incredibly responsive and they replied with a ton of details.

> You can read the thread here: https://github.com/Microsoft/console/issues/327

> it's obvious Windows wasn't designed by people who care about minor details like typing latency.

Given the number of people who have worked on the Windows OS with competing priorities and goals, over the course of more than 30 years, I don’t know that you can extrapolate that conclusion from personal experience (which seems to differ from the common experience) with one application.


In fairness, that might be less WSL and more Windows itself.


He was on macOS, so unfortunately he couldn't test xterm.

They do uxterm (which is a wrapper around xterm to put it into UTF-8 mode) here.

https://lwn.net/Articles/751763/

TL;DR: This quote sums it up:

"The first thing that struck me is that old programs like xterm and mlterm have the best response time, having worse case latency (2.4ms) better than the best case for all other terminals (10.6ms for st)."


Thanks for the link. Glad to see the numbers back up exactly what I felt.

I remember trying xterm, st and gnome terminal, and gnome terminal was really bad. It felt comparable to, or maybe even worse than, hyper. I also remember thinking "for such a less bloated terminal, st sure doesn't feel minimal". There was this level of latency present in st that xterm doesn't have.

xterm with tmux is, in my opinion, the ultimate combo. You get a lightning-fast terminal and all of the features you could ever want (windows, panes, searching, etc.). With a bit of configuring, xterm can do all sorts of nice things like clickable URLs, customized keybinds, font adjustment, etc. Plus you can make it look good.


Tune the xfps and actionfps values in config.h to really high numbers and st can be on the same level as xterm/mlterm.
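For reference, in the older st releases that exposed these knobs, the relevant config.h lines looked roughly like this (values illustrative; later st reworked its draw timing and removed them):

    /* config.h: frames per second st should at maximum draw to the screen.
     * Raising these trades extra CPU wakeups for less added latency. */
    static unsigned int xfps = 1000;      /* redraws triggered by X events */
    static unsigned int actionfps = 1000; /* redraws triggered by pty output */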


macOS has xterm if you run it under XQuartz.


I use (and recommend) wsltty. Just mind your font selection on HiDPI (I use one of the Fira variants, which renders quickly).


How does one of the Fira variants compare to Consolas?

I never thought about my font slowing me down. Although with the ubuntu.exe terminal, Consolas was blazing fast while wsltty has a noticeable amount of delay.

But there's a new release coming out that should remove 30ms of input latency. That might put it on par with ubuntu.exe. We'll see.



I'd love to see a followup study. GPU based terminal emulators (alacritty, kitty) have evolved a lot over the last two years.


My personal experience with GPU accelerated GUIs is that latency is always worse.

Drawing text is something that the CPU is pretty good at, and the GPU just means more back and forth over the bus, more driver coupling, more weird libraries.


I think you're right about GPUs tending to have worse latency, but I think it's because of the way the swap chain is set up and the use of blocking present calls. I've been studying this and am pretty sure I know how to make low latency on Windows work; mostly it's following the advice in [1]. If you do it right, then on modern hardware DirectFlip kicks in, and that saves you an additional frame that would ordinarily be eaten by the compositor. I plan to measure this at some point in the not-too-distant future, or would be happy to mentor somebody interested in taking it on.

[1]: https://devblogs.microsoft.com/directx/dxgi-flip-model/
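For anyone who wants to try it, the key calls from that article look roughly like this fragment (not a complete program; device and window creation omitted, names per the DXGI documentation):

    // Flip-model swap chain with a frame-latency waitable object.
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.BufferCount = 2;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SampleDesc.Count = 1;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;
    // ...CreateSwapChainForHwnd(), then QueryInterface to IDXGISwapChain2...

    swapChain->SetMaximumFrameLatency(1);
    HANDLE frameWait = swapChain->GetFrameLatencyWaitableObject();

    // Per frame: block until the swap chain can accept a new frame *before*
    // sampling input, so what you submit is as fresh as possible.
    WaitForSingleObjectEx(frameWait, 1000, TRUE);
    // ...poll input, render, swapChain->Present(1, 0)...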



That would be my concern. But there are so many factors, I'd like to see some scientific measurement.


AFAIK, GPUs are throughput devices, not latency-reducing devices.

Happy to be proven wrong!


iTerm2 is a GPU-based terminal emulator now too, as long as the Metal backend is active.


Thanks, I knew I had seen this before but unbelievably there is no date on that article.


Submissions on HN only get dated when they aren't from the current year. That one was.


It would be interesting to see an updated test. Since this report, multiple apps have undergone speed overhauls. The latest version of Hyper, for example, dropped DOM rendering in favor of Canvas/WebGL.

That being said, there is a major difference in perceived speed between working with terminals under a GUI like macOS's and something with fewer layers. I hooked up a Raspberry Pi to do some vim editing using Raspbian Stretch Lite and it felt like editing on a supercomputer (right up until I needed to compile something).


The only time I ever noticed a difference in input lag was when I had to fumble my way through the UEFI console.

I had no idea what I was doing, but the speed was eerie.


This is common for anything that emulates a text terminal in a highres-ish graphical mode using lowest-common-denominator VGA/VESA interfaces without any HW acceleration whatsoever (and often with a banked framebuffer, too).

By the way, on a 4K monitor you can discern that GRUB 2 draws its menu from the bottom up, as it takes a second or so.

The Linux framebuffer console is significantly faster, but scrolling (i.e. a fullscreen redraw) is still noticeably CPU-intensive.


Yep, and alacritty now _does_ support scrolling (it was only available on a branch at the time of the post, so tmux was used).


The iPhone 6s touchscreen had ~40ms end-to-end latency; it was good, but I wished it was better.

The iPhone XS had about 40ms as well, but the iPhone XR had 60ms+, in some cases closing in on 80ms. (It was 60ms + 10+ms from the display, if I remember correctly.)

I know it is a hard-to-sell "figure", but I wish Apple could find a way to market this, and improve on it. The Apple Pencil 2 and ProMotion are still nowhere close to real time. It still feels laggy, or janky.

Note: I try to Google for the source but unfortunately nothing came up.

Edit: The only thing I got was this Chinese Site with a Table of input latency. http://bbs.nga.cn/read.php?tid=16347225&page=1


"I know it is a hard to sell "figure", but I wish Apple could find a way to market this"

Blackberry did that with the PlayBook, pitting it against iPads. They showed it was more responsive by having a video game, a browser, and other stuff running with no lag. Apple could similarly highlight faster response times.


As mentioned, this is somewhat out of date. iTerm2 has gotten a GPU renderer since then, which should reduce the latency. Unfortunately I can't get typometer to work. I guess it's because it's not requesting the right permissions on macOS 10.14.


If you have an iPhone, http://isitsnappy.com/ is very easy to test with


Throw in things like text/syntax highlighting, autocomplete, search, etc., and 100ms latency will be a dream (speaking as an oh-my-zsh user).

Also: fancy fonts and ligatures


That kind of attitude is how some projects got so far behind. On MacOS I see 10-15ms with all of those features in either the terminal or VSCode.


Is zsh slower than bash or ksh on terminals?


I think the OP is mostly referring to how zsh users often load it up with extensions to do fancy completions, and fancy prompts requiring special fonts and doing lots of things like showing the current git branch. It can add some latency!


You've just described my situation :)


I spend lots of time on remote servers and VPS with ssh via Tor. So I'm used to ~500 msec latency.

But hitting VirtualBox VMs via VRDP via Tor, that can bring me to tears. Installing Debian, say, is fine. Then you try to login.

    debian login: uuuss
    debian login: usssss
    debian login: uuus
And if you get to the password prompt, it's far worse, because you can't see what you've typed. So I'd set the user as "x" and its password as "z". And once I'd logged in, do:

    setterm -repeat off                 # disable key autorepeat for this session
    setterm -repeat off >> /etc/issue   # append the control sequence to the login banner, so it applies before login too
But then there's the joy of using GUIs. Mouse pointers with ~500 msec or more of latency are crazy-making.


This is where mosh (https://mosh.org/) really helps


Thanks.

Except that Tor doesn't route UDP :(

Could do mosh via tinc via Tor, though. I'll try it.


Considering its simplicity, I wonder why st fared so poorly. Also, I’m surprised that eshell did well on the latency part, since it sometimes feels laggy (which could be due to its reimplementation of basic Unix tools in elisp).


It's quite fast when you are not on macOS :^)


Eeeeeuuugghh... desktop Linux.


The feeling is mutual, I'm sure.


> Considering its simplicity, I wonder why st fared so poorly.

I don't understand why on earth you would expect simple solutions to be fast. Most very fast software needs some level of complexity in order to special-case for fast platform primitives. See for instance glibc, which is not simple compared to musl but also rolls over it in benchmarks, because it uses many state-of-the-art algorithms and many architecture-specific optimizations.
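A toy example of the pattern in C++ (nothing like glibc's real code, which picks per-CPU SIMD variants at load time via ifunc resolvers): even a simple memcpy gets faster, and less simple, once you special-case by size.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    void *toy_memcpy(void *dst, const void *src, size_t n) {
        auto *d = static_cast<uint8_t *>(dst);
        auto *s = static_cast<const uint8_t *>(src);
        if (n < 8) {                    // tiny copies: a plain byte loop wins
            for (size_t i = 0; i < n; ++i) d[i] = s[i];
            return dst;
        }
        // Larger copies: 8-byte chunks plus one overlapping tail copy,
        // avoiding a byte-at-a-time remainder loop.
        for (size_t i = 0; i + 8 <= n; i += 8)
            std::memcpy(d + i, s + i, 8);   // compiles to a single 64-bit move
        std::memcpy(d + n - 8, s + n - 8, 8);
        return dst;
    }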


I think he means that st is simple regarding its features not its implementation (which I know nothing about).


> I don't understand why on earth you would expect

That’s unnecessarily aggressive.


From the comments of the corresponding LWN article (https://lwn.net/Articles/751763/):

You can easily get st down to 5-7ms latency by tuning the fps settings in config.h. Also the shell running inside it makes quite a difference


I find complexity more correlated with performance than simplicity.

I am also very surprised at the eshell performance; how on earth is it faster than most terminals written in C? Elisp is the slowest language ever.


I am curious how HN would construct a program which tried to minimize the latency of the "simplest" test possible. Let's say: press the space bar and the pixel at the top right of the screen turns green.

I used to use nasm on a DOS machine, where it was possible to write a rather straightforward assembly program that would take a keypress and update a pixel, but I am not sure how abstract/layered graphics stacks are on operating systems these days. I.e., what is the simplest/lowest-latency method to tell a computer to flip a pixel? SDL?
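A minimal SDL2 sketch of that test might look like the following (with vsync on, you still pay up to a frame of present latency, which is part of the point):

    #include <SDL.h>

    int main(int argc, char **argv) {
        (void)argc; (void)argv;
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("latency test",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_PRESENTVSYNC);
        SDL_Event e;
        while (SDL_WaitEvent(&e)) {             // sleep until an input event arrives
            if (e.type == SDL_QUIT) break;
            if (e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_SPACE) {
                SDL_SetRenderDrawColor(ren, 0, 255, 0, 255);
                SDL_Rect px = {630, 0, 10, 10}; // "pixel" at the top right
                SDL_RenderFillRect(ren, &px);
                SDL_RenderPresent(ren);         // reaches the screen at the next vsync
            }
        }
        SDL_Quit();
        return 0;
    }

Most of the space-to-green time ends up in the input stack and the compositor/vsync rather than the drawing itself.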


A few months ago I asked the MS devs why their default WSL terminal felt incredibly responsive and they replied with a ton of details.

You can read the thread here: https://github.com/Microsoft/console/issues/327


Wow this is really wonderful! Thanks for spawning that thread into existence and posting it here :)


While the reply is great, I feel it is missing the point a bit. The reason it feels so fast is, IMO, because everything else is so slow. Look at the numbers in the link below (near the bottom). Windows 10 is extremely slow. Linux is often twice as fast with the same hardware and the same software. Why they still haven't fixed it is beyond me.

EDIT. Doh, forgot the link.

http://isitsnappy.com/


> Linux is often twice as fast with the same hardware and the same software. Why they still haven't fixed it is beyond me.

Linux is no different. GNOME Terminal has a ton of input latency compared to xterm; it's like night and day. A lot of graphical apps on Linux have added input latency, just like on Windows. It's due to layers of UI abstractions and frameworks (the same as Windows, but with different UI toolkits, of course).

That link just shows that VSCode and Atom are both horrendously bad for input latency on any OS.



I personally use wsltty (based on mintty).

It's noticeably worse than ubuntu.exe in terms of input latency, but has better features for screencasts (e.g. hotkeys to adjust font sizes).

Last week they released a version that supposedly removes 30ms of latency, which sounds very promising, but the author hasn't created a Windows installer for it yet.


Thanks!

With MSYS2, pacman -Syu then pacman -Su will update your mintty to 2.9.9, which according to https://github.com/mintty/wsltty/issues/145 should include the 30ms patch.

It feels quite smooth to me, but I haven't done measurements.


https://www.blurbusters.com/blur-busters-lagless-raster-foll...

is a pretty good starting point. The whole website is dedicated to low-latency and high-refresh-rate graphics, with plenty of in-depth information. As far as I can tell, PCs should be capable of single-digit-millisecond end-to-end latencies with the right hardware (display and input devices), as long as you can afford to burn CPU cycles on your drawing routine.


Can confirm that latency on iTerm2 is awful. I tried it for a while at the behest of a colleague and switched back to Terminal.app because I couldn't bear the lag.


How long ago did you test that? I recall trying out iTerm2 some time ago and being appalled at the latency as well, but fairly recently I reconsidered it and found it to be more than quick enough for my needs.


At least a year ago, but my recollection is that it was context-dependent too. Like, sometimes it was fine, and other times it wasn't, but it wasn't really clear to me what caused the problem (it wasn't obvious things like low memory or having a giant scrollback).


I've used KDE's Konsole for 20+ years. How would I go about testing its latency?


Oh, I guess they were using https://github.com/pavelfatin/typometer ?


Looks like around 31ms: https://i.imgur.com/qHblAg1.png Good enough for me I guess.


Why is xterm not included? It’s probably the most popular terminal on Linux. It’s also noticeably laggy.

I’m surprised that eshell performance is so good. Maybe because it supports fewer features than a standard terminal? I use Emacs and Elisp every day and both are dog slow. It’s unfortunate that they still haven’t managed to replace the interpreter and bytecode compiler with a faster Scheme compiler after all this time.

As an aside, GNU projects seem broken in their management. Every project seems to be run by some get-off-my-lawn egomaniac (e.g. RMS, Ulrich Drepper).


> Why is xterm not included?

Because it doesn't run on a Mac. It was tested elsewhere though and found to be among the quickest, so maybe your configuration is suboptimal.

> Every project seems to be run by some get off my lawn egomaniac. (ex RMS, Ulrich Drepper)

The list of projects currently run by that list of people comprises "emacs and nothing else." Drepper left glibc maintainership many years ago and RMS has not been actively involved in anything other than emacs for a similar duration (aside from crippling gcc backend splitting).


Doesn't X11 ship with macOS (i.e. to run "legacy" Unix apps)? I could've sworn I've used xterm before on it.


It doesn't ship with it anymore. It used to, a number of years ago.


No, but you can install it: https://www.xquartz.org/


Doesn't XQuartz come with xterm?


There are Linux tests in that benchmark. Moreover, you can run xterm on OSX.


I would assume that this is caused by emacs having some pretty complex logic to minimize screen updates to the absolute minimum, in order to support slow serial links to text terminals, which is presumably also used in graphical modes.



