Since the problem is bounce rather than outright spurious activation, it's possible to latch/register/sample on the initial switch closure and only delay recognition of the subsequent release. This seems like a good idea, since the press is almost always where a user's attention is focused, and the cases where release timing matters usually involve holding the key much longer than the bounce period (e.g. modifier keys, cursor/player movement). Anyone know if keyboards are doing this?
In low-power/mobile situations this allows the host CPU to sleep as much as possible, and even the scanner, which walks the rows (or columns), need not be running all the time.
While many SoCs have an integrated keypad interface, there are also keypad controller ICs (some GPIO expanders can double as keypad controllers) that free the programmer from having to manually implement the scanner logic. I suspect these types of chips would be found in a typical USB keyboard. Software may still have to deal with additional debouncing and ghost keys, depending on the complexity of the IC.
Edit: I suppose one could configure the keypad controller to apply no debouncing and then perform a software debounce the way you describe; a sketch of that follows below.
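For concreteness, here's a minimal sketch of that press-latching scheme, assuming a firmware loop that samples each key once per ~1 ms scan tick (the tick rate and the 5-tick release window are assumptions of mine, not taken from any real keyboard firmware):

```c
#include <stdbool.h>
#include <stdint.h>

#define RELEASE_DEBOUNCE_TICKS 5   /* ~5 ms at a 1 ms scan tick (assumed) */

typedef struct {
    bool    pressed;     /* debounced state reported to the host */
    uint8_t open_ticks;  /* consecutive scans the contact has read open */
} key_state;

/* Called once per scan tick with the raw (bouncy) contact reading.
 * Presses register immediately; releases are recognized only after
 * the contact has been stably open for the full debounce window. */
bool debounce_eager(key_state *k, bool raw_closed)
{
    if (raw_closed) {
        /* Latch the press on the very first closure; any bounce
         * during the press just re-confirms it. */
        k->pressed = true;
        k->open_ticks = 0;
    } else if (k->pressed) {
        /* Bounce-open readings reset above; only a sustained open
         * run counts as a real release. */
        if (++k->open_ticks >= RELEASE_DEBOUNCE_TICKS)
            k->pressed = false;
    }
    return k->pressed;
}
```

The press registers with zero added latency; the cost is that a genuine release shorter than the debounce window goes unnoticed, which is fine in practice since bounce lasts only a few milliseconds.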
* Use a responsive editor (makes the most difference).
* Use a low-latency keyboard, if possible.
* Choose programs that add global keyboard hooks wisely.
* Turn off unnecessary “image enhancers” in your monitor.
* Enable a stacking window manager in your OS (e.g. in Windows 7, 8).
Tested editors (average latency, ms):
> So today, after almost 6 months of extensive testing, we are enabling zero latency typing as the default setting in all IntelliJ-based IDEs, including CLion.
IDEA without the zero-latency mode is at 198.8 ms.
This is amazing. I thought I knew everything about mech keyboards, but this opens up a new perspective.
For instance, old “complicated” Alps switches (circa 1990) have an extremely clean transition from off to on, with almost no bouncing.
Update: on my system Emacs has an average of 6 ms vs VSCode's 17 ms.
My $20 Raspberry Pi with a cheap USB keyboard has neither problem, though...
It seems difficult to make the screen worse than the previous model.
I thought it was a fascinating read.
Yep. There is one built-in frame of latency in the Windows composition engine. It's even documented here: https://msdn.microsoft.com/en-us/library/windows/desktop/hh4...
Rather than performing composition as late as possible, which would be beneficial for latency, Windows performs composition as early as possible, at the start of the frame. This introduces a completely unnecessary extra frame of latency (16.67 ms at a 60 Hz refresh rate) into everything you do. There is no supported way to disable the compositor on Windows 8-10 (though the article links to a scary-looking hack that apparently works in Windows 8), so you're stuck with this.
I really hope this situation will be improved in future updates to Windows 10. Microsoft is still making improvements to the compositor (window resizing, for example, is much smoother in the Creators Update), but as far as I can tell, the one-frame delay still exists.
It made the thing a pleasure to use, and I didn't hate it before; I just didn't know what I was missing.
The latency problem began when I upgraded from 10.8 to 10.11. (I use Mitsuharu Emacs, but would be stunned if the problem were absent from plain-FSF Emacs.)
Interestingly, this topic is quite old. The original mainframe systems had channel controllers for I/O that did a lot of processing locally, including echoing and even local editing, freeing the CPU for "real" work. This approach was thrown out when minicomputers arrived; it's why the Unix I/O system looks the way it does, and why C, in a then-noteworthy departure from most languages of its time, didn't include I/O operators.
Even in the pre-TCP ARPANET, network latency on interactive connections was an important topic (this was when the main backbone was a single 56K line, IIRC). The MIT SUPDUP protocol (a "Super Duper" remote-access alternative to Telnet) included a local editing protocol for connections to remote machines. Even non-line-mode applications could interact with it, essentially running part of the interface remotely, all in the interest of zero latency.
Idk, I don't expect Word to paint as quickly as vim, because why would it? It's huge.
Word is what I use for editing rich text, and I can tell you from firsthand experience that doing the same in vim or emacs is much more of a chore and less intuitive.
Idk, seems like comparing a wrench to a hammer from my perspective, that's all.
But that's just lazy design. The immediate effect of typing a character (i.e., it showing up on screen) hasn't changed in decades. Yes, Word may do other stuff, but none of that other stuff belongs in the critical path for typing latency.
Think of a database like Oracle. Oracle does lots of stuff, but its critical latency path (committing simple transactions to the log) is as fast as or faster than "simpler" ACID databases.
Obviously a bunch of extra frames of latency have snuck in over time, and at a refresh rate of 60Hz it's just not noticeable enough, for enough people, to have proven worth fixing.
(I've read of a lot of people finding 60Hz monitors more annoying to use after they've spent some time with 144Hz. So roll on 144+Hz... perhaps either we'll all upgrade, and the cycle will repeat, or our eyes will be retrained and we'll start to demand more from our existing equipment.)
 - https://mosh.org
Several (Ok, many. I feel old now. Satisfied?) years ago, doing remote development over a VPN, Emacs + Tramp mode was a lifesaver.
1. mosh instead of ssh
2. run the editor locally and edit files remotely (either using the editor's built-in support for that or something like sshfs)
lsyncd - https://github.com/axkibe/lsyncd
Does anyone know if there is an existing tool for these kinds of measurements on a Mac?
Doing a few google searches mostly turns up this article. But maybe my googling skills are weak.
It took a few trials, and I had to disable transparency. I think it also doesn't like blinking cursors, and if (...) is turned into its own glyph, you should start the line with some dots of your own to prevent that.
At the time this was published, Atom 1.3 had been released, but I don't know when the author recorded their data. Atom 1.1 was from Oct 2015, so that's not too out of date.
For contrast, Linux takes whole microseconds.
That table shows 0.09us (or 90ns) for a 1-way IPC; a cheap system call on my laptop (using https://raw.githubusercontent.com/tsuna/contextswitch/master... because I'm lazy) is about 59ns, 2-way.
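If you want to reproduce a number like that yourself, here's a rough sketch of the measurement (my own quick version, not the code from the linked repo):

```c
/* Rough syscall-latency probe: times SYS_getpid round trips.
 * Build with: cc -O2 syscall_bench.c -o syscall_bench */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 10 * 1000 * 1000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);   /* forced into the kernel; not cached by libc */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per 2-way syscall\n", ns / iters);
    return 0;
}
```

SYS_getpid isn't served from the vDSO, so each iteration is a genuine user/kernel round trip; dividing the elapsed time by the iteration count gives the per-call cost.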
As I recall, the problem is that raw IPC costs are a red herring. It's possible to get the IPC costs almost arbitrarily small if you're not actually toting any data around, or if you don't have memory-protection domains separating the components (as in Scout).
If you are toting data around, such as reading or writing to a filesystem server, you have three options:
* Copy it. That's kind of expensive.
* Share it. Copy-on-write magic, for example. Unfortunately, that requires fiddling with the VM system, to set up a mapping between user-space and filesystem-server-space, for example. Fiddling with the VM system can be surprisingly expensive, too.
* Pre-establish a shared-memory buffer. This is what L4 does(?), if I'm reading section 3.2.2 correctly. It may be much better than the other options, though I have no experience there; see the sketch after this list.
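As a rough illustration of that third option (a sketch of my own, not L4's actual mechanism; the /ipc-demo name and fixed 1 MiB size are arbitrary), two Unix processes can map the same POSIX shared-memory object once at setup and then pass only small (offset, length) descriptors over the IPC channel:

```c
/* Pre-established shared-memory buffer, POSIX-style.
 * Done once at setup; afterwards, IPC messages only need to carry
 * an (offset, length) into this region instead of the data itself.
 * Link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB shared window */

int main(void)
{
    /* Both sides open the same name; one of them creates it. */
    int fd = shm_open("/ipc-demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, BUF_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* "Sending" data is now just a write into the mapping; the
     * peer sees it without a copy or any further VM fiddling. */
    strcpy(buf, "hello from the client side");

    munmap(buf, BUF_SIZE);
    close(fd);
    shm_unlink("/ipc-demo");   /* cleanup; real code would do this once */
    return 0;
}
```

The point is that the expensive VM fiddling happens once, at setup, rather than on every message.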
(Excellent paper, by the way. I'm hoping to get to do something with L4 at some point; microkernels are neat and it seems like it doesn't suck.)