1. The precision of your resistors is a limiting factor. If you use standard 1% resistors, you can't even put 8 switches on a single ADC; the precision of the largest resistor ends up being greater than the value of the smallest one. Increasing the precision of the resistors helps to a degree, but it drives up the price dramatically.
2. Contact resistance and switch bounce become a problem -- instead of just making a key show up twice, they could potentially result in a key you didn't press showing up.
3. Using ADCs doesn't get you away from scanning. ADCs have a sampling rate too -- and unless you pay a lot for your ADCs, it'll be slower than the rate at which you could scan a diode matrix.
> 1. The precision of your resistors is a limiting factor.
Suppose your resistors and measuring equipment were good enough that you needed 5% between neighbouring values. Then you can squeeze in 141 distinct values between 1 Megohm and 1 kilohm. Seems plenty to me.
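The back-of-the-envelope count above is easy to reproduce: with each resistor value 5% above its neighbour, the number of steps that fit between 1 kilohm and 1 megohm is log(1000)/log(1.05). A sketch of the arithmetic (not a circuit design):

```python
import math

r_min = 1e3   # 1 kilohm
r_max = 1e6   # 1 megohm
step = 1.05   # 5% spacing between neighbouring values

# How many multiplicative 5% steps fit in the range.
steps = math.floor(math.log(r_max / r_min) / math.log(step))
print(steps)  # 141, matching the figure above
```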
> 2. Contact resistance and switch bounce become a problem -- instead of just making a key show up twice, they could potentially result in a key you didn't press showing up.
They definitely become issues, but they strike me as manageable. For good switches, contact resistance should add perhaps 1 ohm of variation, and above we saw that 50 ohms between the nearest switch values is good enough.
Switch bounce means that you can't afford to accidentally average over a bounce; you need to sample fast (say 200 kHz) and then take, say, the Nth highest of 20 samples. So you get a result 100 microseconds after the first switch went high. (Obviously that algorithm is just a guess, to show the kind of options engineers have when they come up against the real system.)
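The guessed debouncing scheme could look something like this. Everything here is illustrative: the sample rate, the window of 20, and the choice of rank are all the guesses from the comment, not a real keyboard's firmware:

```python
def debounced_reading(samples, rank=3):
    """Return the rank-th highest of a window of ADC samples.

    At 200 kHz, 20 samples span 100 microseconds; taking the
    Nth highest discards brief low glitches caused by bounce
    instead of averaging them into the result.
    """
    window = sorted(samples, reverse=True)
    return window[rank - 1]

# 20 samples taken at 200 kHz; two bounce glitches (0) among
# otherwise stable readings around ADC code 512.
samples = [512, 0, 511, 513, 512, 0, 512] + [512] * 13
print(debounced_reading(samples))  # 512
```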
> 3. Using ADCs doesn't get you away from scanning. ADCs have a sampling rate too -- and unless you pay a lot for your ADCs, ...
Yes, you need a good ADC, but that ADC will still be a single IC that does nothing vastly fancy. Such a thing is difficult and expensive to buy retail, but if I were Cherry I might well be able to negotiate a deal that brought it to a fraction of the end price of the keyboard (which I would sell at a premium).
How would you detect multiple simultaneous keypresses, either for n-key rollover or for shift/ctrl/alt types of keys?
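One way to see why rollover is awkward with a single resistor ladder: two closed switches put their resistors in parallel, and the combined reading can collide with a legitimate single-key value. The resistor values below are hypothetical, chosen only to make the ambiguity obvious:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Hypothetical ladder values for three keys.
r_a, r_b, r_c = 2000.0, 2000.0, 1000.0

# Pressing A and B together reads the same as pressing C alone.
print(parallel(r_a, r_b))  # 1000.0
print(r_c)                 # 1000.0
```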
Consider that 80wpm is right around 60ms a keystroke. 50ms jitter will have some keystrokes arriving twice as slowly as others. I suspect that's a noticeable difference; indeed, I suspect I've been noticing it for decades now.
I also never liked GNOME 3, and suspected it must have something to do with it being a compositing WM and the latency it probably adds on top.
Compositors tend to aim for 60fps, because that's what most displays can handle. That means you _start_ with 16ms worst case latency on the output side.
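The worst-case figure follows directly from the frame period: at 60 Hz a frame lasts about 16.7 ms, so input that just misses one frame waits a full period before it can appear. A trivial sketch of the arithmetic:

```python
refresh_hz = 60
frame_ms = 1000 / refresh_hz  # duration of one frame in milliseconds

# Input that arrives just after a frame is composed waits
# roughly this long before the next frame can show it.
print(round(frame_ms, 1))  # 16.7
```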
I still think that there are much longer (and noticeable) delays in the terminal emulators themselves, dwarfing the compositing and display delays.
A while back, I was showing symptoms. It turns out that I was tensing during periods of waiting to see feedback. Now that I'm aware of it, it's no big deal, and I don't do it anymore. Symptoms gone.
Are you sure you're typing 20 letters per second?
In game terms, input can approach that speed, which is really what I was writing about. I should have said I meant it by analogy.
For me, a whole lot depends. If I am transcribing, I look at the screen, and it can matter. Same for a quick type of a common phrase. I'll blast that out.
For most things, it's a few CPS and slower.
And this is all modal. Interactive, like cursor navigation is different from general character input, which is different from common, short words.
They may just do it when the latency varies.
However, for a string of them? Yeah, could be a factor. My own trigger was mere awareness.
Just knowing it could be latent to excess triggered that tensing, like "stay ready."
All I'm saying is it's plausible. People can, do, and will respond very differently to these kinds of things. Worth investigating.
For the arrow keys, for example, or anything interactive, you will probably have to be watching for the response.
> Is it really the fastest terminal emulator?
> In the terminals I've benchmarked against, alacritty is either faster, WAY faster, or at least neutral. There are no benchmarks in which I've found Alacritty to be slower.
Despite them already acknowledging that they have problems with latency
I think the author may have walked back on that a bit, but this is definitely the thing that turned away most of the people I know who tried it.
The problem with the IT community is we often try to argue personal preference using some bastardisation of the scientific method. So you'll get people making claims about readability et al. But really it's just personal preference.
Personally I quite like them when used with a typeface that doesn't go nuts with ligatures. Hasklig is a great example; it's based on Source Code Pro and only really uses ligatures for character combinations that are generally only used together in an ASCII art kind of way. But that's just what I find pretty; others will undoubtedly hate it and have their own reasons too.
So my advice is just to experiment. If you find them pretty then use them; if you don't then don't use them.
I mean, spot the difference between ==, = and ＝. Or <= and ≤. I especially hate != as ≠.
And since at some point we have to use a different editor or read a git diff, we need to read the regular ASCII form anyway. So what's the point and why bother learning ligatures?
Graphics are a likely culprit but even then there can be multiple layers to the problem, sometimes literally. Putting bits in a window can be surprisingly expensive and it’s hard to have nice bells and whistles and speed at the same time.
It’s hard when given many bytes at a time. I once observed that simply splitting a “vim” buffer vertically made my terminal receive a significantly greater number of bytes (such as extra spaces for layout and several more special terminal sequences). The split also seemed to trigger more “full screen” or “most of screen” refreshes, versus the smaller and cheaper updates that are typical of a single, unsplit window. Scrolling, as it turns out, is a lot more complex in a split buffer.
If I ever need to blow off some steam, I just start smacking the "degauss" button.
Puts the meta key in the wrong place but I'm usually using vim for remote editing anyway.
1. GTK3 is just slow.
2. The GTK2 code path in Vim is older and thus more optimized than the GTK3 code path.
Which one is it?
> It turns out the VTE library actually writes the scrollback buffer to disk, a "feature" that was noticed back in 2010 and that is still present in modern implementations. At least the file contents are now encrypted with AES256 GCM since 0.39.2, but this raises the question of what's so special about the VTE library that it requires such an exotic approach.
https://bugzilla.gnome.org/show_bug.cgi?id=664611#c48 seems to clarify. This affects all vte-based terminal emulators, but only when set up with unlimited scrollback.
See https://lwn.net/Articles/752924/ for a quick test I made which seems to confirm that in the limited scrollback case, nothing is written to disk.
I'd like to see it mentioned more in these terminal-related articles and discussions...