I remember when I had used IntelliJ's product line for a good while and then opened up Vim. I was dumbfounded that I had somehow not noticed that IntelliJ was slow. It's fast enough - it's no Atom - but Vim was amazing next to it.
If you're not getting more out of IntelliJ than vim, then sure, vim will make you happier. But then you're also not getting from IntelliJ what it's trying to provide.
A GUI can be responsive and fast even when doing a lot of work behind the scenes. I'm perfectly fine with it taking its time figuring out which arguments I can use, or whatever else. But do not drop perceived responsiveness - I want to be able to type and/or click menus at all times.
I have an HP48 calc. Even though it had a monstrous CPU for its time, the system had layers of interpretation that rendered the UX well... sluggish. But the ergonomics and paradigm were so neat that you didn't need real time. You could keep stacking functions on the HP48 stack; you knew in advance how it would behave, so it wasn't an issue. And counterintuitively, I enjoyed the pauses because they gave me time to think about what to do next.
On a PC, the latency is less predictable both in when it occurs and in how it is handled, so when a window is unexpectedly slow to appear, you have to notice and wait rather than continue typing; otherwise your input may be sent to the wrong window.
If I'm editing a file and type several repeating j or l commands (e.g., 'jjjjj' in command mode), there is noticeable latency between the last movement command and the last cursor movement.
I get no latency, however, when performing the same commands using the right or down arrow keys.
The 'h' or 'k' commands have no latency, either, just 'j' and 'l'.
I've been using vim for years and still find it frustrating. I haven't noticed the same problem with other vi clones (e.g., nvi or elvis).
(aside-rant: evil-mode is also not really a substitute for vim -- when you say you're a vim user, people often ask "why not just use the vim emulation in editor-X?", and the reason is that vim emulation has never, in my experience, been as good as the real thing; things always work subtly differently)
I think there were two or three things that weren't right for me out-of-the-box, but 15 minutes and 30 lines of elisp later it was fixed. I really need to maintain a fork or something that is as much like vim as possible, but with options for enabling specific emacs features, because everyone has some things that emacs does that they like better than how vim does it.
My personal example: s/foo/bar/g on a line containing "foo Foo" becomes "bar Bar" by default in emacs, which I find to be amazing, but is definitely not how vim works.
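The behavior is easy to sketch. Here's a rough illustration of that case-preserving substitution in Python (just the idea, not emacs' actual implementation):

    import re

    def match_case(original, replacement):
        # mirror the casing of the matched text onto the replacement
        if original.isupper():
            return replacement.upper()
        if original[:1].isupper():
            return replacement.capitalize()
        return replacement

    def smart_sub(pattern, replacement, text):
        return re.sub(pattern,
                      lambda m: match_case(m.group(0), replacement),
                      text, flags=re.IGNORECASE)

    print(smart_sub("foo", "bar", "foo Foo FOO"))  # -> bar Bar BAR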
Even for solo performance, but especially when performing with an ensemble or choir.
Not that I think he should bother doing this, I'm just being the fun police. :)
PROMPT="\$(sleep 0.04; prettyprompt \$?)"
edit: I tried something out: https://github.com/AlecBenzer/terminal-latency. I had 92% accuracy differentiating 10ms from 100ms over 28 tries.
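The test is basically a two-alternative forced choice. A minimal Python sketch of the idea (not the repo's actual code; the delays and trial count just mirror the numbers above):

    import random, time

    trials = 28
    correct = 0
    for _ in range(trials):
        delay = random.choice([0.010, 0.100])   # 10ms vs 100ms
        input("press Enter and watch for the marker...")
        time.sleep(delay)
        print("*")
        guess = input("was it short or long? [s/l] ").strip().lower()
        if (guess == "s") == (delay == 0.010):
            correct += 1
    print(f"accuracy: {correct / trials:.0%}")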
And yet, many years later, I had an impossible time explaining to the more pointy-haired end of a large development division why IBM Rational products might not work so well when accessed from halfway around the world. (They are/were very "chatty" between client and server, and the cumulative latency of all those round-trips simply killed you.)
Latency's still a sticking point, and a lot of people don't understand/believe it until they see it for themselves.
P.S. It may sound like I'm talking about something else -- apples and oranges. But no. What I mean is, in many domains, people don't get latency until they experience it.
I suppose that decades ago, the typewriter was just fine. Until you got fast enough. Then, as jammed keys and other physical limits started to cap your performance, you realized what people meant about "better" mechanisms.
I wonder whether in some cases, people like musicians have a better natural perspective on it. Domains where it's more readily apparent and dealt with.
Sorry, my mind's a bit all over the place, today...
The developers had turned build-parallelism up to 99 simultaneous jobs to try to keep the CPU busy while waiting for the network, but a build was still only using about 10% of a single CPU. An incremental build would take over a minute for dependency checking, and a full rebuild would take hours.
Exactly. That's also precisely what I was going for in the article.
> Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative’s phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.
> "The normal reaction time for a human is about a quarter of a second, which is why they do that,” says Allison, who is also the founder of the annual World Game Protection Conference.
Sure, those are generally for video games. But pretending <X ms is the be-all and end-all of human perception is arrogant.
280 frames/sec is about 3.5 ms/frame. Signals don't even leave the retina that fast, let alone propagate to brain areas linked to perception and action.
Perhaps the higher frame rate makes the motion smoother, which allows the pilot to estimate and extrapolate an object's motion more accurately, or something like that.
I don't think a fighter pilot inherently reacts faster because of the resolution, but he is trained to detect motion in his peripheral vision. Perhaps he notices the smooth motion more than people who rarely train that part of their visual system. It would be interesting to see how a trained and experienced hunter would compare, as they often use peripheral vision as well.
The nervous system is pretty slow. Each action potential takes about a millisecond, and is followed by a refractory period of another 1-2 milliseconds or so. This limits the speed of transmission in the nervous system generally.
The visual system, specifically, is even slower. There's a fairly complicated electrochemical cascade that turns photons into electrical impulses in rods and cones, and it's not fast. Drum (1982) describes several attempts at measuring the latencies of cones. The exact value depends on several factors like adaptation level and the visual stimulus, but they're all in the tens of milliseconds.
Are you saying that the rods and cones only "sample" every so often?
Yes, in that phototransduction is a sluggish chemical process. The impulse-response functions of photoreceptors are pretty well-known (e.g., Figure 1 here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1189952/?page=5). The tiny square wave at the bottom is the visual stimulation; the other traces are the responses of individual photoreceptors.
Figure 4 of the paper (p. 689) shows the response to a pair of flashes. In the bottom-most trace, you can see that the photoreceptor "misses" the second visual stimulus because the responses evoked by the first and second stimuli overlap in time and interfere.
I'd say that the visual system does get a continuous stream of inputs, but its output is low-pass filtered (and clipped, rectified, etc) so that faster inputs are not always reflected in its output.
It is indeed an interesting topic, thanks for the pointers.
Moving a cursor horizontally across a 4k display in a relatively slow one second is 64 pixels per frame at 60Hz, which is significantly wider than the cursor itself. At 144Hz it's about 26 pixels per frame, and at 280Hz it would be about 14 pixels per frame.
Even if you have a low-persistence display, you need to have a pretty high refresh rate to draw smooth motion. Hardly anything needs to be animated all the way down at 1 pixel per frame, but hardly anything currently comes at all close to that ideal strobe-free smoothness.
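The arithmetic, for anyone who wants to plug in their own display (assuming a 3840px-wide panel and a one-second traverse, as above):

    width_px, travel_s = 3840, 1.0
    for hz in (60, 144, 280):
        print(f"{hz:>3} Hz: {width_px / travel_s / hz:5.1f} px/frame")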
I'm legitimately curious, since I've never heard anything like that.
My subjective results were somewhere in between the values you have as an example. I stopped noticing significant difference somewhere north of 160Hz, and at 200Hz they were barely perceptible.
Very interesting to hear that someone has actually been measuring this in a more controlled way than I did some 18 years ago.
It's also quite interesting how reluctant people are to even acknowledge the existence of something they themselves haven't noticed. Almost everyone dismissed my eyewitness account simply because they had read/heard that the eye can only notice <some number between 20 and 60> frames per second, and thus I couldn't possibly see any difference in motion above that threshold.
You could likely data mine stat sites for before/after comparisons of players purchasing very high refresh rate monitors.
OFC there may be some bias to try harder once you drop >$400 on a hobby. In my own subjective experience 60 -> 144 FPS was night and day. Roughly it ended up being a 50% perf improvement.
Did you notice how movies always look smooth at 24 FPS while games tend to look horrible at the same frame rate?
That's because unlike with real cameras, games render frames with zero exposure time (no motion blur). You could also call it temporal aliasing. Modern 3D engines use tricks to alleviate the problem, but it is far from perfect.
The result is that you need a much faster framerate than what is normally perceivable in order for fast-paced games to look smooth.
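A toy way to see the difference: a 1-D "dot" moving 8px per frame, rendered game-style (one instantaneous sample) vs. camera-style (averaged over the frame's exposure). Numbers are made up for illustration:

    WIDTH, SPEED, SUBSAMPLES = 32, 8, 8  # px, px/frame, samples per exposure

    def render(frame, blur):
        row = [0.0] * WIDTH
        n = SUBSAMPLES if blur else 1
        for k in range(n):
            pos = int(frame * SPEED + k * SPEED / n) % WIDTH
            row[pos] += 1.0 / n          # each sub-sample adds a share of the light
        return "".join("#" if v > 0.5 else "+" if v > 0 else "." for v in row)

    for f in range(3):
        print("sharp  :", render(f, blur=False))
        print("blurred:", render(f, blur=True))

The sharp frames show a single bright pixel jumping 8px at a time; the blurred ones smear that light across the whole path, which is why film at 24 FPS reads as smooth.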
As for latency, even at 60FPS the resulting 16.7ms is well under human reaction time. It matters in competitive gaming because what matters is not your gaming experience but how much faster you are than your opponent.
Also note that VR has its own set of problems and requires even faster refresh rates and lower latencies. That's because the brain coordinates vision and head movement and any discrepancy feels weird.
One or two years later, it turned out that because humans use both eyes for vision, we supposedly needed double the frame rate: 60fps became the key number and the limit of human perception. All first-person shooters should try to hit 60fps. That was the era of Doom, Half-Life, and Counter-Strike.
60fps then settled in for a long time. No one was discussing anything more. (Or we did, but we stayed at 60fps; Doom 3 even tried to cap the frame rate at 60.) We needed 120fps or more for AR to get an immersive experience, but I thought that was only for AR.
The iPad Pro's 120Hz screen really was similar to when I first saw and used a Retina screen: the wow factor. For a long time I had thought 60fps was enough, and that the sluggishness of computer UIs was all a software problem.
The conclusion is, we may or may not be able to react to 120fps timing, but we definitely feel the difference. And as a matter of fact, once you know it is a frame rate problem, you can see how 120fps is still not quite enough. It is like the early 330ppi Retina smartphone screens: not quite at the imperceptible-difference point yet. A lot of people may not agree that this latency and slight sluggishness matter (early Android users didn't think it was a problem), but I like where all this is going; latency is now taking a front seat for optimization.
Competitive games have long been running full speed, not locked to the screen refresh because of the latency in screens. You don't want to wait for the screen to display something that is already ancient history.
(In Quake 2 you could jump higher if your computer could run more than 200 fps so it was popular to look down on the ground before jumping up on crates for a while.)
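(Not necessarily Quake 2's actual mechanism, but the general class of bug is easy to reproduce: if physics is integrated once per rendered frame, the jump apex depends on the frame rate. A sketch with made-up, Quake-ish numbers:)

    def jump_apex(fps, v0=270.0, g=800.0):   # jump velocity, gravity (units/s, units/s^2)
        dt, y, v = 1.0 / fps, 0.0, v0
        while v > 0:                          # explicit-Euler steps until the apex
            y += v * dt
            v -= g * dt
        return y

    for fps in (30, 60, 125, 250):
        print(f"{fps:>3} fps: apex = {jump_apex(fps):.2f} units")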
But the thing that's most visible to me personally when comparing 30 Hz vs. 60 Hz is not so much latency, but the smoothness of quick movements (e.g. when you turn the camera really fast). Might be different for a serious FPS gamer, though; I mostly play Minecraft.
Next screen I buy will be something higher, but right now I'm back at 60 Hz with an IPS screen and I'm not totally happy with the flickering. Haven't tried overclocking yet; I thought flatscreens were locked to a certain Hz.
The general techniques used, in order of preference, are: eliminating/skipping work, lazily doing work later, and replacing external commands with zsh builtins.
But, that's ignoring all the other little delays in the system. For instance, monitors have a delay of a few milliseconds.
In short, this is a massive oversimplification.
Kind of sad, because there are lots of places where a python script would be a nice way to solve a problem, but I can't use it because it adds an annoying delay.
It does raise an interesting question about whether there is now an opening for a new scripting language to develop. I don't think any of the well-known languages developed in the last ~10 years have been what you might call "scripting" languages.
What if startup time for python scripts were 0.1ms? In this case you could imagine writing throwaway scripts and just spawning a new process every time you wanted a result. No need to build a shell to (say) hold onto a DB connection or things like that. Just write the core business logic.
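For reference, the startup cost being discussed is easy to measure yourself; this times a do-nothing child interpreter, so it includes process spawn plus interpreter init:

    import subprocess, sys, time

    t0 = time.monotonic()
    subprocess.run([sys.executable, "-c", "pass"])
    print(f"cold start: {(time.monotonic() - t0) * 1000:.1f} ms")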
That being said, for the most part, I/O is more than 35ms.
If you have a "serious" Python program, though, you easily enter seconds-long-startup from dependencies. There's some libraries that do not pay a lot of attention to this and mean things like web servers a bit more frustrating. You can no longer spin up a server when the request comes in, but must do it before.
Perhaps, since I haven't had any workflow lately which blocks 30-50ms on my input, I'm not the right person to comment on this. That said, the article uses python for a script that is re-run from scratch every time. Perhaps an interpreted language with a known slow start was the wrong tool right from the start?
Off the top of my head, each keypress goes at least through the keyboard driver, X11, possibly the window manager, the terminal, the line discipline, the terminal again, font rendering, compositing, and then the display. It only takes one of these steps to mess up for latency to go up.
The article starts with a link to https://danluu.com/term-latency/, which disproves both assertions.
I've noted the demise of the vertical scrollbar before. This is the first chance I've had to ask someone who's apparently implemented a horizontal one directly.
I'm quite curious as to the rationale.
The latency went down to 200µs (except when git is used, then it’s ~4ms), and it’s fast enough.
Here’s a video: https://s3.kuschku.de/public/2017-08-21_16-43-41.mp4
I’ll put it on https://github.com/justjanne/powerline-go later today, once I’m done.
The problem with such a setup is that it relies on consciously noticing latency. It is very likely that the boundary where latency begins to impair performance is far lower than where you actually notice it.
A more sophisticated test would be to measure e.g. typing speed vs. latency. You'd probably need some customized hardware to get a low baseline latency so that you can then add latency in a controlled way. Even then I imagine that quite a few trials would be needed to draw any conclusions.
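A software-only sketch of that experiment in Python (Unix-only, using termios; it injects an artificial delay before echoing each keystroke and reports typing speed, with the caveat that the terminal stack adds its own unknown baseline on top):

    import sys, termios, time, tty

    def typing_trial(added_delay_s, n_chars=50):
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        stamps = []
        try:
            tty.setcbreak(fd)                 # unbuffered input, no auto-echo
            for _ in range(n_chars):
                ch = sys.stdin.read(1)
                stamps.append(time.monotonic())
                time.sleep(added_delay_s)     # the latency under test
                sys.stdout.write(ch)
                sys.stdout.flush()
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        return len(gaps) / sum(gaps)          # keystrokes per second

    print(typing_trial(0.05))                 # e.g. 50 ms of added latency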
There were people who just didn't notice anything, but everyone who played Quake halfway decently had real problems with those. Some went so far as to only use wired ball mice, not even optical ones.
1. Connect an LED and a button to the Arduino.
2. Program the Arduino to have the LED flash when the button is pressed.
3. In the program, add a constant delay between the button press being registered and the LED flashing. Increase this delay until the LED starts flashing noticeably after the button is pressed. (A rough software-only version is sketched below.)
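A software-only approximation in Python (it loses the Arduino's main advantage, a near-zero latency floor, since keyboard and display add their own delays; Unix-only):

    import sys, termios, time, tty

    def flash_after(delay_s):
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)
            print(f"delay = {delay_s * 1000:.0f} ms - press any key")
            sys.stdin.read(1)        # the "button press"
            time.sleep(delay_s)      # the added, controlled delay
            print("FLASH")           # the "LED"
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    for d in (0.0, 0.02, 0.05, 0.1, 0.2):  # increase until the lag is obvious
        flash_after(d)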
If you have any advice how to use computers less in my job as a developer and Devops engineer, I'll gladly take it.
EDIT: My Unicode musical notes at the start and end of the line didn't make it. :(