Latency matters (bethselamin.de)
112 points by majewsky on Aug 20, 2017 | 105 comments



Latency is the reason I just cannot use some editors, most notably Atom. You get used to it after a while, but it's not fun.

I remember when I used IntelliJ's product line for a good while and then opened up Vim. I was dumbfounded that I somehow had not noticed that IntelliJ was slow. It's fast enough - unlike Atom - but Vim was amazing next to it.


IntelliJ often shows latency, but it's not "slow"; it's doing a stupendous amount of work behind the scenes, building an incredibly rich model of your code and your application.

If you're not getting more out of IntelliJ than vim, then sure, vim will make you happier. But also then you're not getting from IntelliJ what it's trying to provide.


I haven't used IntelliJ for the last 5 years or so, so I do not know how it is today. My remark here is not about IntelliJ, but more generally about GUIs.

A GUI can be responsive and fast even when doing a lot of work behind the scenes. I'm perfectly fine with it taking its time figuring out what arguments I can use, or whatever else. But do not drop perceived responsiveness - I want to be able to type and/or click menus at all times.



Yeah, latency is not everything.

I have an HP48 calc. Even though it had a monstrous CPU for its time, the system had layers of interpretation that rendered the UX, well... sluggish. But the ergonomics and paradigm were so neat that you didn't need real time. You could keep stacking functions on the HP48 stack; you'd know in advance how it would behave, so it wasn't an issue. And counterintuitively, I enjoyed the pauses, since they gave me time to think about what to do next.


The key thing that made the HP48 work in spite of its latency is that the behavior was completely predictable. You could press a long key sequence faster than the GUI could react, but each keypress would be interpreted as if the GUI had kept up. So you could press an arrow key several times to navigate through a menu or dialog box that hadn't been drawn yet. The input event buffer was large enough to handle almost any input sequence that you could actually memorize.

On a PC, the latency is less predictable both in when it occurs and in how it is handled, so when a window is unexpectedly slow to appear, you have to notice and wait rather than continue typing, otherwise your input may be sent to the wrong window.


Agreed


For what it's worth, PyCharm (and I assume the other IntelliJ IDEs) has added zero-latency typing [0] (that blog post mentions rolling out the feature in an Early Access build, but given that semantic highlighting is in normal PyCharm now, I would imagine zero-latency input is as well).

0: https://blog.jetbrains.com/pycharm/2016/12/pycharm-2017-1-ea...


That reminds me of another thing that I noticed. My vim gets really sluggish (as in, 1 fps and worse) when I have a Firefox window on the other screen showing a Concourse pipeline [1]. If I change to a different tab, vim is suddenly fast again.

[1] https://concourse.ci


Hm, when did you last try atom? I just tried picking it up coming from vim in a terminal and haven't really noticed speed as a major difference. Speed where, in how quickly characters show up? Or in doing other things like navigating around?


Having used vi over low-speed serial links means that I can handle latency just fine. Also, hjkl were noticeably faster because they sent fewer characters over the line compared to arrow keys, which used escape codes...


Funny, I've noticed just the opposite using vim on a desktop or laptop, and it's always driven me nuts.

If I'm editing a file and type several repeating j or l commands (e.g., 'jjjjj' in command mode), there is noticeable latency between the last movement command and the last cursor movement.

I get no latency, however, when performing the same commands using the right or down arrow keys.

The 'h' or 'k' commands have no latency, either, just 'j' and 'l'.

I've been using vim for years and still find it frustrating. I haven't noticed the same problem with other vi clones (e.g., nvi or elvis).


I don't notice any difference with Spacemacs' evil-mode (a vim mode)


I know this isn't what you're responding to, but... I do notice the difference between vim (or neovim) and spacemacs -- spacemacs is noticeably slower. I do still use spacemacs more than vim now, though, because its Clojure integration is better.

(aside-rant: evil-mode is also not really a substitute for vim -- when you say you're a vim user, people often ask "why not just use the vim emulation in editor-X?" and the reason is that vim emulation has never, in my experience, been as good an experience as vim; things always work subtly differently)


Having used vi for 25 years and vim for 20 years, I'm going to say that evil-mode is the first vim-emulation editor mode that was close enough for me.

I think there were two or three things that weren't right for me out-of-the-box, but 15 minutes and 30 lines of elisp later it was fixed. I really need to maintain a fork or something that is as much like vim as possible, but with options for enabling specific emacs features, because everyone has some things that emacs does that they like better than how vim does it.

My personal example: s/foo/bar/g on a line containing "foo Foo" becomes "bar Bar" by default in emacs, which I find to be amazing, but is definitely not how vim works.


Yeah, it's minor differences, but they add up. Maybe it's time I learned a little elisp and fixed it for myself; I really like spacemacs aside from the (minor) annoyances caused by not being vim.


What OS?


I've experienced this on every version of Debian or Ubuntu I can recall using.


Do you experience the same thing with a blank config?


do you have a mapping of jl to esc?


Alan Kay once said something like "There should no more be a delay between pressing a key and something happening on a computer than on a piano."


Beautiful back-of-the-envelope exposition on the latency of playing the piano: http://northernsounds.com/forum/showthread.php/37769-Acousti...


Church organs, on the other hand, take some getting used to.

Even for solo performance, but especially when performing with an ensemble or choir.

https://music.stackexchange.com/questions/30454/how-does-a-p...


That I can get behind, even if it is just a busy icon showing.


I can't help but think that since he's measuring his own perception at the ms level - for a new program he's written himself - that it would be easy for unconscious bias to creep in. To really know the truth about whether latency matters in this case, a better experiment is probably needed.

Not that I think he should bother doing this, I'm just being the fun police. :)


I don't really know how to eliminate my own biases here, but a quick test shows that the new prompt feels very much like the old one if I add a little latency manually:

  PROMPT="\$(prettyprompt \$?)"
  PROMPT="\$(sleep 0.04; prettyprompt \$?)"
With the second one, I can always see a frame or two where the cursor is already on the next line where the prompt will go, before the prompt actually shows up.


Presumably the test would be whether he can, in a blind experiment, reliably discriminate between them.


Yeah -- seems like it'd be a bit tricky to set up, but would be super interesting to see the results.

edit: Tried something out: https://github.com/AlecBenzer/terminal-latency Had 92% accuracy differentiating 10ms from 100ms out of 28 tries
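
For anyone who wants to try the same idea without the linked tool, here's a rough bash sketch (the names and the 10ms/100ms values are just illustrative): each trial randomly delays a fake prompt by either 10ms or 100ms, and you guess which one you got.

  #!/usr/bin/env bash
  # Blind A/B test: was the fake prompt delayed by ~10ms or by ~100ms?
  trials=20 correct=0
  for ((i = 1; i <= trials; i++)); do
    if ((RANDOM % 2)); then actual=fast delay=0.01; else actual=slow delay=0.1; fi
    read -rp "trial $i: press Enter to run a command..."
    sleep "$delay"; printf '$ \n'            # the simulated prompt appears here
    read -rp "was that fast or slow? " guess
    [[ $guess == "$actual" ]] && correct=$((correct + 1))
  done
  echo "accuracy: $correct/$trials"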


Nice!


No, you can easily measure this stuff with "time". It was 75ms vs 3ms for Python vs Go... this is for a custom prompt he wrote, so this is the time it takes to get the new prompt every time he pressed return. I don't think it's that big a deal; if it were navigation keys in a text editor it would be a big deal, but when you hit return you are usually executing something much longer than ~50ms anyway, so it's not an overhead that's going to multiply that much.
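
Concretely, something along these lines, using the 'prettyprompt' name from the snippet upthread (substitute whatever your own $PROMPT actually runs):

  # one invocation, as the shell runs it on every Enter
  time prettyprompt 0
  # or average over many runs to smooth out noise
  time (for i in {1..100}; do prettyprompt 0 >/dev/null; done)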


I just realised that may have come off as sarcastic... I was referring to "time" as in the Linux CLI tool, not the concept :P


300 baud --> 1200 baud. Now there was a noticeable difference.

And yet, many years later, I had an impossible time explaining to the more pointy-haired end of a large development division why IBM Rational products might not work so well when accessed from halfway around the world. (They are/were very "chatty" between client and server, and the cumulative latency of all those round-trips simply killed you.)

Latency's still a sticking point, and a lot of people don't understand/believe it until they see it for themselves.

P.S. It may sound like I'm talking about something else -- apples and oranges. But no. What I mean is, in many domains, people don't get latency until they experience it.

I suppose that decades ago, that typewriter was just fine. Until you got fast enough. Then, as the stuck key mashes and/or other physical limits started to cap and impact your performance, you realized what people meant about "better" mechanisms.

I wonder whether in some cases, people like musicians have a better natural perspective on it. Domains where it's more readily apparent and dealt with.

Sorry, my mind's a bit all over the place, today...


Ugh; one site I visited had the requirement that all developers work from ClearCase dynamic views. The server (yes, singular server) was in a different building.

The developers had turned build-parallelism up to 99 simultaneous jobs to try to keep the CPU busy while waiting for the network, but a build was still only using about 10% of a single CPU. An incremental build would take over a minute for dependency checking, and a full rebuild would take hours.


> What I mean is, in many domains, people don't get latency until they experience it.

Exactly. That's also precisely what I was going for in the article.


From "Russians Engineer a Brilliant Slot Machine Cheat—And Casinos Have No Fix" [0]:

> Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative’s phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.

> "The normal reaction time for a human is about a quarter of a second, which is why they do that," says Allison, who is also the founder of the annual World Game Protection Conference.

[0]: https://www.wired.com/2017/02/russians-engineer-brilliant-sl...


Sometimes I wonder if it would be called cheating if you could do the math to figure out the PRNG parameters in your head, then calculate the next known timestamp. It seems similar in premise to Ethereum contracts, where an "exploit" is technically just following the contract.


The reason they sell monitors at 144 FPS and now 256 FPS is that people can see and react that fast.

Sure those are generally for video games. But pretending <Xms is the end-all-be-all of human perception is arrogant.


It's not sold, but a lot of research seems to indicate that, for certain sensory inputs, you can be trained to react faster than you can consciously perceive. Baseball players reacting to pitches is often used as an example. Certainly high-level video game players could be in that category as well.


I'm working on an AR headset and we've run some experiments with the human eye. I can barely notice the difference for motion above 120 fps. But my associate is a retired fighter pilot and he can react to motion at ~280 fps.


There's got to be something else going on there.

280 frames/sec is about 3.5 ms/frame. Signals don't even leave the retina that fast, let alone propagate to brain areas linked to perception and action.

Perhaps the higher frame rate makes the motion smoother, which allows the pilot to estimate and extrapolate an object's motion more accurately, or something like that.


The visual system does not work in frames (well, on a fundamental level I suppose it does, with individual photons and nerve impulses being the smallest units, but that is not a limit in practice). The information the retina captures is streamed continuously to the brain, so higher FPS provides a higher temporal resolution for the visual experience.

I don't think a fighter pilot inherently reacts faster because of the resolution, but he is trained to detect motion in his peripheral vision. Perhaps he notices the smooth motion more than people who rarely train that part of their visual system. It would be interesting to see how a trained and experienced hunter would compare, as they often use peripheral vision as well.


My point was that 3.5ms frames are close to that limit.

The nervous system is pretty slow. Each action potential takes about a millisecond, and is followed by a refractory period of another 1-2 milliseconds or so. This limits the speed of transmission in the nervous system generally.

The visual system, specifically, is even slower. There's a fairly complicated electrochemical cascade that turns photons into electrical impulses in rods and cones, and it's not fast. Drum (1982) describes several attempts at measuring the latencies of cones. The exact value depends on several factors like adaptation levels and the visual stimulus, but they're all in the tens of milliseconds.


I have no doubt there is a latency, but I was trying to say that the stream of information is not discrete. The stream is continuous, so a higher frame rate means that more information will flow to the brain because the temporal resolution is higher.

Are you saying that the rods and cones only "sample" every so often?


No, in that there is no master clock, shutter, or anything like that to separate the visual input into discrete frames.

Yes, in that phototransduction is a sluggish chemical process. The impulse-response functions of photoreceptors are pretty well-known (e.g., Figure 1 here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1189952/?page=5). The tiny square wave at the bottom is the visual stimulation; the other traces are the responses of individual photoreceptors.

Figure 4 of the paper (p. 689) shows the response to a pair of flashes. In the bottom-most trace, you can see that the photoreceptor "misses" the second visual stimulus because the responses evoked by the first and second stimuli overlap in time and interfere.

I'd say that the visual system does get a continuous stream of inputs, but its output is low-pass filtered (and clipped, rectified, etc) so that faster inputs are not always reflected in its output.


That is interesting. When the stimulus is long enough, you get full resolution, but since the receptors are not synchronized I would expect the eye to react to frame rates faster than the response time of the receptors, at the cost of resolution.

It is indeed an interesting topic, thanks for the pointers.


Smooth motion is definitely a big factor. Just look at things in terms of how fast something on screen (eg. the cursor) is moving in terms of pixels per frame.

Moving a cursor horizontally across a 4k display in a relatively slow one second is 64 pixels per frame at 60Hz, which is significantly wider than the cursor itself. At 144Hz it's about 26 pixels per frame, and at 280Hz it would be about 14 pixels per frame.

Even if you have a low-persistence display, you need to have a pretty high refresh rate to draw smooth motion. Hardly anything needs to be animated all the way down at 1 pixel per frame, but hardly anything currently comes at all close to that ideal strobe-free smoothness.


Actually, human eyes don't have a "frame rate" per se, or if you like, there are different frame rates depending on the feature - and on how far removed it is from the retina. Essentially, the retina itself can detect >300 Hz, then there's a bunch of processing as you go back along the optic nerve that causes that number to drop for higher-level features.


Any chance you have a source for that?

I'm legitimately curious, since I've never heard anything like that.


In my hardcore gaming days using a fantastic Hitachi CRT monitor I did some basic experiments with refresh rates.

My subjective results were somewhere in between the values you have as an example. I stopped noticing significant difference somewhere north of 160Hz, and at 200Hz they were barely perceptible.

Very interesting to hear that someone has actually been measuring this in a more controlled way than I did some 18 years ago.

It's also quite interesting how reluctant people are to even acknowledge the existence of something they themselves haven't noticed. Almost everyone dismissed my eyewitness account simply because they had read/heard that the eye can only notice <some number between 20 and 60> frames per second, and thus I couldn't possibly see any difference in motion above that threshold.


How do you test that; can you measure how you react to motion at 120fps vs 280fps, or is it just a feeling?


It's not a feeling; we take measurements. It's like measuring your reaction time: a few of the experiments are on screen and some are analog. At the end you have a measurement in units of time, and you deduce the fps from that.


In FPS games there are people called "stat junkies"; the name describes the hobby well. You try to play for the best stats: headshot ratio (how often you get headshots), KDR (kill/death ratio).

You could likely data-mine stat sites for before/after comparisons around purchases of very high refresh rate monitors.

Of course there may be some bias to try harder once you drop >$400 on a hobby. In my own subjective experience, 60 -> 144 FPS was night and day. It roughly ended up being a 50% perf improvement.


The situation is a bit different with video games.

Did you notice how movies always look smooth at 24 FPS while games tend to look horrible at the same frame rate? That's because, unlike with real cameras, games render frames with a zero exposure time (no motion blur). You can also call it time aliasing. Modern 3D engines use tricks to alleviate the problem, but it is far from perfect.

The result is that you need a much faster framerate than what is normally perceivable in order for fast-paced games to look smooth.

As for latency, even at 60 FPS, the resulting 16.7ms is well under human reaction time. It matters in competitive gaming because what counts is not your gaming experience but how much faster you are than your opponent.

Also note that VR has its own set of problems and requires even faster refresh rates and lower latencies. That's because the brain coordinates vision and head movement and any discrepancy feels weird.


Speak for yourself, 24fps looks super jerky to me. I just get used to it since I don't have to control it (not to discredit your comment, what you said is accurate)


In the old days, I was told 30 fps was the holy grail of 3D gaming; twice the frames wouldn't add anything. You would need an additional 3dfx card (Voodoo) to do it! All the magazines started pushing out this answer, with benchmarks showing which cards hit 30 fps or above.

One or two years later, it turned out that because humans use both eyes for vision, we need double the frame rate; 60 fps became the key figure and the limit of human perception. All first-person shooters should try to hit 60 fps. That was the era of Doom, Half-Life, and Counter-Strike.

60 fps stayed settled for a long time. No one was discussing anything more. (Or we did, but we stayed at 60 fps; Doom 3 even tried to cap the frame rate at 60.) We needed 120 fps or more in AR for an immersive experience, but I thought that was only for AR.

The iPad Pro's 120Hz screen really was similar to when I first saw and used a Retina screen: the wow factor. For a long time I thought 60 fps was enough, and that the sluggishness of computer UIs was all a software problem.

The conclusion is, we may or may not be able to react to 120 fps timing, but we definitely feel the difference. And as a matter of fact, once you know it is a frame rate problem, you can see how 120 fps is still not quite enough. It is like the early days of 330ppi Retina smartphone screens: not quite up to an imperceptible difference yet. A lot of people may not agree that this latency and slight sluggishness matter (early Android users didn't think it was a problem), but I like where all these directions are going; latency is now taking a front seat in optimization.


For a helicopter simulation I was told 24 fps was the minimum to be able to handle hovering. Yes, once I had a computer able to run a 3D flight simulator at 30 fps, I was finally able to hover! The mind needs the cues from the eyes since the sense of balance is just too slow. And if it's a PC simulator there is no balance input at all to help you.

Competitive games have long been running full speed, not locked to the screen refresh because of the latency in screens. You don't want to wait for the screen to display something that is already ancient history.

(In Quake 2 you could jump higher if your computer could run more than 200 fps, so for a while it was popular to look down at the ground before jumping up onto crates.)


I can't comment on 144 Hz. (When I got my current monitor, I had to choose between 4K and 144 Hz, and 4K was more important for me.)

But the thing that's most visible to me personally when comparing 30 Hz vs. 60 Hz is not so much latency, but the smoothness of quick movements (e.g. when you turn the camera really fast). Might be different for a serious FPS gamer, though; I mostly just play Minecraft.


I overclock my 4k display from 60 to 65Hz. The difference is noticeable.


My Atari could switch between 50 and 60 Hz because of US/EU TV standards. The difference was very noticeable, especially if you didn't look directly at the screen. The flicker is very hard on the eyes at 50 Hz and already much better at 60 Hz. Later I had a CRT screen that could be run at 75 Hz with some tweaking, and that was very nice.

The next screen I buy will be something higher, but right now I'm back at 60 Hz with an IPS screen and I'm not totally happy with the flickering. I haven't tried overclocking yet; I thought flatscreens were locked to a certain refresh rate.


Latency is one of the reasons why I still use xterm and bitmap fonts. The other being font crispness on standard ~100dpi displays.


The crisp shape of the text often makes me nostalgic for systems pre-ClearType on Windows and pre-GNOME 3 on Linux.


I agree. A while ago, I was so fed up with how long my prompt took to render, I went through my zshrc and cleaned up a bunch of stuff to make it faster:

https://github.com/stapelberg/configfiles/compare/5ccd3cbd38...

The general techniques used, in order of preference, are eliminating/skipping work, lazily deferring work until later, and replacing external commands with zsh builtins.
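
To give a flavour of what that looks like, here's a hand-wavy zsh sketch of the first and last techniques (not lines from the actual diff; rbenv is just a stand-in for any slow initialization):

  # replace an external command with zsh's own machinery:
  #   before: PROMPT='$(basename "$PWD") %# '   # forks basename on every prompt
  PROMPT='%1~ %# '                              # prompt escape, no fork at all
  # defer expensive work until it is actually needed:
  rbenv() {
    unfunction rbenv                   # drop this stub...
    eval "$(command rbenv init -)"     # ...pay the slow init exactly once...
    rbenv "$@"                         # ...then run the real thing
  }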


Viewing your diff made me wonder if there is a market for custom-compiled shells, where things like the custom prompt or termtitle are baked in at compilation time.


How do some applications practically get to latencies smaller than 16.6 ms? That can only be possible if the character typed is being rendered directly into the GPU buffer that is currently being drawn, right? Or does the screen used for testing this run at a higher refresh rate than 60 Hz? Or does 'latency' not mean the time between the hardware keypress and the character appearing on screen?


16.6ms is the maximum lag between the monitor receiving the framebuffer and drawing it - the average would therefore be half of that, and the minimum would be essentially zero.

But, that's ignoring all the other little delays in the system. For instance, monitors have a delay of a few milliseconds.

In short, this is a massive oversimplification.


Python startup latency is horrible, and unfixable. It got much worse in Python 3 because it loads many more modules (think of all the Unicode crap).


Yeah, I did a little test on my system where I wrote a program in C, Python, Perl, and Bash that just prints hello to the console then timed how long it took to fork/exec from bash. C takes ~0.5ms, Perl is ~1.5ms, Bash is ~2.5 ms, and Python 3 is ~35ms.

Kind of sad, because there are lots of places where a Python script would be a nice way to solve a problem, but I can't use it because it adds an annoying delay.
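
A rough way to reproduce that kind of comparison, if anyone wants to check their own machine (absolute numbers will vary a lot):

  # time 100 hello-world runs of each; divide the total by 100
  time (for i in {1..100}; do python3 -c 'print("hello")' >/dev/null; done)
  time (for i in {1..100}; do perl -e 'print "hello\n"' >/dev/null; done)
  time (for i in {1..100}; do bash -c 'echo hello' >/dev/null; done)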


I think Python's primary use case has now shifted far enough from scripting to server-side web development that ~35 ms of startup time to deal with Unicode better is a worthwhile tradeoff for the majority of Python users.

It does raise an interesting question about whether there is now an opening for a new scripting language to develop. I don't think any of the well-known languages developed in the last ~10 years have been what you might call "scripting" languages.


I am not knocking python here at all, just pointing out that it doesn't work well for some tasks. I wonder if the startup latency could be fixed though?


In what use case, is a 35ms startup time an issue?


It's not a dealkiller for most things, but imagine the opposite:

What if startup time for python scripts were 0.1ms? In this case you could imagine writing throwaway scripts and just spawning a new process every time you wanted a result. No need to build a shell to (say) hold onto a DB connection or things like that. Just write the core business logic.

That being said, for the most part, I/O is more than 35ms.

If you have a "serious" Python program, though, you easily get into seconds-long startup from dependencies. There are some libraries that do not pay much attention to this, which makes things like web servers a bit more frustrating. You can no longer spin up a server when the request comes in, but must do it beforehand.


There are plenty of times a tool is called in a tight loop ($PS1, xargs, or a ... | while read). Paying 35 ms over and over again adds up quick.
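
A quick way to feel that multiplication is to compare a thousand tiny Python processes against a thousand of something that starts essentially instantly (a sketch, not a rigorous benchmark):

  time (seq 1000 | xargs -n1 python3 -c 'pass')   # pays the interpreter startup cost 1000 times
  time (seq 1000 | xargs -n1 /bin/true)           # mostly just fork/exec overhead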


In TFA.


I am well aware of the article, but it's nothing but "it feels magical".

Perhaps, since I haven't had any workflow lately which blocks 30-50ms on my input, I'm not the right person to comment on this. That said, the article uses Python for a script that will be run from scratch over and over. Perhaps an interpreted language with a known slow start was the wrong tool right from the start?


Well yeah, that's the point. Plenty of people argue that Python is good enough for any job (except maybe kernel modules, but hey, people even wrote kernel modules in Java, with the obvious consequences). Plenty of people also argue that a runtime of 50ms is not worth optimizing anymore. And I disagree.


Could you expand on "all the unicode crap"?


This guy must not have noticed as he added a thousand papercuts to each terminal prompt. Premature optimization is one thing, but sometimes it is a good idea to compare against a baseline.


It's not like I'm the only one adding papercuts to the stack. ;)

From the top of my head, each keypress goes at least through the keyboard driver, X11, possibly the window manager, the terminal, the line discipline, the terminal again, font rendering, compositing, and then display. It only takes one of these steps to mess up for latency to go up.


The latency was in constructing the 'prompt' string (via his custom program). The actual input latency in the stack is both 'constant' and rather difficult for an end user to directly affect. Thankfully the millions of human-hours put in to the hardware and software stack have made that quite efficient.


> The actual input latency in the stack is both 'constant' and rather difficult for an end user to directly affect.

The article starts with a link to https://danluu.com/term-latency/, which disproves both assertions.


As a semi-related followup, since my blog made the frontpage for the first time, I looked at whether the HN crowd is visible on the monitoring charts: https://blog.bethselamin.de/posts/latency-matters-aftermath....


And a semi-related question: is there some particular reason you've replaced a sensible, directly-proportional, and useful vertical scrollbar with an (IMO) insensible, literally orthogonal, top-placed horizontal scrollbar ... which cannot actually be used to control location within the document?

I've noted the demise of the vertical scrollbar before. This is the first chance I've had to ask someone who's apparently implemented a horizontal one directly.

I'm quite curious as to the rationale.

Previously: https://ello.co/dredmorbius/post/0hgfswmoti3fi5zgftjecq


For those who are fans of Powerline but fed up with the latency of Python, you can use powerline-server OR you can take a similar cue and use:

https://github.com/rdnetto/powerline-hs


I decided to rewrite https://github.com/banga/powerline-shell in go after reading this article yesterday evening (almost done, only mercurial, svn and fossil are missing), and the performance is amazing.

The latency went down to 200µs (except when git is used, then it’s ~4ms), and it’s fast enough.

Here’s a video: https://s3.kuschku.de/public/2017-08-21_16-43-41.mp4

I’ll put it on https://github.com/justjanne/powerline-go later today, once I’m done.


I like powerline but it really is slow as molasses -- even after I disabled a bunch of the (useless) default stuff. powerline-daemon was supposed to help but, if it did, it wasn't really noticeable.


Powerline-hs is pretty quick.


Along similar lines, I noticed some latency to Apple Terminal's prompt and made this to fix it.

https://github.com/twhb/terminal-prompt-perf


See here how to actually check for latency: https://pavelfatin.com/typing-with-pleasure/ :)


Check out mosh! It's an amazing tool for dealing with that lag.


mosh is definitely on my list of "stuff that I need to check out", but the things that I showed in the article were all on localhost.


With an arduino and an LED, it's simple to test the limits of one's perception. Copy one of the hello world LED blinking programs (basically a square wave to a GPIO pin) and modify the frequency of the blink. I see a rapidly blinking light around 15ms period, and a solid light around 10ms period. To me that implies that my human sample rate is about 12ms, although there could be other factors at play, such as capacitance in the circuit.

Edit: s/frequency/period/


That test does not really measure the ability to notice latency. The simplest latency test I can imagine would be a system where you press a button and an LED lights up (or a sound is played). Do that in pairs where randomly either the first or the second trial has some latency added in. Then ask the subject which one seemed to respond faster. Rinse and repeat about a gazillion times with different amounts of added latency.

The problem with such setup is that it relies on conscious notice of latency. It is very likely that the boundary where latency begins to impair performance is far lower than where you actually notice it.

A more sophisticated test would be to measure e.g. typing speed vs latency. You'd probably need some customized hardware to get low baseline latency so that you can then add latency in a controlled way. Even then I imagine that quite many trials would be needed to make any conclusions.


Another way would be moving the mouse and seeing the cursor lag behind. Old wireless mice could have some really bad lag. It feels somewhat like holding a garden hose and trying to hit different spots on the lawn with the water.

There were people who just didn't notice anything, but everyone who played Quake halfway decently had real problems with those. Some went so far as to only use wired ball mice, not even ones with an optical sensor.


The thing you're measuring here is called the flicker fusion threshold--essentially the transition point between seeing on/off/on/off and a solid light--and is a little different from latency. Latency would be more like the delay between the LED turning on/off and your perception of that change. That delay is at least 100ms, and often a bit longer, though it depends on a lot of factors.


If you want to test latency perception with an LED, here's what you would do:

1. Connect an LED and a button to the Arduino.

2. Program the Arduino to have the LED flash when the button is pressed.

3. In the program, add a constant delay between the button press being registered and the LED flashing. Increase this delay until the LED starts flashing noticeably after the button is pressed.


Check out http://isitsnappy.com (not my app, I've just used it before), which lets you measure latency empirically. Set up a loop where you press a key as soon as something appears on screen, and that will give you your end-to-end latency.


Move the board, or sweep your eyes from left to right across the table it's on. You'll detect the individual events, with a sense of order. Your eyes are edge-triggered with a relatively slow reset. I think you'll find you can detect much finer-grained deltas in time.


ms is a unit of time, so your numbers denote "period", not "frequency" (unit: hertz).


This is a good point, but remember that if you're in the habit of using computers too much, you shouldn't want lower latency. Higher latency is less addictive if you're trying to cut back.


> in the habit of using computers too much

If you have any advice how to use computers less in my job as a developer and Devops engineer, I'll gladly take it.


I realize that smartphones are computers, but you might find the SSH client "Prompt 2" by Panic on iOS useful for the devops part. Avoids the need to whip out the laptop to SSH in somewhere.


Yeah, I'm thinking more about social networks and games. I've never heard of anyone addicted to the command line.


Might as well face it, you're addicted to bash

EDIT: My Unicode musical notes at the start and end of the line didn't make it. :(


hear it now.


Ken M, is that you?



