
Just how fast is too fast when it comes to web requests? - weinzierl
https://rachelbythebay.com/w/2019/11/12/post/
======
FooBarWidget
There _is_ such a thing as too fast. Flight ticket comparison websites like
skyscanner.com insert fake progress delays ("scanning airline website") to
make it seem like they're spending a lot of effort to do important work for
you. Research has shown that, without that delay, users trust such websites
less, or value them less, because the instantaneous response time is equated
to less valuable work.

~~~
Kaveren
I hear this every time TurboTax gets brought up. I think it's a lot healthier
to foster good relationships between users and tech so that they don't have to
distrust instantaneous actions.

As an aside, I'm also suspicious of research that shows ~100ms as the
threshold for "instantaneous" action, because in video games like FPS shooters
most players seem to dislike playing on ping that high.

~~~
TeMPOraL
I have the same view. 100ms is fine for actions that _require_ a round trip
to the server, in particular when you hide the latency the way multiplayer
games do. But for things that are just local UI operations, if you're taking
longer than 16.7 ms, you're doing something wrong. A typical videogame can
update the whole screen _and_ all game logic in less than that.
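
The 16.7 ms figure is just the per-frame time budget at a 60 Hz refresh rate; the arithmetic generalizes to other rates:

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to update and render one frame at a given refresh rate."""
    return 1000.0 / refresh_hz
```

At 60 Hz that's about 16.7 ms; a 144 Hz display leaves under 7 ms per frame.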

~~~
londons_explore
Please pass this memo on to people who feel the need to animate UI elements to
hide lagginess, and then animate into and out of a loading screen (e.g.
loading an app on Android).

------
Deimorz
Hacker News itself is a good example of a related topic: voting here _feels_
fast because the vote button disappears as soon as you click it, but it's
actually very slow.

If you open your browser's Network panel in dev tools and vote on something,
you'll see that it sends a request, gets back a 302 redirect, and then does
another request to load a whole new copy of the page you were voting from in
the background (and then just discards it). At least from my location, it
consistently takes about 1.2 seconds for each vote to finish, even though it
feels instant while using the site.

One consequence of this is that if you vote on multiple comments quickly, some
of your votes are probably being lost with no indication. If you try to vote
on something else before the first vote has fully finished, the second one
gets a 503 error, but there's no indication of this at all.

It happens to me often - I read a good reply comment, vote it up, and then
immediately vote up its parent (which I've already read) as well, since it
resulted in that good comment. If I come back to the page later I'll notice
that my vote on the parent didn't go through, and if you open the Network
panel and try this, you'll see it - the second vote 503s if your second click
was before the first one finished, but the site acts the same whether it
failed or not.
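
One client-side mitigation (a sketch of the general pattern, not HN's actual code) is to serialize votes so a rapid second click always waits for the still-in-flight first request; modeled here with asyncio, where `send_vote` is a hypothetical stand-in for the real HTTP call:

```python
import asyncio

async def send_vote(item_id):
    # Stand-in for the real HTTP request (hypothetical).
    await asyncio.sleep(0.01)
    return item_id

class VoteQueue:
    """Serialize votes so a second click never races the in-flight
    first request (the race is what triggers the silent 503)."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self.sent = []

    async def vote(self, item_id):
        async with self._lock:  # wait for any in-flight vote to finish
            self.sent.append(await send_vote(item_id))
```

With this, two near-simultaneous votes both complete, one after the other, instead of the second being dropped.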

~~~
TeMPOraL
That's a good observation. I've noticed it happening too, since I have a
similar upvoting pattern: I tend to read a whole subtree, and then go back up,
rapidly upvoting comments in it I considered insightful. I've noticed that the
votes sometimes don't register, which is why I periodically reload the comment
thread and reupvote comments.

'dang, is there any chance this gets improved?

~~~
Deimorz
I tried emailing him about it a couple months ago, after I made this comment
about a different downside of it (bandwidth usage / response size):
[https://news.ycombinator.com/item?id=20854662](https://news.ycombinator.com/item?id=20854662)

He didn't consider it an issue worth fixing, and didn't reply to my long
follow-up email trying to explain how wasteful and inefficient the current
voting process is and how simple it would be to improve. It seems unlikely to
change.

------
narsil
This is how I felt about Algolia's search when I first enabled it for our
Vuepress site at
[https://developers.kloudless.com/guides/enterprise/](https://developers.kloudless.com/guides/enterprise/)
(the search bar at the very top). I assumed it had loaded some kind of index
in memory via JavaScript since the XHR requests take < 30 ms (!) from my
location in San Francisco, which is pretty much instant. That's faster than
the delay between my keystrokes.

~~~
benbristow
Returns in the same speed from Scotland (Glasgow) too. One request was as low
as 16ms. Impressive.

------
neiman
I wrote a comment system for articles once that was super-duper fast.

The result was that it turned into a chat, so we had to add a "fake delay" for
people to treat it as a serious comment system.

~~~
j88439h84
That's hilarious.

------
ricardobeat
Here's a classic from NNG on 'UX time scales':
[https://www.nngroup.com/articles/powers-of-10-time-scales-in...](https://www.nngroup.com/articles/powers-of-10-time-scales-in-ux/)

You can infer from there that anything around or under 100ms will feel like
direct manipulation, interpreted by the person in the post as 'no request was
sent to the server'. A non-technical person might not reach the same
interpretation; it will just feel 'different', or they won't notice what
happened and will submit multiple times, if there is no success message.

You can also trace it back to one of their 10 heuristics: visibility of system
status. If users cannot perceive a change because it happened too fast, the UI
has failed and users don't know what happened. One of the reasons some
websites add artificial delays, as mentioned in other comments, is not only to
signify 'work' being done, but that flashing a spinner for a split second is
also a bad experience. You're better off normalizing every action to take at-
least-one-second, and ensuring the state of the system is always clear.
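
One way to implement that "at least one second" normalization is to pad any action that finishes early, which also avoids the split-second spinner flash; a minimal asyncio sketch (the function name and default are mine, not from any particular framework):

```python
import asyncio
import time

async def with_min_duration(action, min_s=1.0):
    """Run an async action, then sleep out the remainder so the action
    never appears to complete in less than min_s seconds."""
    start = time.monotonic()
    result = await action()
    remaining = min_s - (time.monotonic() - start)
    if remaining > 0:
        await asyncio.sleep(remaining)  # pad out fast completions
    return result
```

A spinner shown for the whole `with_min_duration` call then has a predictable minimum on-screen time, whether the underlying request took 20 ms or 900 ms.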

------
arkadiyt
The server hosting this website doesn't support TLS 1.3 - if it did then you'd
have 0 round trip time (0-RTT) session resumption and it would be nearly
identical to the http latency.

~~~
Thorrez
Do servers automatically support 0-RTT? I thought generally you have to
explicitly enable 0-RTT because it's vulnerable to replay attacks. Generally
you would only enable it for idempotent requests, and feedback is not
idempotent (unless the database explicitly rejects duplicate feedback).

~~~
Nextgrid
0-RTT is dangerous for POST requests (or any requests that modify data) unless
an idempotency token is present.
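
For context, RFC 8470 ("Using Early Data in HTTP") standardizes exactly this guard: a reverse proxy can be configured to flag early-data requests with an `Early-Data: 1` header, and the application answers 425 (Too Early) for anything non-idempotent, forcing the client to retry after the full handshake. A sketch of that check (the header name and status code are from the RFC; the function itself is illustrative):

```python
from typing import Optional

# Methods that are safe to serve from replayable 0-RTT early data.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def check_early_data(method: str, headers: dict) -> Optional[int]:
    """Return 425 (Too Early) if the request arrived as TLS 1.3 early
    data and could modify state; None means it's fine to proceed."""
    if headers.get("Early-Data") == "1" and method not in SAFE_METHODS:
        return 425  # client retries the request after the handshake completes
    return None
```

So a POST replayed by an attacker during 0-RTT never reaches the handler, while GETs keep the latency win.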

------
disposedtrolley
I've been on the implementation side of this kind of thing more times than I'm
proud of.

Hardcoded delays are especially prevalent in systems which attempt to emulate
a human operator, such as virtual assistants which are starting to replace
human live chat agents. The excuse is always UX related. Progressive
disclosure is cited a /lot/. Apparently users get a better experience when
systems pretend to be human and respond slowly, so we would hardcode delays
which were a function of the length of the response message.
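
A delay that is "a function of the length of the response message" typically looks something like this (the per-character rate and cap are illustrative numbers, not from any real assistant):

```python
def typing_delay_s(message: str, per_char: float = 0.03, cap_s: float = 3.0) -> float:
    """Fake 'typing' delay proportional to reply length, capped so a
    long answer doesn't stall the chat indefinitely."""
    return min(len(message) * per_char, cap_s)
```

A 50-character reply waits about a second and a half; anything past 100 characters just hits the cap.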

------
mcqueenjordan
No such thing as too fast. If a user is confused about the interaction because
of how fast it is, then that's a UX problem to fix.

Speed is one of the most important properties of exceptional user experiences.

I'm building a developer tool and I'm ruthlessly optimizing for speed. Waiting
20ms for a CLI command versus 100-300ms is a huge difference.

~~~
afiori
> If a user is confused about the interaction because of how fast it is, then
> that's a UX problem to fix

In some cases the confusion comes from a "did this do any work at all"
question, like git branching compared to svn for big projects. As others have
brought up, an instantaneous reply on airline comparison sites can cause
concern about how deep the search was.

A similar paradox is with psychiatrist hourly rates or the placebo effect. The
high price (delay) is part of the therapy (interface).

~~~
TeMPOraL
Having your program sit in the chair and twiddle its thumbs to make it seem
like it's working is just one possible solution. It's tragedy-of-the-commons-
esque, because it perpetuates the myth that a given category of work has to be
slow.

Alternatives would include: results speaking for themselves, or saying
something like "Searched all 124,568,902 connections", or otherwise
reassuring users that the work has been done without making _them_ pay for it
with time.

------
z3t4
You need to tell the user that the message has been sent. If nothing happens
when you press the button (just an ajax call, no refresh) the user will think
there is something wrong.

The reverse can be used in a UI to tell the user that something went wrong.
For example, in a window pull-down menu, don't hide the menu right away: do
what the user requests, then hide the menu, so that if the request didn't
complete, the menu will still be visible.

~~~
rachelbythebay
Hi, the web page in question has always had something to say what's going on.
It's not beautiful but it does tell you that things are happening.

Right before it kicks off the call to the server, it lights up something to
say "Submitting feedback", and as soon as it finishes, it flips that to
"Feedback saved" (which now has a time attached).

Odds are, most people have never actually noticed the first message, since it
is quickly replaced with the second.

The messages appear just to the left of the button which was just clicked (and
right under the text field). So, in theory, it's right by where your eyes are
looking anyway.

But, here we are.

~~~
z3t4
Aha. You are on to something then. Your setup is old school. Maybe advances
in network and computer speed have made it too fast :P

------
jobigoud
In a hobby desktop app, I show a splash screen before the UI is fully loaded.
After optimizing startup time, it's now at a point where a hot start is very
fast, less than 300 ms, to the point that you can't really read anything on
the splash screen. Cold start still takes a second or so.

Is there a best practice here? At which point do you stop showing a splash
screen? I've seen applications where the splash screen lingers even after the
UI is loaded, which seems weird to me and gets in the way of getting things
done.

~~~
KarlTheCool
Probably a skeleton screen would be better. Show an empty version of the usual
ui but with blocky placeholders as stuff loads in.

[https://uxdesign.cc/what-you-should-know-about-skeleton-scre...](https://uxdesign.cc/what-you-should-know-about-skeleton-screens-a820c45a571a)

~~~
jobigoud
Thank you very much for introducing me to this concept! This gives me some
ideas on how to organize the loading. There is definitely stuff I do in the
background of the splash screen that could be done later, after a simplified
version of the main window is loaded.

------
spondyl
I remember attending a talk about speeding up web UIs once and it got to Q&A
time.

I’d read some article about how if you respond too quickly, users can begin to
doubt that any work is really being performed, and I’ve experienced that
feeling a handful of times over the years myself.

Anyway, I asked the speaker about that and everyone just kind of laughed; it
does seem a little absurd on the face of it.

I guess it’s also pretty far from most people’s minds given the web is caked
in unnecessary bloat a lot of the time :)

------
jchw
It’s not that people are used to slow stuff, even if they are. It’s that there
is a psychological “magic number” whereby something is short enough to seem
instant.

There’s different values and tons of articles about this so I’ll just link a
random one. I don’t know if there are formal studies on it but I fully believe
in the idea that there is a magic “instantaneous” feeling threshold, just from
personal experience, especially with tweaking animation delays.

[https://www.nngroup.com/articles/response-times-3-important-...](https://www.nngroup.com/articles/response-times-3-important-limits/)

------
jasonlfunk
I've run into this before too. Sometimes I've actually added a delay timer to
buttons that show loading spinners so that the spinner appears long enough for
the user to see more than a flash. Is there a better option?

~~~
hayksaakian
Another common option is a "toast notification" (see android) or other
separate confirmation message to acknowledge your action.

"Sent X" or "Finished Y!" is sufficient to distinguish a failed ajax call from
a successful button press.

~~~
raverbashing
Yes but be very careful with those. You don't want to spam the user.

I remember some years ago when Ubuntu file sync would pop a notification every
time a file was synced. That's a good example of what not to do.

------
baud147258
I feel like it's an issue we will never have on the project I'm currently
working on; all requests have a noticeable delay. I guess there are too many
layers on the back-end. And maybe some requests are totally not optimised
(like doing 20 SELECTs of one element each instead of one SELECT of 20
elements). And maybe there's not enough caching. Or maybe the things we're
caching are not the ones that matter… But first we still have to migrate off
Internet Explorer (or at least support another browser).

------
_squared_
> It's a link in the SF Bay Area, and the server is in Texas, so it has to get
> out there and back. That's at least 50 milliseconds right there when
> measured by a boring old ping.

Am I the only one surprised by this 50ms ping? I can reach cloud servers in
the SF Bay Area from Paris in 50ms - and I'm on wifi. Surely SFO-TX should
take much less time..?

~~~
Thorrez
According to this it takes 42ms to get from Paris to SF at the speed of light
in a fiber in the great circle path across Earth's surface. Ping is rtt, so
that would be 84ms.

[https://www.wolframalpha.com/input/?i=paris+to+san+francisco...](https://www.wolframalpha.com/input/?i=paris+to+san+francisco+speed+of+light)
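
The arithmetic checks out with rough numbers (the distance and fiber index here are my own assumptions, not Wolfram's exact inputs):

```python
C = 299_792_458      # speed of light in vacuum, m/s
FIBER_INDEX = 1.47   # typical refractive index of optical fiber
PARIS_SF_KM = 8_950  # rough great-circle distance, Paris to San Francisco

# Light in fiber travels at roughly C / FIBER_INDEX.
one_way_ms = PARIS_SF_KM * 1000 / (C / FIBER_INDEX) * 1000
rtt_ms = 2 * one_way_ms  # ping measures the round trip
```

That gives roughly 44 ms one way and a bit under 90 ms round trip, the same ballpark as the 42/84 ms figures, and a hard lower bound no real route can beat.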

On the topic of websites taking a long time to compute something, Wolfram
Alpha is slow. I wonder if any of that slowness is artificial like flight
price websites.

------
scarejunba
It's like if I go to the restaurant and order a steak and you bring it out
right away. I'm going to be suspicious.

~~~
ken
Good point.

“Are we to believe that boiling water soaks into a grit faster in your kitchen
than on any place on the face of the earth?”

It’s simply Occam’s Razor. Which is more likely in 2019: a webpage is fast, or
a webpage has a JavaScript bug?

------
zeristor
Nice pointer about NTP steps and time-smearing; can anyone recommend a good
website for dealing with these issues?

I may not be working on real-time systems at the moment, but I’ve had enough
exposure to them in the past that I’d like to scratch that itch.

~~~
ricardobeat
While it's a good idea to use performance.now() for its better precision* and
monotonic guarantee, it's not really a huge concern for this kind of
application (measuring time between A and B on the client). You're extremely
unlikely to experience clock skew during those brief windows, and the entire
web relied on Date.now for performance monitoring for decades.

For dealing with timestamps reliably on the client, we'll just instantiate all
dates based on server time instead.

* at least in FF it has been rolled back to 1ms resolution due to privacy/fingerprinting concerns
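
The same monotonic-vs-wall-clock distinction exists outside the browser; in Python, for instance, the equivalent of preferring performance.now() over Date.now() for measuring durations is:

```python
import time

def timed(fn):
    """Measure a call with the monotonic clock, which (unlike
    time.time) never jumps backwards on an NTP step or clock change."""
    start = time.monotonic()
    result = fn()
    return result, time.monotonic() - start
```

`time.time` is the wall clock and can be adjusted mid-measurement; `time.monotonic` only ever moves forward, so elapsed times can't come out negative.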

------
quantified
To the last comment within the article: yes, we’re accustomed to laggy
websites. Most sites have tons of chatter/bloat/trackers. Refreshing to
encounter those that don’t. Thankfully HN is fairly low-bloat itself.

------
calpaterson
Apache, CGI and a C++ handler is refreshingly old fashioned.

------
uwydr
Whoever wrote that feedback is probably an engineer at reddit.

"I didn't get any 503 codes forcing me to click send a few times, this comment
obviously didn't get through"

