Consider a user hovering over a handful of links and then clicking the last one, all within a second. Let's assume your site takes 3 seconds to load (full round trip) and your server handles only one request at a time (I'm not sure how often that's the case, but I wouldn't be surprised if it effectively is within a session for a significant number of setups). Then the link the user actually clicked would be loaded last, after all the others, which would drastically increase the perceived loading time.
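To put rough numbers on it (five prefetches is a made-up count; this is just a toy single-queue model):

```python
# Toy model: a single-threaded server works through requests in arrival order.
# Assumes 5 hover-prefetches arrive just before the real click (hypothetical numbers).
LOAD_TIME = 3.0  # seconds per request, from the example above

def completion_time(position, load_time=LOAD_TIME):
    """Time until the request at 1-based queue position finishes."""
    return position * load_time

prefetches = 5
clicked = completion_time(prefetches + 1)  # the click is queued behind the prefetches
print(clicked)  # 18.0 seconds instead of 3.0
```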
The weak spot in this reasoning is the assumption that your server won't handle these requests in parallel. Unfortunately I'm not experienced enough to know whether that happens or not, but if it does not, you should be careful not to assume that the additional server load is the only downside (which quite likely is a negligible one).
I used to use a preload-on-hover trick like this but decided to remove it once we started getting a lot of traffic. I was afraid I’d overload the server.
About your first statement, though: which server software do you use that still sends data after the client has closed the connection? Doesn't it use heartbeats based on ACKs?
I use nginx to proxy_pass to django+gunicorn via a unix socket. I sometimes see 499 responses in my nginx logs, which I believe means that nginx received a response from the backend but couldn't send it to the client because the client had already canceled the request.
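For context, the relevant proxy setup looks roughly like this (socket path and upstream name are placeholders):

```nginx
# nginx proxying to gunicorn over a unix socket; paths are hypothetical
upstream app {
    server unix:/run/gunicorn.sock;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```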
I admit I haven’t actually tested it directly, but I’ve always assumed the django request/response cycle doesn’t get aborted mid-request.
Closing a connection to Postgres from the client doesn't even stop execution.
Unless you are focusing on the word “server” and assuming it has nothing to do with the framework/code/etc., I can assure you it can be done. I’ve done it multiple times for reasons similar to this situation. I profiled extensively, so I know exactly what work was done after client disconnect.
Many frameworks provide hooks for “client disconnect”. If you set up your environment (a more appropriate term than “server”, admittedly) fully and properly, which isn’t something most do, you can definitely cancel a majority (if not all, depending on timing) of the “work” being done on a request.
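For instance, the ASGI spec delivers an `http.disconnect` event on the receive channel, so a handler can race its work against that event. A framework-agnostic sketch (the handler body is a stand-in):

```python
import asyncio

# Raw-ASGI sketch of a "client disconnect" hook: while the real work runs,
# watch receive() for the spec-defined "http.disconnect" event and cancel
# the work if it arrives first. do_expensive_work is a stand-in for the
# request's actual handler.

async def do_expensive_work():
    await asyncio.sleep(0.05)  # stands in for DB queries, rendering, etc.
    return b"done"

async def app(scope, receive, send):
    work = asyncio.create_task(do_expensive_work())
    watcher = asyncio.create_task(receive())
    done, _ = await asyncio.wait({work, watcher}, return_when=asyncio.FIRST_COMPLETED)
    if watcher in done and watcher.result().get("type") == "http.disconnect":
        work.cancel()  # client is gone: abort the request's work
        await asyncio.gather(work, return_exceptions=True)
        return
    watcher.cancel()
    await asyncio.gather(watcher, return_exceptions=True)
    body = await work
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": body})
```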
> Closing a connection to Postgres from the client doesn't even stop execution.
There are multiple ways to do this. If your DB library exposes no method to do it, there is always:
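Postgres itself exposes query cancellation as a server-side function: from a second connection, you can cancel another backend's running query by pid (the pid would come from `SELECT pg_backend_pid()` on the worker's connection; the one below is a placeholder):

```sql
-- From a second connection: cancel the query running on the given backend.
SELECT pg_cancel_backend(12345);  -- 12345 is a placeholder pid
```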
If you are using Java and JDBI, there is an equivalent cancellation mechanism, which does cancel the running query.
If you are using Psycopg2 in Python, you’d call cancel() on the connection object (assuming you were in an async or threaded setting).
So yes, with a bunch of extra overhead in handler code, you could most definitely cancel DB queries in progress when a client disconnects.
If that's not how it works, it could easily be modified to throttle how many links it prefetches simultaneously.
Every site that can use a last-mile performance optimization like this should already be serving everything from some form of cache, whether Varnish or a CDN. So in theory, availability of the content should not be the problem.