
Everybody seems to be missing this, but prefetching could actually slow down your experience, and I'd guess it will in some scenarios (i.e. not only in a theoretical situation).

Consider a user hovering over a bunch of links and then clicking the last one, all within a second. Let's assume your site takes 3 seconds to load (full round trip) and your server handles only one request at a time (I'm not sure how often this is the case, but I wouldn't be surprised if it holds within sessions for a significant share of setups). Then the link the user actually clicked would be loaded last, after all the others, which could drastically increase loading time.
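To make that arithmetic concrete, here is a rough sketch (Python, with made-up numbers: five hovered links, a 3-second response time, one request handled at a time):

```python
# Rough model: a server that processes one 3-second request at a time (FIFO).
# The user hovers over five links in quick succession, then clicks the fifth.
RESPONSE_TIME = 3  # seconds per request (assumed)

hovered_links = ["a", "b", "c", "d", "e"]
clicked = "e"

# With prefetch-on-hover, all five requests queue up in hover order,
# so the clicked link's response arrives only after the four before it.
queue_position = hovered_links.index(clicked)            # 4
with_prefetch = (queue_position + 1) * RESPONSE_TIME     # 15 seconds

# Without prefetch, the click issues the only request.
without_prefetch = RESPONSE_TIME                         # 3 seconds

print(with_prefetch, without_prefetch)  # 15 3
```

Under these (pessimistic) assumptions the prefetching makes the clicked page five times slower, not faster.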

The weak spot in this reasoning is the assumption that your server won't handle these requests in parallel. Unfortunately I'm not experienced enough to know whether that happens, but if it does, you should probably be careful and not assume that the additional server load is the only downside (which by itself is likely negligible).

It actually cancels the previous request when you hover over another link

Client-side cancellation doesn't necessarily translate to server-side cancellation.

I used to use a preload-on-hover trick like this but decided to remove it once we started getting a lot of traffic. I was afraid I’d overload the server.


I'd also hesitate to waste resources in such a way.

About your first statement though, which server software do you use that still sends data after the client has closed the connection? Doesn't it use heartbeats based on ACKs?


It doesn’t send the response to the client, but it still does all the work of generating the response.

I use nginx to proxy_pass to django+gunicorn via unix socket. I sometimes see 499 code responses in my nginx logs which I believe means that nginx received a response from the backend, but can’t send it to the client because the client canceled the request.

I admit I haven’t actually tested it directly, but I’ve always assumed the django request/response cycle doesn’t get aborted mid request.


The server is still doing all of the work in its request handlers regardless of whether the client closed the connection.

Not if the server is set up correctly.

That doesn't make sense. You can't just "config a server" to do this. Even if a web framework tried to do this for you, it would add overhead to short queries, so it wouldn't be some universal drop-in "correct" answer.

Closing a connection to Postgres from the client doesn't even stop execution.


> You can't just "config a server" to do this.

Unless you are focusing on the word "server" and assuming it has nothing to do with the framework/code/etc., I can assure you it can be done. I’ve done it multiple times for reasons similar to this situation. I profiled extensively, so I definitely know what work was done after client disconnect.

Many frameworks provide hooks for “client disconnect”. If you set up your environment (a more appropriate term than server, admittedly) fully and properly, which isn’t something most do, you can definitely cancel a majority (if not all, depending on timing) of the “work” being done on a request.
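As a sketch of that hook pattern (hypothetical handler; a `threading.Event` stands in for the framework's client-disconnect signal, which e.g. Starlette exposes as `await request.is_disconnected()`):

```python
import threading

def handle_request(disconnected: threading.Event):
    """Hypothetical handler: the framework sets `disconnected` when the
    client drops the connection, and we check it between work steps."""
    results = []
    for step in range(10):         # pretend each step is expensive work
        if disconnected.is_set():  # client gone: stop, nobody wants the response
            return None
        results.append(step)
    return results

gone = threading.Event()
gone.set()                                # client disconnected up front
print(handle_request(gone))               # None: all the work was skipped
print(handle_request(threading.Event()))  # full result: request ran normally
```

The check costs almost nothing per step, but it only helps for handlers whose work is divisible into steps; a single long-running call needs out-of-band cancellation like the DB examples below.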

> Closing a connection to Postgres from the client doesn't even stop execution.

There are multiple ways to do this. If your DB library exposes no methods to do it, there is always:

pg_cancel_backend() [0]

If you are using Java and JDBI, there is:

java.sql.PreparedStatement.cancel()

Which does cancel the running query.

If you are using Psycopg2 in Python, you’d call cancel() on the connection object (assuming you were in an async or threaded setting).

So yes, with a bunch of extra overhead in handler code, you could most definitely cancel DB queries in progress when a client disconnects.
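A runnable psycopg2 example would need a live Postgres, so here is the same cancel-from-another-thread pattern sketched with the stdlib's sqlite3, whose `interrupt()` plays the role that `connection.cancel()` or `pg_cancel_backend()` would play in production:

```python
import sqlite3
import threading
import time

# In-memory DB shared between the "handler" thread and the thread that
# reacts to the client disconnect, hence check_same_thread=False.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.executescript("""
    CREATE TABLE t(x);
    WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 100000)
    INSERT INTO t SELECT i FROM n;
""")

cancelled = False

def slow_query():
    global cancelled
    try:
        # A deliberately expensive cross join stands in for a slow report query.
        conn.execute("SELECT count(*) FROM t a, t b WHERE a.x + b.x > 0").fetchone()
    except sqlite3.OperationalError:  # raised as "interrupted" when cancelled
        cancelled = True

worker = threading.Thread(target=slow_query)
worker.start()
time.sleep(0.2)    # let the query get going; then pretend the client disconnected
conn.interrupt()   # cancel the in-flight query from the disconnect path
worker.join()
print(cancelled)   # True: the query was aborted instead of running to completion
```

The extra plumbing (a shared handle, a thread or callback that fires on disconnect) is exactly the "bunch of extra overhead in handler code" mentioned above.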

[0] http://www.postgresql.org/docs/8.2/static/functions-admin.ht...


I don't think it cancels database queries.

Depending on the framework it can. That's the purpose of golang's Context and C#'s CancellationToken.
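The same idea can be sketched with Python's asyncio, where cancellation propagates into the handler much like a cancelled golang Context or C# CancellationToken would:

```python
import asyncio

async def handler():
    try:
        await asyncio.sleep(10)   # stands in for a slow DB query
        return "response"
    except asyncio.CancelledError:
        # Clean-up spot: this is where you'd also cancel the DB query.
        raise

async def main():
    task = asyncio.create_task(handler())
    await asyncio.sleep(0.1)      # the client disconnects...
    task.cancel()                 # ...so the framework cancels the handler
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"

print(asyncio.run(main()))  # cancelled
```

The key property is that the cancellation interrupts the pending await itself, so the handler stops doing work instead of merely having its response thrown away.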

I believe PHP-FPM behaves in this way. When the client disconnects from the web server, the request stays in the queue for a worker to pick up; I don’t believe there is a way to cancel it.

Per my cursory reading of the source code it looks like it might only prefetch one link at a time: https://github.com/instantpage/instant.page/blob/master/inst...

If that's not how it works, it could easily be modified to add a throttle on how many links it will prefetch simultaneously.


This library should only be handling cache logistics, moving a CDN cache into the browser. Otherwise it is ill-advised for the reasons specified.

Every site that can use a last-mile performance optimization like this should already be serving everything from some form of cache, whether Varnish or a CDN. So in theory, availability of the content should not be the problem.


It also has a minimum time of hovering, so if you're going by it quickly it won't fetch anything.


