Ask HN: What response time is acceptable for autocomplete inputs?
11 points by chrisacky on Jan 8, 2012 | 10 comments
Looking at sites like Google, their autocomplete can respond within 30-50ms, which is clearly well below the threshold at which any human would regard it as slow.

What should ordinary folk aim for when trying to deliver similar autocomplete functionality for search fields?

I'm using Solr to drive my autocomplete, but my main application stack (Zend Framework/PHP) is just too darn slow. After a request comes in to my application, it takes about 100-170ms to respond.

So I'm curious what time we should aim to serve these requests in, in order to appear extremely responsive.




Perhaps I should have included this in the original question...

The "auto complete" works over two queries.

The dataset for the first query doesn't change; however, the facet results for the second query do. The documents I'm searching over for the actual autocomplete are place names from around the world, about 6 million in total.

A query comes in from the user, and I use the search string to check against all place names and known aliases (about 20 million).

Once the server receives the response back from Solr, it has to fire off another query to Solr to figure out the facet counts for each location. (The application is basically a mapper, where "entities" are tagged to specific geographical locations, so it is useful for the front-end user to have both the autocomplete and the count of documents in each facet/location.)
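
To give a feel for it, the second query is roughly this (a sketch using Solr's standard facet parameters; the core name, field name, and IDs here are hypothetical):

    <?php
    // Sketch of the follow-up facet-count query against Solr's HTTP API.
    // Assumes a hypothetical "places" core and "location_id" field, and that
    // the first query returned the matching location IDs in $locationIds.
    $locationIds = array('paris_fr', 'paris_tx'); // hypothetical IDs from query #1

    $params = http_build_query(array(
        'q'           => 'location_id:(' . implode(' OR ', $locationIds) . ')',
        'rows'        => 0,       // we only need counts, not documents
        'facet'       => 'true',
        'facet.field' => 'location_id',
        'wt'          => 'json',
    ));

    $url      = 'http://localhost:8983/solr/places/select?' . $params;
    $response = json_decode(file_get_contents($url), true);

    // Solr returns facet counts as a flat [value, count, value, count, ...] list.
    $counts = $response['facet_counts']['facet_fields']['location_id'];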

I'm thinking about dropping PHP for powering the autocomplete, just due to the initial 50ms my application takes to bootstrap itself. Or perhaps I should employ a more sophisticated cache for the first query: check APC for search strings of three characters or fewer, and only hit Solr once the search string is four or more characters long. (In fact, this sounds like a pretty reasonable solution.)
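
Something like this is what I have in mind (a sketch; solrSuggest() is a made-up stand-in for the actual Solr call):

    <?php
    // Sketch: serve short prefixes straight from APC, longer ones from Solr.
    // solrSuggest() is a hypothetical helper wrapping the real Solr request.
    function suggest($prefix)
    {
        if (strlen($prefix) <= 3) {
            $key    = 'ac:' . strtolower($prefix);
            $cached = apc_fetch($key, $hit);
            if ($hit) {
                return $cached;
            }
            $results = solrSuggest($prefix);  // cold cache: fall through to Solr
            apc_store($key, $results, 86400); // keep for a day
            return $results;
        }
        return solrSuggest($prefix);          // 4+ characters: query Solr directly
    }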


The user probably only needs an approximate count. So, instead of firing that second query to Solr, can't you just show a cached document count? You can update that in a separate background process depending on how often the document count changes.
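
For example, a cron job along these lines could keep the counts warm (a sketch; fetchFacetCounts() is a hypothetical wrapper around your second Solr query):

    <?php
    // Cron sketch: periodically refresh approximate per-location counts
    // so the autocomplete path never has to fire the second Solr query.
    $counts = fetchFacetCounts(); // hypothetical, e.g. array('paris_fr' => 1042)

    file_put_contents('/var/cache/app/location_counts.json.tmp', json_encode($counts));

    // Atomic rename so readers never see a half-written file.
    rename('/var/cache/app/location_counts.json.tmp',
           '/var/cache/app/location_counts.json');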


We had a similar problem at http://www.formspring.me/ when we wanted to implement a search feature somewhat similar to Facebook search.

Our main application stack suffers from the same symptoms: robust, but slow-ish.

After a lot of thought, and because we knew how often the data would change, we came up with a solution of generating an index per user, stored in S3 and retrieved by the client via JSONP.

The text matching and sorting are all done client-side, and the search feels fast.

If your data doesn't change in unpredictable ways and you have no concurrency issues, building indexes (JSON blobs) at write time is a strategy that'll get you pretty far.
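
The write-time side looks roughly like this (a sketch, not our actual code; s3Put() and loadContactsFor() are placeholders for whatever S3 client and data layer you use):

    <?php
    // Sketch: rebuild a user's index whenever their data changes, and push
    // it to S3 as a JSONP payload the browser can load with a script tag.
    // s3Put() and loadContactsFor() are hypothetical helpers.
    function rebuildUserIndex($userId)
    {
        $entries = array();
        foreach (loadContactsFor($userId) as $person) {
            $entries[] = array(
                'name' => $person->name,       // what the client matches on
                'url'  => $person->profileUrl,
            );
        }
        // Wrap in a callback so the browser can fetch it cross-domain via JSONP.
        $payload = 'onIndexLoaded(' . json_encode($entries) . ');';
        s3Put('indexes/' . $userId . '.js', $payload, 'application/javascript');
    }

The browser then just filters and sorts that array locally as the user types.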

Let me know if you want more details about our implementation, I'll be happy to share.


This reminds me of MySpace's method for searching messages: downloading as much as they could to the client and storing it in local storage.

Unfortunately for me, as mentioned in my second response, I need to search over 6 million place names, then take the resulting locations and figure out the number of documents that have been "tagged" as being inside them, so I can display an interface with each place name followed by an item count.

I could definitely cache a small selection of the XXX most likely locations, though, and regenerate that every day to reflect recent changes.
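
Roughly, a daily job could enumerate every short prefix up front (again a sketch; solrSuggest() is a made-up stand-in for the real Solr call):

    <?php
    // Daily cron sketch: precompute suggestion lists for every 1- and
    // 2-character prefix so short queries never touch Solr at all.
    // solrSuggest() is a hypothetical helper wrapping the real Solr request.
    $alphabet = str_split('abcdefghijklmnopqrstuvwxyz');
    $index    = array();

    foreach ($alphabet as $a) {
        $index[$a] = solrSuggest($a);               // 1-character prefixes
        foreach ($alphabet as $b) {
            $index[$a . $b] = solrSuggest($a . $b); // 2-character prefixes
        }
    }

    file_put_contents('/var/cache/app/prefix_index.json', json_encode($index));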


For autocomplete, there are two use cases:

1) Partial data, but fast
2) A complete list of data for the user to select from interactively

Google search belongs to 1) -- they simply cannot display ALL matches to you. I guess they just show a few of the most relevant ones.

In the second case, your data set is not huge, so the goal is to make the data more visible to the user for easy selection. Speed is a secondary goal in these cases.

If your only goal is speed, you obviously need to cache a subset of the data based on some conditions.

Hope this helps.


"Designing with the Mind in Mind" includes a summary of time intervals which are significant for interface design.

http://www.amazon.com/Designing-Mind-Simple-Understanding-In...


Apologies if it sounds like a 'me too' post, but we faced/may still be facing the same issue at http://www.RapiDefs.com

Please give it a quick try and let me know if you find the response time acceptable or not. Thanks.


Definitely under 150ms, though it depends on context too. Have you tried testing with some real people to see what their reaction is? Then try again with other people and a faster version.


No, I haven't yet. It's just what I observe when testing it myself. That's the concern: I could port this to node.js, for example, but it might take me 4 days to migrate the environment correctly, which could be wasted effort if I don't see a significant improvement.


It helps to let the user know an autocomplete is taking place. A spinner with a color change could work well in this regard.



