If a government had many streams of temporal data, e.g. coordinates of people's locations, is it just the lack of cacheability and the sheer amount of computation that would slow this approach down?
If neither of those were concerns, I imagine applying a heuristic to points over time to predict where they would be next. If it's within someone's sleeping hours, it would expect the same "pixel" on the next step until, say, 9am. That might increase memory and precomputation costs, but it would improve query speed at some cost in accuracy.
If you're in the government, blink once to confirm.
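Concretely, something like this toy sketch is what I have in mind (the sleep window, cache layout, and names are made up for illustration):

    from datetime import datetime

    SLEEP_START, SLEEP_END = 23, 9  # assume points stay put overnight

    def predict_next_cell(current_cell, ts: datetime):
        # during the sleep window, bet on the same "pixel" for the next step
        if ts.hour >= SLEEP_START or ts.hour < SLEEP_END:
            return current_cell
        return None  # no cheap prediction; fall back to recomputation

    def next_matches(point, current_cell, ts, cache, full_search):
        guess = predict_next_cell(current_cell, ts)
        if guess is not None and guess in cache:
            return cache[guess]        # hit: skip the expensive search
        result = full_search(point)    # miss: pay the full cost
        cache[current_cell] = result
        return result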
Voronoi is super popular and is typically a component of the solutions used in practice; we wrote another article that covers what some of those might look like: https://www.pinecone.io/learn/composite-indexes/
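For context on the Voronoi part: the common trick (IVF-style) is to k-means the vectors, treat the centroids as Voronoi cells, and only scan the few nearest cells at query time. A toy NumPy sketch of that idea (the plain Lloyd's loop and parameter choices are mine, not anything from the article):

    import numpy as np

    def build_ivf(vectors, n_cells=64, iters=10, seed=0):
        # crude k-means: the centroids define the Voronoi cells
        rng = np.random.default_rng(seed)
        centroids = vectors[rng.choice(len(vectors), n_cells, replace=False)].copy()
        for _ in range(iters):
            # squared distances via ||v||^2 - 2 v.c + ||c||^2, no giant temporaries
            d2 = ((vectors ** 2).sum(1)[:, None]
                  - 2.0 * vectors @ centroids.T
                  + (centroids ** 2).sum(1))
            assign = d2.argmin(1)
            for c in range(n_cells):
                members = vectors[assign == c]
                if len(members):
                    centroids[c] = members.mean(0)
        inv_lists = {c: np.flatnonzero(assign == c) for c in range(n_cells)}
        return centroids, inv_lists

    def ivf_search(query, vectors, centroids, inv_lists, n_probe=4, k=10):
        # scan only the n_probe nearest Voronoi cells, not the whole dataset
        cells = ((centroids - query) ** 2).sum(1).argsort()[:n_probe]
        cand = np.concatenate([inv_lists[c] for c in cells])
        d2 = ((vectors[cand] - query) ** 2).sum(1)
        return cand[d2.argsort()[:k]]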
http://driftwheeler.com
uses a custom brute-force search in CUDA, based on bitonic sort (https://en.m.wikipedia.org/wiki/Bitonic_sorter) and a fractional-norm distance metric (f=0.5, i.e., sqrt):
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23...
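Not the CUDA kernel itself, but the distance part is easy to sketch in NumPy. With f=0.5, the Minkowski distance is d(x, y) = (sum_i |x_i - y_i|^0.5)^2; the outer power is monotone, so ranking by the inner sum is equivalent. Here argpartition stands in for the bitonic sorting network the GPU version uses:

    import numpy as np

    def fractional_knn(query, db, k=256, f=0.5):
        # inner sum of |diff|^f is enough to rank neighbors (outer ^(1/f)
        # is monotone, so it never changes the ordering)
        scores = (np.abs(db - query) ** f).sum(axis=1)
        top = np.argpartition(scores, k)[:k]    # unordered top-k candidates
        return top[np.argsort(scores[top])]     # order them best-first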
In 2017, on a low-end GPU, the indexing took about 10 minutes at 100% utilization for a dozen patches on each of 50,000 images (feature vectors were 1,024 32-bit floats).
After that, the app just serves a precomputed list (256 matches per query) based on where in the image the user pressed. Even with hundreds of simultaneous users, it can hardly keep a $5 Linode busy.
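The serving side is roughly this shape (the grid size and helper names are guesses for illustration, not the actual code):

    GRID_W, GRID_H = 4, 3  # "a dozen patches" -> assume a 4x3 grid per image

    precomputed = {}  # (image_id, gx, gy) -> 256 match ids, built offline

    def patch_for(image_id, x, y):
        # map a press at normalized (x, y) in [0, 1) to its fixed patch
        return (image_id, int(x * GRID_W), int(y * GRID_H))

    def handle_press(image_id, x, y):
        return precomputed[patch_for(image_id, x, y)]  # O(1) per request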
Good times...