I see the terms "python," "search," and "millions of sets" and think data science... except that in the data science contexts with which I'm familiar, we're looking at billions of records among petabytes of data. I know the article is about what can be done on a laptop, but I'm left wondering whether this is a neat small-scale proof of concept or something that scales and that I need to research more when I've had coffee rather than bourbon.
Author here. The algorithm used here is based on Google's 2007 paper "Scaling Up All Pairs Similarity Search." Since then I am sure they have started to look at billions of sets. Generally speaking, exact algorithms like the one presented here max out around 100M sets on not-crazy hardware; going over a billion probably requires approximate algorithms such as Locality Sensitive Hashing. You may be interested in the work by Anshumali Shrivastava.
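For a sense of what the approximate route looks like, here is a minimal sketch using datasketch's MinHash LSH (the sets and the 0.5 threshold are made up for illustration, and I'm assuming the documented MinHash/MinHashLSH interface):

    from datasketch import MinHash, MinHashLSH

    # Two made-up example sets
    sets = {
        "doc1": {"apple", "banana", "cherry", "date"},
        "doc2": {"apple", "banana", "cherry", "fig"},
    }

    # Build a 128-permutation MinHash signature for each set
    minhashes = {}
    for key, s in sets.items():
        m = MinHash(num_perm=128)
        for token in s:
            m.update(token.encode("utf8"))
        minhashes[key] = m

    # Index the signatures and query for sets with estimated Jaccard >= 0.5
    lsh = MinHashLSH(threshold=0.5, num_perm=128)
    for key, m in minhashes.items():
        lsh.insert(key, m)

    print(lsh.query(minhashes["doc1"]))  # approximate matches, e.g. ['doc1', 'doc2']

The trade-off versus the exact approach is that LSH can miss or spuriously return pairs near the threshold, but it scales to far larger collections.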
Data science is about drawing conclusions from data, not the size of it. Most data science happens on datasets numbering in the hundreds of records rather than in the millions.
This is interesting - I am a bit confused about Big Data, data science, etc. If I lay out my thoughts, would you mind setting me straight?
- We have lots of data in different databases and just need a unified view (ETL / data warehousing) - it's where most data in most businesses is trapped. Next steps: common data definitions across the company, and top-level imposition to get a grip
- We can pull data together but need it to undergo what-if analysis or aggregation for reporting. This is usually regulatory or data warehousing?
All the above are "size of Enterprise Oracle / other RDBMS". You could have billions of records here, but usually the billions come from dozens of databases with millions each ...
Big Data seems to be at the point of trying to do the ETL / data warehousing for those dozens of different databases - put it into a map-reduce-friendly structure (Spark, Hadoop) and then run general queries - data provenance becomes a huge issue then.
Then we have the data science approach of data in sets / key-value stores that I would classify as predictive - K-nearest neighbour etc.
I suspect I am wildly wrong in many areas but just trying to get it straight
I don't understand your point; you're trying to make complexity out of a simple concept, imo.
Data science: the science of using data to draw conclusions. Can be hundreds or thousands of data points. Can be billions. Does not matter.
Big data: a subset of data science applied to "big" datasets where the most trivial approaches reach their limit. It does NOT mean billions of data points either; it probably just means that the data is no longer well suited for a spreadsheet, basically.
> except that in the data science contexts with which I'm familiar we're looking at billions of records among petabytes of data.
At the core of the author’s Show HN is an exact-algorithm implementation / port of all-pairs similarity search. One of the steps of an all-pairs similarity search, metric k-center, is an NP-complete problem. [1]
So we’ve got an exact algorithm that needs to solve an NP-complete problem to produce a result, making it at least as hard.
Any speed increase to such an algorithm in the millions of data points is awesome! If you’ve got billions of data points, chances are you can distill them down to millions, and if that’s possible you’d get an exact result. Or you could use a heuristic algorithm, some sort of polynomial-time approximation, which can scale to billions and still get you a good-enough result.
> So we’ve got an exact algorithm that needs to solve an NP-complete problem to produce a result, making it at least as hard.
This is not correct. It's very obvious that all-pair similarity search can be solved in O(n^2) calls to the similarity metric, as stated in the readme. So unless the metric itself falls outside P, this problem is easy (but still hard to scale up in practice, of course)
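To make the O(n^2) point concrete, here is a brute-force sketch with Jaccard similarity as the metric (the example sets and threshold are arbitrary):

    from itertools import combinations

    def jaccard(a, b):
        # Exact Jaccard similarity of two sets
        return len(a & b) / len(a | b)

    def all_pairs(sets, threshold):
        # Naive all-pairs similarity search: O(n^2) calls to the metric
        results = []
        for (i, a), (j, b) in combinations(enumerate(sets), 2):
            sim = jaccard(a, b)
            if sim >= threshold:
                results.append((i, j, sim))
        return results

    sets = [{1, 2, 3}, {2, 3, 4}, {7, 8, 9}]
    print(all_pairs(sets, 0.5))  # [(0, 1, 0.5)]

The work in exact algorithms like the one in the linked paper goes into pruning most of those candidate pairs, not into changing the metric itself.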
Which hash function are you using for the minhashes in the LSH benchmark? Example code from datasketch seems to indicate SHA-1. Is there a good reason for that? Have you tried out murmur? I wonder if it improves runtime?
Interesting. I thought the Python builtin hashlib was more convenient (and more random). But yes, you are right: a good implementation of the murmur3 hash is much faster.
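Roughly what a quick comparison might look like (assuming the third-party mmh3 package for MurmurHash3; absolute numbers will obviously vary by machine):

    import hashlib
    import struct
    import timeit

    import mmh3  # MurmurHash3 bindings: pip install mmh3

    data = [str(i).encode("utf8") for i in range(100_000)]

    def with_sha1():
        # Take the first 4 bytes of the SHA-1 digest as a 32-bit hash value
        return [struct.unpack("<I", hashlib.sha1(d).digest()[:4])[0] for d in data]

    def with_murmur3():
        # mmh3.hash returns a signed 32-bit integer
        return [mmh3.hash(d) for d in data]

    print("sha1   :", timeit.timeit(with_sha1, number=10))
    print("murmur3:", timeit.timeit(with_murmur3, number=10))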
SHA-1 is specifically built to have special properties as a secure hash function. As I understand it, murmur actually comes from this world.
I also had a gander at some more of the datasketch source. I notice that you compute H_n(x) as H_n(x) = a_n + b_n * H_0(x), with a_n, b_n being random seeds....
That's pretty cool. I was doing it as H_n(x) = H_{n-1}(x|n) and thought that would be pretty quick, but applying a random affine step from precomputed seeds directly to one hash value looks much faster.
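The trick, roughly (this is a sketch of the idea, not datasketch's actual code; the Mersenne-prime modulus and permutation count are just illustrative):

    import random

    MERSENNE_PRIME = (1 << 61) - 1  # illustrative modulus for the affine step
    NUM_PERM = 128

    rng = random.Random(42)
    # One (a_n, b_n) pair of random seeds per derived hash function
    A = [rng.randrange(1, MERSENNE_PRIME) for _ in range(NUM_PERM)]
    B = [rng.randrange(1, MERSENNE_PRIME) for _ in range(NUM_PERM)]

    def derived_hashes(h0):
        # Turn a single base hash H_0(x) into NUM_PERM values
        # H_n(x) = (a_n + b_n * H_0(x)) mod p, so x only gets hashed once
        return [(a + b * h0) % MERSENNE_PRIME for a, b in zip(A, B)]

    # A MinHash signature then keeps the elementwise minimum of these
    # vectors over all members x of the set.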
Might also want to test with a bigger buffer? There are likely to be constant-time overheads, especially for libraries which are C/C++ wrappers, and in general for hash function setup.
If you were curious about the reference to MinHash in the OP, I just wrote a gentle guide to the MinHash family of algorithms (including our recent research extending it to probability distributions).
https://moultano.wordpress.com/2018/11/08/minhashing-3kbzhsx...
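For anyone who wants the one-screen version first, here is a bare-bones MinHash estimate of Jaccard similarity (the seeded SHA-1 construction and 128 permutations are arbitrary choices for the sketch):

    import hashlib
    import struct

    NUM_PERM = 128

    def _hash(token, seed):
        # Stand-in seeded hash: 32-bit value from SHA-1(seed || token); any decent hash works
        digest = hashlib.sha1(seed.to_bytes(4, "little") + token.encode("utf8")).digest()
        return struct.unpack("<I", digest[:4])[0]

    def minhash_signature(tokens):
        # Signature = minimum hash value under each of NUM_PERM seeded hashes
        return [min(_hash(t, seed) for t in tokens) for seed in range(NUM_PERM)]

    def estimated_jaccard(sig_a, sig_b):
        # Fraction of signature positions where the two sets agree
        return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

    a = {"apple", "banana", "cherry", "date"}
    b = {"apple", "banana", "cherry", "fig"}
    print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # close to the true 0.6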