Levenshtein Automata (2010) (notdot.net)
61 points by beau on June 10, 2015 | 34 comments



Levenshtein automata seem to pop up on here every once in a while. They are quite interesting from a theory perspective, but (like many things devised by the theory community) incredibly complex in practice. Lucene 4.0 uses them for fuzzy queries; you can read the full story of how they struggled to get them working somewhere in the Lucene blog.

If you want to implement fuzzy string matching, I would look at something like http://arxiv.org/abs/1008.1191 . The experiments look impressively fast.



The difficulty of Levenshtein automata is highly overstated. When I read that Lucene blog post I wrote an implementation in an hour, and prior to reading that post I hadn't even heard of Levenshtein automata.


Unless you are the mythical 100x programmer, I doubt that you wrote a full implementation of general Levenshtein automata in an hour. I read the paper that introduced them ( http://link.springer.com/article/10.1007/s10032-002-0082-8 ) and they are quite the complex beast. Not to mention that the paper is very technical and you need to keep a dozen definitions in your head.

That said, there seems to be a fairly readable implementation at https://github.com/universal-automata/liblevenshtein

I'm currently working on implementing fast Levenshtein queries in C++ with a friend, and we intend to implement the paper I linked in my original post. So far, our dynamic programming Levenshtein already beats Lucene++ (C++ implementation of Lucene), which is a bit embarrassing [1]. If you're interested, more advanced stuff will hit https://github.com/xhochy/libfuzzymatch when we get around to implementing it.

[1] Lucene++ spends more time converting strings between UTF-8 and UTF-32 than it does computing Levenshtein distances, says the profiler.


I'm not a 100x programmer, I just did a couple of things that drastically reduced the time:

1. I didn't follow that paper. Even trying to understand that paper would have taken way more time, so after 5 minutes of trying to understand it I gave up on that approach. See this comment for what I did do: https://news.ycombinator.com/item?id=9699870 That saved maybe 20x.

2. I used Python instead of C++ or Java. This saved 5x.

3. The code was throwaway quality code. This saved 2x.

Together that's 200x, but I'm at least a 2x worse programmer than them, so that gives you the 100x ;-)


(see my other comment as well)

An algorithmicist would say that all this saved you a constant factor of work for a linear slowdown ;)


That's a nice soundbite, but it's not correct. The worst-case performance with the DFA is linear, the same as theirs.


No, that's just not true. Your step function takes time linear in the length of `string`. For example, `newstate = [0 for x in state]` takes θ(|state|) time, and because you initialise the state with `range(len(string)+1)`, that's linear in the string length.


Now you're talking about the cost of constructing the DFA, not searching the index with the resulting DFA. The cost of constructing the DFA is irrelevant, and even then you can construct the DFA in O(n) with my method for fixed max edit distance and fixed alphabet. Same as that paper.


I'd like to implement the same paper. Perhaps I'm missing something, but I'm not sure how the residual strings are created. Do you have a link to an implementation or a description of the residual strings?

I get that a residual string is the original string with a deletion, incrementing the deletions until you hit edit distance d. What I'm not sure about is if it's all permutations of possible deletions.


The residual strings are all subwords where exactly d letters were deleted. For d=1 and the word "Levenshtein", that would be {"evenshtein", "Lvenshtein", "Leenshtein", "Levnshtein", "Leveshtein", "Levenhtein", "Levenstein", "Levenshein", "Levenshtin", "Levenshten", "Levenshtei"}.

The paper does not specify how to generate those efficiently, and I haven't given it any thought yet. I don't know of any implementations of the paper, but this aspect of it should be common enough.

EDIT: sorry, didn't read your comment fully. I'm not sure what you mean by "all permutations of possible deletions". The d-deletion neighbourhood of w contains all subwords of w that you obtain by deleting any d letters from w. For d=2, take any two letters and remove them. N₂(jamra) = {jam,jar,jaa,jma,jmr,jra,amr,ama,ara,mra} (hope I didn't forget any...)

Does that make it clearer?
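
A brute-force way to enumerate the neighbourhood, by the way (just a sketch, not the efficient generation that the paper leaves open):

    from itertools import combinations

    def deletion_neighbourhood(word, d):
        # All subwords of `word` obtained by deleting exactly d letters,
        # i.e. all subsequences of length len(word)-d, deduplicated.
        return {"".join(sub) for sub in combinations(word, len(word) - d)}

    # deletion_neighbourhood("jamra", 2)
    # -> {'jam', 'jar', 'jaa', 'jma', 'jmr', 'jra', 'amr', 'ama', 'ara', 'mra'}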


Yes, that makes it much clearer. I also found a FastSS implementation, which uses the same d-deletion neighborhood. Here it is: http://fastss.csg.uzh.ch

I am looking at a python implementation for examples.


Nice, that seems to be based on a similar idea to the paper I mentioned (but earlier and less refined).


The paper you mentioned reduces memory consumption hugely and averages out the query and insertion time. It's a good improvement.


I... have a hard time believing this.

Unless you're talking about a very simplified case (N=1 or something, maybe even for a specific word)?

On the other hand: Maybe the Lucene guys and I are just bad. :/


It was for the general case. The reason that I was able to do this is because I was less persistent than them. I tried to understand the paper but after 5 minutes I realized that even understanding the paper was going to take WAY more time than ignoring the paper and implementing it my own way. Here's how I did it.

If we are looking for

    string = "banana"
Then we can represent the state of the automaton as the last row of the matrix that you get when you compute the Levenshtein distance between two fixed strings. The initial state is (in Python):

    def init():
        # The initial state is the first row of the Levenshtein matrix:
        # the distance from the empty query prefix to each prefix of `string`.
        return range(len(string)+1)
Then to take a step in the automaton:

    def step(state, char):
        # Compute the next row of the Levenshtein matrix after consuming `char`.
        newstate = [0 for x in state]
        newstate[0] = state[0]+1
        for i in range(len(state)-1):
            if i < len(string) and string[i] == char:
                newstate[i+1] = state[i]    # match: carry the diagonal value over
            else:
                # the usual recurrence: 1 + min of left, diagonal and upper cell
                newstate[i+1] = 1 + min(newstate[i],state[i],state[i+1])
        return newstate
We step like this:

    s0 = init()
    s1 = step(s0, 'c')
    s2 = step(s1, 'a')
    s3 = step(s2, 'b')
    s4 = step(s3, 'a')
    s5 = step(s4, 'n')
    s6 = step(s5, 'a')
Now we can compute a lower bound on the Levenshtein distance by taking min(s6). In this case it's 2. This means that whatever comes after "cabana", it will always have at least distance 2 to "banana". With this info we can prune away a search path in the full text index if that value is larger than our maximum edit distance.

That handful of lines of code is all you need to do fuzzy string search in practice. This represents the automaton as a step procedure. If you want you can also generate a DFA from this (though it's probably not necessary in practice). If your maximum edit distance is n, then if one of the numbers in the state is greater than n it doesn't matter what it is. In the above example s6 = [6, 5, 4, 3, 2, 3, 2]. If n = 3 then s6 = [4, 4, 4, 3, 2, 3, 2] is equivalent, because in the end it only matters whether a number is >3 or not. So you might as well cap the numbers at 4. Replace:

    newstate[i+1] = 1 + min(newstate[i],state[i],state[i+1])
with:

    newstate[i+1] = min(1 + min(newstate[i],state[i],state[i+1]), n+1)
where n is the maximum edit distance (and cap newstate[0] = min(state[0]+1, n+1) in the same way, otherwise that entry keeps growing). Now the state space of the automaton is finite, and you can generate a DFA from it by just exploring all the states with the step() function.

One more optimization is to not generate the DFA for the full alphabet. If your search word is "banana" then for the purposes of the automaton the letter 'x' is equivalent to the letter 'y', because both are not equal to any letter in "banana". So instead of creating a DFA for the full ASCII alphabet (or worse, the full Unicode alphabet), you can instead work with the reduced 4 letter alphabet (b,a,n,X), where X represents any letter other than b,a,n.

You could also do a hybrid where you generate the DFA lazily.
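
For concreteness, here's a throwaway-quality sketch of that DFA construction: explore every reachable capped state with the step() function over the reduced alphabet and record the transitions ('*' below stands for "any letter not in the word", assuming '*' itself doesn't occur in it):

    def build_dfa(string, n):
        # Throwaway sketch: enumerate all reachable capped states and use the
        # state tuples themselves as DFA nodes. Not optimized in any way.
        alphabet = sorted(set(string)) + ['*']        # '*' = any other letter

        def step(state, char):
            newstate = [min(state[0] + 1, n + 1)]     # cap every entry at n+1
            for i in range(len(state) - 1):
                if string[i] == char:
                    newstate.append(state[i])
                else:
                    newstate.append(min(1 + min(newstate[i], state[i], state[i + 1]), n + 1))
            return tuple(newstate)

        start = tuple(min(i, n + 1) for i in range(len(string) + 1))
        transitions = {}                              # (state, char) -> next state
        accepting = set()                             # states within distance n of `string`
        todo, seen = [start], {start}
        while todo:
            state = todo.pop()
            if state[-1] <= n:
                accepting.add(state)
            for char in alphabet:
                succ = step(state, char)
                transitions[(state, char)] = succ
                if succ not in seen:
                    seen.add(succ)
                    todo.append(succ)
        return start, transitions, accepting

    def accepts(dfa, string, word):
        # Feed a word through the DFA; letters outside the search word map to '*'.
        start, transitions, accepting = dfa
        state = start
        for c in word:
            state = transitions[(state, c if c in string else '*')]
        return state in accepting

    # dfa = build_dfa("banana", 2)
    # accepts(dfa, "banana", "cabana")  -> True   (distance 2)
    # accepts(dfa, "banana", "cabanas") -> False  (distance 3)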

I don't know if that made sense, it's a bit difficult to explain in a short HN comment.


I'm not sure if I can follow - I'll give it some more thought and time. That said: You're doing something completely different as far as I can tell. You build an *automaton* based on an input word. That's not what the paper does/what I struggled with. The paper describes a general automaton and a 'vector' derived from the input word that you use to drive the steps.

At the moment I don't see how you could handle transpositions either.

I'm not saying that your approach is bad. But I do think that the 'I did it in an hour' comment was quite a bit misleading, if you basically ignored the paper and did something that is different in most ways.

The tradeoffs are immensely different - the whole point of the paper is that you're precomputing a looot of stuff so that the lookup is fast.


Computing the step() is extremely fast. And if that's not fast enough then do the DFA construction as I described. The approach is different, yes, but that's the point.


I commented on your step function in another comment so I'm going to skip that.

Your DFA construction, while a bit incomplete (you don't say how you do the transitions), achieves roughly the same thing as Levenshtein automata do. But you spend significantly more time to construct it. The point of the original paper was not to show that DFAs can be used to compute Levenshtein distance, but to show how to do it quickly and efficiently.


I replied to this in the other thread to avoid splitting the conversation in two.


You implemented an automaton that computes Levenshtein distances. However, Levenshtein automata are quite different from what you describe. Your automaton executes the basic Wagner-Fischer / Needleman-Wunsch / ... algorithm.

Btw, see also https://news.ycombinator.com/item?id=9698785 for another discussion on basically the same problem.


This is not correct. The end result of the step() based automaton is the same: it prunes exactly the same search paths as any Levenshtein automaton would. And the part where I described how to build the DFA gives you exactly a Levenshtein automaton DFA. The approach is different, yes, that's the point: it's much simpler and still does the job.


It's not nearly as efficient though, your step function requires O(len(string)) time no matter how well you prune. Since you have len(query) many steps, that gets you to O(len(string)*len(query)), aka quadratic time if they're roughly the same length. Levenshtein automata can do this in linear time because they spend time building the automaton first (preprocessing). So yes, you implemented an algorithm using automata that computes the same result. But you didn't implement Levenshtein automata.


In practice with Lucene, len(string) and len(query) are like 10. So it's totally irrelevant. Furthermore computing the step is extremely fast: you're just doing a handful of min()'s. Even a single cache miss is going to completely dominate that, let alone a disk seek. What matters is that you don't scan a 10 million word index, instead you want to prune your search to hit, say, 50 words in the index.

That's just the step() approach. After that I described how to build the DFA, which gets you the same optimal linear time that you want, independent of the query string size.

Reply to other comment:

> Your DFA construction, while a bit incomplete (you don't say how you do the transitions), achieves roughly the same thing as Levenshtein automata do. But you spend significantly more time to construct it. The point of the original paper was not to show that DFAs can be used to compute Levenshtein distance, but to show how to do it quickly and efficiently.

Why is it incomplete? You just follow the step() automaton and construct a DFA that does the same. Every time you hit a new state you create a new node in the DFA, and if you hit an old state you just point to a previously constructed DFA node. You can even do the DFA construction lazily.

> But you didn't implement Levenshtein automata.

A Levenshtein automaton for a word W is a finite state automaton that accepts all words V within edit distance n from W. That's exactly what I built. The point here is that you turn a 30 second search over the whole index into a 0.02 second search by pruning. If you can then optimize that to 0.015 by making the automaton more efficient that's nice but you can hardly claim that what I did is not a Levenshtein automaton because it's a bit slower (and it's not even clear to me that it would be slower).


You can argue about the lengths of these strings and the simplicity of your step function all day long; at the end of the day it's still θ(n²) and no better than the simple dynamic programming approach. In fact, even for that it's a relatively bad implementation, because it constructs a new list in every step, whereas you could just reuse the same two lists (called state and newstate in your implementation) all the time.

An L1 cache miss that hits in L2 costs around 10 cycles, and the L2 cache is more than sufficiently large. Even your initialisation loop takes more than that. The min loop takes 11 (branchless) instructions per iteration for the minimum calculations in C++: https://goo.gl/wjhRtb - not taking superscalar execution into account.

You have not shown how you prune the search, so I can't say anything about that. Of course that's the entire point of having an index.

Your DFA construction again is massively slower than what is done in the paper. The authors show how to construct the entire automaton in time O(|string|), whereas each step in your implementation takes that much time.

Whether or not you built Levenshtein automata is a pointless discussion. You say you built a DFA for Levenshtein distance (true). I'm saying that you didn't implement the paper. Both are correct.

Look, I'm not claiming you did anything wrong. I'm just pointing out that your implementation, while it was fast to write, is also much much slower than their algorithm, and you shouldn't compare the two as if they were the same.


> You can argue about the lengths of these strings and the simplicity of your step function all day long, at the end of the day it's still θ(n²) and no better than the simple dynamic programming approach.

It's not O(n²), but I forgot to mention that. You don't need to calculate the full state, only O(k) entries around the diagonal when your maximum edit distance is k, because the other entries are going to be greater than k anyway. So it's actually O(nk), which is linear in n (that paper assumes that the max edit distance and the alphabet size are fixed, so by their assumption O(nk) = O(n)).

But I think you are missing what a Levenshtein automaton is. The point of a Levenshtein automaton is not to calculate the edit distance. In fact a Levenshtein automaton does not need to calculate the edit distance at all. What a Levenshtein automaton with max distance k for a word "banana" does is this: given a prefix like "caba", it determines whether there is a way to extend that prefix such that the distance to "banana" is at most k. The dynamic programming algorithm does not do this. They are two different (but of course related) problems.
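
For what it's worth, here's a rough standalone sketch of that banded trick (plain dynamic programming with the band, not the state representation from my other comment):

    def within_k(query, string, k):
        # Banded dynamic programming: only cells with |i - j| <= k can hold a
        # value <= k, so each row costs O(k) instead of O(len(string)).
        if abs(len(query) - len(string)) > k:
            return False
        INF = k + 1                     # any value > k is as good as infinity
        prev = {j: j for j in range(min(len(string), k) + 1)}   # row for the empty query prefix
        for i in range(1, len(query) + 1):
            cur = {}
            for j in range(max(0, i - k), min(len(string), i + k) + 1):
                best = prev.get(j, INF) + 1                     # drop query[i-1]
                if j > 0:
                    best = min(best, cur.get(j - 1, INF) + 1)   # insert string[j-1]
                    same = query[i - 1] == string[j - 1]
                    best = min(best, prev.get(j - 1, INF) + (0 if same else 1))
                cur[j] = best
            prev = cur
        return prev.get(len(string), INF) <= k

    # within_k("cabana", "banana", 1)  ->  False
    # within_k("cabana", "banana", 2)  ->  True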

> In fact, even for that it's a relatively bad implementation because it constructs a new list in every step, whereas you could just reuse the same two lists (called score in your implementation) all the time.

Of course, it's a proof of concept not a fast implementation. Plus it's not as simple as keeping two lists when you're searching a text index, because from the same state you will need to calculate multiple successor states. If you destroy the state s when stepping you can't step from that same state s with another letter.

> An L1 cache miss, L2 hit costs around 10 cycles, and the L2 cache is more than sufficiently large

A normal full text index does not even fit in L3. Perhaps it does not even fit in memory.

> You have not shown how you prune the search, so I can't say anything about that. Of course that's the entire point of having an index.

Pruning the search works exactly the same way given any implementation of a Levenshtein automaton. It's just intersecting the automaton with the index.
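
Concretely, with the init()/step() from my other comment and a toy trie index (nested dicts, '$' marking the end of a word), the intersection looks roughly like this:

    def fuzzy_search(trie, max_dist):
        # Walk the index trie, carrying the automaton state along, and prune
        # any branch whose best achievable distance already exceeds max_dist.
        # Uses init()/step() and the global `string` from the other comment.
        results = []

        def walk(node, state, prefix):
            if '$' in node and state[-1] <= max_dist:
                results.append(prefix)                  # indexed word within max_dist
            for char, child in node.items():
                if char == '$':
                    continue
                nextstate = step(state, char)
                if min(nextstate) <= max_dist:          # otherwise no extension can match
                    walk(child, nextstate, prefix + char)

        walk(trie, init(), "")
        return results

    # Toy index containing banana, bandana and cabana:
    #   trie = {}
    #   for w in ["banana", "bandana", "cabana"]:
    #       node = trie
    #       for c in w:
    #           node = node.setdefault(c, {})
    #       node['$'] = {}
    # With string = "banana": fuzzy_search(trie, 1) -> ['banana', 'bandana'],
    # and the "cab..." branch is pruned as soon as its lower bound exceeds 1.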

> Whether or not you built Levenshtein automata is a pointless discussion. You say you built a DFA for Levenshtein distance (true). I'm saying that you didn't implement the paper. Both are correct.

I never claimed to implement the algorithm in the paper. Whether or not it's a Levenshtein automaton is not a pointless discussion. Lucene improved their fuzzy text search times by 100x by pruning the search. You can do that with my method too. That's why the goal here is "implement a Levenshtein automaton" not "implement a Levenshtein automaton with the algorithm described in paper XYZ".

> I'm just pointing out that your implementation, while it was fast to write, is also much much slower than their algorithm, and you shouldn't compare the two as if they were the same.

Even if my algorithm for constructing the DFA is 10x slower than theirs (which I HIGHLY doubt), it still wouldn't matter because you can still use it to get their ~100x speedup because it does exactly the same pruning and the cost of the automaton is just noise.


Yup, that's the blog post I was referring to; I thought it had been on some official Lucene blog. Thanks for linking it.


The automaton is incredibly slow, though it's also incredibly cool. I implemented a trie that allows for Levenshtein distance queries here: github.com/jamra/LevenshteinTrie

That was just a fun project, not something I optimized or even tested appropriately.

The idea behind the trie is that the dynamic programming state for the Levenshtein distance gets passed down to each recursive step, reducing the number of comparisons needed.

I like this subject so I'm going to read the paper you linked ;)


Plus, these automata are somewhat useless (too expensive to build, too big) when you need N >= 3.


There is a trade-off even in the paper that was linked. You have to disallow distances (d) that are too large when the words are short. If you query for words within distance 3 of a 3-letter word and your dictionary contains many 3-letter words, every single 3-letter word ends up in your result set.

This is a running issue with fuzzy matching in general, but automata are especially slow and take forever to build.


I originally read the paper and fought with the implementation for OCR use cases. And that was stupid / short-sighted. My problem wasn't a big result set, though; it was the size of the automaton at N >= 3.

What I usually need: Fuzzy results, with a quality/rating. Given a database of all customer names, give me the best matches for the OCR result. Same for 'all streets in Germany'.

Since these input words can be quite long, N = 3 (or higher) might still provide decent quality (3 errors in 15 characters might still be interesting), but the automaton explodes in size.

On top of that, ignoring the *high* number of edit operations I am interested in: I usually want the real distance, not a shortcut/cutoff. So not 'Give me all results that have fewer than 3 errors', but 'Give me the 100 best results, ordered by quality (errors/length)'.

Or - and obviously that's always a valid option - I just failed to make that work in any decent fashion.


I did something similar to what you are looking for with my gocleo implementation. I create what I term the Jaccard(sp?) rating, which is the percent something is off by. github.com/jamra/gocleo

You can combine that with a thing like my LevenshteinTrie github.com/jamra/LevenshteinTrie (or if you can create a DAWG, it will be more compressed)
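
For reference, a generic 'errors/length' style rating of the kind mentioned above looks something like this (the levenshtein() and words names are just placeholders, and this is not necessarily the exact formula gocleo uses):

    def match_quality(distance, query, candidate):
        # Normalised edit distance in [0, 1]: 1.0 is an exact match,
        # lower values mean a larger share of the word is wrong.
        longest = max(len(query), len(candidate), 1)
        return 1.0 - float(distance) / longest

    # Rank candidates best-first, e.g.:
    #   sorted(words, key=lambda w: match_quality(levenshtein(query, w), query, w), reverse=True)[:100]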


I suggest everyone check out the rest of the algorithms on that site. They are cool. http://blog.notdot.net/tag/damn-cool-algorithms




