Won't help you learn the algorithms, but if you need to get stuff done quickly, it's an easy-to-use solution. I use it on the 404 page of a website to suggest pages the user may have wanted (logging had shown that a lot of 404 and error pages were due to mistyped URLs).
I agree. I think choice B is in fact the closest, since the only word from the original string which is completely different is BROWN. FOX and COW have the O in common, and the rest of the words match completely, while choice C has two words which do not match at all.
At least as far as Levenshtein distance goes, choice B is indeed the correct one, even if it's not what's closest logically.
Fantastically thorough research, the kind of stuff I love to read. However I can't help but feel the much less impressive other answer "Use Levenshtein distance, here is the link." (paraphrased) is the better answer.
The long-winded answer goes into a huge amount of detail on really interesting pattern-matching problems... but not really the specific problem mentioned by the question asker.
I'm glad for a world with Stack Overflow and other similar resources, because I find the long answer fascinating but if I were asking the question I'd much prefer also having access to the shorter, succinct, and also-correct answer beneath it.
He did far more than use Levenshtein distance. He is matching phrases, not individual words.
He created different metrics based on Levenshtein distance, combined them using a weighted formula, and used an optimization algorithm to choose the weights. And he provides great visualizations of how it worked.
I've tried several approaches to fuzzy string matching, and I'm impressed with his results and approach. I've bookmarked it for future reference.
The main problem, though, is that he doesn't go into much detail on how specific his set of parameters is to the phrases he's matching against. I suspect that his optimization won't help much when doing a free match against any possible string, in which case the added detail doesn't do much for the person asking (at least not from what we can gather from the question).
I've had to do this a few times when building simple input normalization features. The trouble with scoring based on Levenshtein distance in that case is that it improperly penalizes phrases that are significantly different in length but similar in content.
For example, let's say I was searching a database of countries for "North Korea". In my list I have:
South Africa (LD: 6)
Congo (LD: 9)
Republic of Korea (LD: 11)
Democratic People's Republic of Korea (LD: 28)
The actual answer (DPRK) will be pushed far to the bottom based on a naive ranking that uses LD.
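For reference, those LD numbers are the standard dynamic-programming edit distance; a minimal, case-sensitive sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))  # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]  # cost of deleting the first i chars of a
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute, or free match
        prev = cur
    return prev[-1]

query = "North Korea"
for country in ["South Africa", "Congo", "Republic of Korea",
                "Democratic People's Republic of Korea"]:
    print(country, levenshtein(query, country))
```

This reproduces the 6 / 9 / 11 / 28 ranking above, with the correct answer dead last.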
The hack that's worked best for me? Rank based on the number of common two-character substrings between the source and target. It's simple, easy to build an index for, and has surprisingly great results. Its ideal use case is if you don't need to return a single absolute best result and can, say, present the three best to a human being and let them pick the match.
For the above search I'd get the following results using the two-character method:
Republic of Korea: 4
South Africa: 1
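A minimal sketch of that ranking as I read it: two-character substrings taken within each whitespace-separated word and compared as sets (the exact tokenization is my assumption, but it reproduces the counts above):

```python
def word_bigrams(s: str) -> set:
    """All two-character substrings, taken within each word."""
    return {w[i:i + 2] for w in s.split() for i in range(len(w) - 1)}

def shared_bigrams(query: str, candidate: str) -> int:
    """Number of distinct bigrams the two strings have in common."""
    return len(word_bigrams(query) & word_bigrams(candidate))

query = "North Korea"
for country in ["South Africa", "Congo", "Republic of Korea",
                "Democratic People's Republic of Korea"]:
    print(country, shared_bigrams(query, country))
```

Under this reading, DPRK also scores 4 ("Ko", "or", "re", "ea"), so it lands in the top matches shown to the user instead of dead last.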
The problem is that what you call the actual answer, DPRK, is subjective. By picking DPRK you need a more intelligent algorithm than fuzzy string matching, because you are looking for the right answer in terms of semantic knowledge of countries, not blind character comparisons. Fundamentally, the fact that DPRK is the right answer is an accident of history and English; there really is no right answer, and no algorithm can capture this insight without being told. By biasing in one direction you are sacrificing performance in some other, unknown direction. The trick is in balancing the bias against general ability. As always.
Knowing which algorithm to use for a problem is most important; sometimes padded Hamming will do, other times MinHash or Bitap or Dice's coefficient etc. are good. Each algorithm is a kind of balance between a metric and priorities: deciding what is subjectively most important in this problem. Your method, for example, is not as robust as Damerau-Levenshtein when it comes to analyzing DNA sequences. As for the best default string comparison, I have found Dice's coefficient to be ideal (your method appears to be based on similar ideas, but my hunch is it's less robust).
For your example, Dice's coefficient does as well as can be expected without a concept of countries:
[("dice", "South Africa", 0.125);
("dice", "Congo", 0.0);
("dice", "Republic of Korea", 0.44);
("dice", "Democratic People's Republic of Korea", 0.24)]
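The exact scores depend on tokenization details the output above doesn't show; a hedged sketch (lowercased, word-internal bigram sets) lands close to those numbers:

```python
def bigrams(s: str) -> set:
    """Distinct two-character substrings within each word, case-folded."""
    return {w[i:i + 2] for w in s.lower().split() for i in range(len(w) - 1)}

def dice(a: str, b: str) -> float:
    """Dice's coefficient: 2 * |shared bigrams| / (|bigrams a| + |bigrams b|)."""
    x, y = bigrams(a), bigrams(b)
    return 2 * len(x & y) / (len(x) + len(y)) if x or y else 0.0

query = "North Korea"
for country in ["South Africa", "Congo", "Republic of Korea",
                "Democratic People's Republic of Korea"]:
    print(country, round(dice(query, country), 3))
```

The ranking (Republic of Korea first, DPRK second, South Africa third, Congo last) matches the listing above either way.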
and by blind luck ("ort Korea"), longest common subsequence does even better:
[("lcs", "South Africa", 6.0);
("lcs", "Congo", 2.0);
("lcs", "Republic of Korea", 7.0);
("lcs", "Democratic People's Republic of Korea", 9.0)]
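Those LCS lengths follow from the standard dynamic-programming recurrence; a minimal sketch for checking them:

```python
def lcs_len(a: str, b: str) -> int:
    """Length of the longest common subsequence (not substring) of a and b."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            # extend a match, or carry the best of dropping a char from either side
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

query = "North Korea"
for country in ["South Africa", "Congo", "Republic of Korea",
                "Democratic People's Republic of Korea"]:
    print(country, lcs_len(query, country))
```

This reproduces the 6 / 2 / 7 / 9 scores, with DPRK correctly on top.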
Oh, I'm under no illusions that it's a universally applicable string matcher. What it seems to be useful for is normalizing human input in situations where someone can pick the best match. It handles typical normalization transforms well (word fragments, transpositions) and is very easy to index.
In my example, DPRK is the objectively best answer. There's pretty clearly a correct match in the value domain; the challenge is to help the user find it as quickly as possible.
At my last job, I was always boggled by the architecture team, who would lock themselves in a meeting for 8 hours and completely design some complex system with just their notepads and a whiteboard. I used to think they must be über geniuses who already know everything needed to come up with the optimal design and algorithms without needing to do any research. In hindsight, with a few more years under my belt, I now realize that what I mistook for knowledge and skill was mostly just hubris on the part of most of the team members (which also manifested itself in strong objections to any outside suggestion that there may be a better way to do things).
Challenging yourself to come up with better ways to do things is a great way to keep you learning, keep your work interesting, and improve the quality of your projects.
At my old job, the other architects and I would spend days researching on our own, then lock ourselves up for days in a glassed meeting room with nothing but a notepad and a whiteboard and lots of coffee as we debated all the various methods we'd discovered, pointed out the flaws in each other's approaches, and picked one person's research of choice to see how far we could push it and when/where/how it would fail, in an attempt to reach the best solution... then we'd do more research, get some developers working on prototypes, and repeat the whole thing all over again in two weeks' time.
All I'm saying is, it's not necessarily hubris. Too often you find half-baked or half-researched ideas that work in theory but not in the real world, or the opposite: things that work great but in practice could only ever be used in that particular application. You need to look into what transpired before and after those 8-hour meetings before coming to that conclusion (not saying you haven't).
In my experience, class distinction between "architects" and "developers" is a red flag. Actually, the term "architect" is itself a red flag. Come to think of it, even the term "developer" is kind of a red flag.
Oh they did. I was a developer/architect, did more developing than any of the developers and more architecting than any of the architects.
However, not all the "architects" (mind you, none of us actually used that term/title) were developers. We had the head of the QA in charge of testing and real world deployments on the team, as well as the lead interface designer. Their feedback helped to find a solution that would be both feasible and user friendly.
That's why I said the developers would work on the prototypes.
This article talks about using Levenshtein distance to compare the similarity of two words/sentences. Another great algorithm for fuzzy string matching is Metaphone (http://en.wikipedia.org/wiki/Metaphone): it basically compacts a word into its phonetic components and allows you to implement a Google-style "did you mean".
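Metaphone itself has a fairly long rule table, but its ancestor Soundex shows the phonetic-key idea in a few lines. This is a sketch of American Soundex, not of Metaphone:

```python
def soundex(word: str) -> str:
    """Collapse a word to a 4-character phonetic key (American Soundex)."""
    codes = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = word.upper()
    key = word[0]                    # the first letter is kept verbatim
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        if ch in "HW":               # H and W do not separate duplicate codes
            continue
        code = codes.get(ch, "")
        if code and code != prev:    # skip repeats of the same sound
            key += code
        prev = code                  # vowels (empty code) reset the previous sound
        if len(key) == 4:
            break
    return (key + "000")[:4]         # pad short keys with zeros

# "Robert" and "Rupert" collapse to the same key, so an index of
# key -> words gives a cheap "did you mean" candidate set.
print(soundex("Robert"), soundex("Rupert"))
```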
Those guys love their spreadsheets - I can't even describe.
No, describe :)
Seriously, this kind of folklore is valuable (I work on spreadsheet systems and feed on this stuff). What do they do with spreadsheets? How do you know that they love them? What sort of enhancement has to be done in VBA? Does each of them make their own independent spreadsheets? If not, what kind of sharing do they do? What are the most significant limits/problems you've observed in their use of spreadsheets?
I can provide some folklore. I am a professional programmer with a Unix pedigree working as a quant in banking, and even I succumbed to the charms of Excel (I still managed to evade VBA, though).
I'll start with sharing. The idea of sharing is mostly an .xls or .xlsx file on a mapped network drive and shouting across the floor, "Jane, please close the quarter results."
Sometimes they are shared on SharePoint which is only marginally better.
So, yes, sharing is a problem.
But still, most Excel use (at least among quants) is one-off, and the end result is a couple of charts that you put into a presentation, or some insights about the numbers and data.
Excel is very good for interactive calculations and what-if analysis, when you have a complicated chain of formulas and need the results immediately updated when you change the inputs.
Excel is very good for exploratory data analysis: again, charts are immediately updated when you change the data, and you do not have to remember a lot of API calls to make them in the first place, unlike, say, in Matlab or Matplotlib or ggplot.
The old Excel limitation on rows and columns was a problem, but not anymore. Lack of column formulas (where you would type, say, =B+C in column D's header instead of typing =B1+C1 and then control-dragging down thousands of rows; the guys from Resolver Systems implemented that, by the way). Ugly charts.
Also, not a problem with Excel per se, but rather with people who try to use Excel for things it was not designed for: apply it to large datasets and build complex systems on top of it.
I've spent the past week building a system that parses spreadsheets into XML and feeds them into a relational DB. Having relational integrity gives insight into relationships in the data in ways not possible with a VLOOKUP. Databases also provide concurrency, so that many analysts can work on the same data. And because of VBA, you can bind an Excel spreadsheet to a database and get the best of both worlds.
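A minimal sketch of the integrity point, using only the standard library (sqlite3 standing in for a full RDBMS; the table names and the typo'd key are made up): a foreign key rejects the kind of dangling reference that a VLOOKUP would only surface as an #N/A somewhere in a sheet.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE result (
    account TEXT NOT NULL REFERENCES account(name),
    quarter TEXT NOT NULL,
    revenue REAL)""")

conn.execute("INSERT INTO account VALUES ('ACME')")
conn.execute("INSERT INTO result VALUES ('ACME', 'Q1', 100.0)")   # valid reference

try:
    # a mistyped key from a parsed spreadsheet row is rejected at ingest time
    conn.execute("INSERT INTO result VALUES ('AMCE', 'Q1', 100.0)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The same constraint check happens no matter which analyst's connection attempts the insert, which is the concurrency win over a shared .xlsx.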
It proves the language choice is irrelevant; it's the solution that counts. Even VBA can be written clearly and work well. It's fun to watch hipsters flame each other over their adopted cool languages, though.