>>> from difflib import get_close_matches
>>> get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy'])
['apple', 'ape']
>>> import keyword
>>> get_close_matches('wheel', keyword.kwlist)
['while']
>>> get_close_matches('apple', keyword.kwlist)
[]
>>> get_close_matches('accept', keyword.kwlist)
['except']
At least as far as Levenshtein distance goes, choice B is indeed the correct one, even if it's not what's closest logically.
For example, let's say I was searching a database of countries for "North Korea". In my list I have:
South Africa (LD: 6)
Congo (LD: 9)
Republic of Korea (LD: 11)
Democratic People's Republic of Korea (LD: 28)
The actual answer (DPRK) will be pushed far to the bottom based on a naive ranking that uses LD.
The hack that's worked best for me? Rank based on the number of common two-character substrings between the source and target. It's simple, easy to build an index for, and has surprisingly great results. Its ideal use case is if you don't need to return a single absolute best result and can, say, present the three best to a human being and let them pick the match.
For the above search I'd get the following results using the two-character method:
Republic of Korea: 4
South Africa: 1
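A minimal sketch of that two-character-substring ranking (function names are mine; treating the bigrams as a per-word set, so they never cross spaces, is an assumption that happens to reproduce the counts above):

```python
def bigrams(s):
    """Distinct two-character substrings, taken within each word."""
    grams = set()
    for word in s.lower().split():
        for i in range(len(word) - 1):
            grams.add(word[i:i + 2])
    return grams

def shared_bigram_count(query, candidate):
    """Rank candidates by how many bigrams they share with the query."""
    return len(bigrams(query) & bigrams(candidate))

# shared_bigram_count("north korea", "Republic of Korea")  -> 4
# shared_bigram_count("north korea", "South Africa")       -> 1
```

The index the comment mentions would just be a map from each bigram to the candidates containing it, which is why this is cheap to query at scale.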
Knowing which algorithm to use for a problem is what matters most; sometimes padded Hamming will do, other times MinHash, Bitap, Dice's coefficient, etc. are good fits. Each algorithm is a kind of balance between a metric and priorities, deciding what is subjectively most important in the problem. Your method, for example, is not as robust as Damerau-Levenshtein when it comes to analyzing DNA sequences. As for the best default string comparison, I have found the Dice coefficient to be ideal (your method appears to be based on similar ideas, but my hunch is it's less robust).
For your example, Dice's coefficient does as well as can be expected without a concept of countries:
[("dice", "South Africa", 0.125);
("dice", "Congo", 0.0);
("dice", "Republic of Korea", 0.44);
("dice", "Democratic People's Republic of Korea", 0.24)]
[("lcs", "South Africa", 6.0);
("lcs", "Congo", 2.0);
("lcs", "Republic of Korea", 7.0);
("lcs", "Democratic People's Republic of Korea", 9.0)]
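For reference, here is a sketch of Dice's coefficient over word-level bigram sets (a common formulation; whether the numbers above used exactly this tokenization is my guess, though it does reproduce all four "dice" values):

```python
def bigrams(s):
    """Distinct two-character substrings within each word."""
    return {w[i:i + 2] for w in s.lower().split() for i in range(len(w) - 1)}

def dice(a, b):
    """Dice's coefficient: 2|A & B| / (|A| + |B|) over bigram sets."""
    x, y = bigrams(a), bigrams(b)
    return 2.0 * len(x & y) / (len(x) + len(y)) if (x or y) else 0.0

# dice("north korea", "Republic of Korea") -> 0.444...
# dice("north korea", "South Africa")      -> 0.125
```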
In my example, DPRK is the objectively best answer: there's pretty clearly a correct match in the value domain; the challenge is to help the user find it as quickly as possible.
Over 5-letter windows:
TEST: A 0.4
TEST: B 0.540540540541
TEST: C 0.692307692308
TEST: A 0.555555555556
TEST: B 0.714285714286
TEST: C 1.0
north korea:congo 0.0
north korea:democratic people's republic of korea 0.0526315789474
north korea:republic of korea 0.111111111111
north korea:south africa 0.0
Here is a gist of the code in Ruby if anybody wants to steal it. I stole it from somewhere else, and thus you have the history of programming.
where there is more discussion and links out to different string-similarity metrics.
I wonder how the weights would turn out if your algo were added as a term in his formula and run through the optimization, or even applied to his sample data standalone.
The long-winded answer goes into a huge amount of detail on really interesting pattern-matching problems... but not really the specific problem mentioned by the asker.
I'm glad for a world with Stack Overflow and other similar resources, because I find the long answer fascinating, but if I were asking the question I'd much prefer also having access to the shorter, succinct, and also-correct answer beneath it.
He created different metrics based on Levenshtein distance, combined them using a weighted formula, and used an optimization algorithm to choose the weights. He also provides great visualizations of how it worked.
I've tried several approaches to fuzzy string matching, and I'm impressed with his results and approach. I've bookmarked it for future reference.
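In that spirit, a toy version of the weighted-combination idea (this is not his code; the component metrics, with difflib ratios standing in for his Levenshtein-based ones, and the weights are all illustrative assumptions; he learned his weights by optimization):

```python
from difflib import SequenceMatcher

def ratio(a, b):
    """Similarity in [0, 1] from Python's stdlib matcher."""
    return SequenceMatcher(None, a, b).ratio()

def metrics(query, candidate):
    """A few hypothetical component similarities between query and candidate."""
    q, c = query.lower(), candidate.lower()
    return [
        ratio(q, c),                              # whole-string similarity
        ratio(" ".join(sorted(q.split())),        # word-order-insensitive
              " ".join(sorted(c.split()))),
        max((ratio(q, w) for w in c.split()), default=0.0),  # best single word
    ]

WEIGHTS = [0.5, 0.3, 0.2]  # illustrative; he tuned his on labeled sample data

def combined_score(query, candidate):
    """Weighted combination of the component metrics."""
    return sum(w * m for w, m in zip(WEIGHTS, metrics(query, candidate)))
```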
The deservedly highly ranked response at least addresses some interesting ways of defining "close" while skipping the most important, and difficult, case -- semantic content.
Frederick jumps cows
Frederick never jumps cows
Fred leaps bovines
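Edit distance makes the gap concrete: a textbook dynamic-programming Levenshtein (my sketch, not from the thread) rates the negated sentence as far closer to the original than the paraphrase that means the same thing:

```python
def levenshtein(a, b):
    """Classic row-by-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

base = "Frederick jumps cows"
d_negated = levenshtein(base, "Frederick never jumps cows")  # 6: insert "never "
d_paraphrase = levenshtein(base, "Fred leaps bovines")       # much larger
```

The opposite-meaning sentence wins under any pure edit-distance metric; only something with semantic knowledge could rank the paraphrase first.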
Challenging yourself to come up with better ways to do things is a great way to keep you learning, keep your work interesting, and improve the quality of your projects.
All I'm saying is, it's not necessarily hubris. Too often you find half-baked or half-researched ideas that work in theory but not in the real world, or the opposite: things that work great in practice but could only ever be used in that particular application. You need to look into what transpired before and after those 8-hour meetings before coming to that conclusion (not saying you haven't).
In my experience, class distinction between "architects" and "developers" is a red flag. Actually, the term "architect" is itself a red flag. Come to think of it, even the term "developer" is kind of a red flag.
A lot of red flags :)
However, not all of the "architects" (mind you, none of us actually used that term/title) were developers. We had the head of QA, in charge of testing and real-world deployments, on the team, as well as the lead interface designer. Their feedback helped us find a solution that would be both feasible and user-friendly.
That's why I said the developers would work on the prototypes.
What terms/titles did you actually use?
By the way, I didn't intend my "red flag" comment to imply that your particular setup didn't work well. There are lots of good variations. The sociology of software projects is fascinating.
C/C++ Team Lead, Web (C#/ASP) Team Lead, QA Team Lead, Design Team Lead, and The Boss (startup CEO).
Might as well be me! :-)
includes useful/relevant results like:
and from Wikipedia for a useful introduction:
No, describe :)
Seriously, this kind of folklore is valuable (I work on spreadsheet systems and feed on this stuff). What do they do with spreadsheets? How do you know that they love them? What sort of enhancement has to be done in VBA? Does each of them make their own independent spreadsheets? If not, what kind of sharing do they do? What are the most significant limits/problems you've observed in their use of spreadsheets?
I'll start with sharing. Sharing mostly means an .xls or .xlsx file on a mapped network drive and shouting across the floor, "Jane, please close the quarter results."
Sometimes they are shared on SharePoint, which is only marginally better.
So, yes, sharing is a problem.
But still, most Excel use (at least among quants) is one-off, and the end result is a couple of charts that you put into a presentation, or some insights about the numbers and data.
Excel is very good for interactive calculations and what-if analysis, when you have a complicated chain of formulas and need the results immediately updated as you change the inputs.
Excel is very good for exploratory data analysis: again, charts are immediately updated when you change the data, and you do not have to remember a lot of API to make them in the first place, unlike in, say, Matlab, matplotlib, or ggplot.
The old Excel limit on rows and columns was a problem, but not anymore. Lack of a column formula is another: you'd want to type, say, =B+C in column D's header instead of typing =B1+C1 and control-dragging it down thousands of rows (by the way, the guys from Resolver Systems implemented that). Ugly charts.
Also, some of this is not a problem with Excel per se, but rather with people who try to use Excel for things it was not designed for: applying it to large datasets and building complex systems on top of it.
And VBA has a surprisingly great development environment.
I think the negativity towards it arises from it being accessible to non-programmers. The VBA in the SO post is beautiful work, however.