

Can an O(n!) algorithm be faster than an O(1) algorithm? Yes, of course it can. - ivoras
http://ivoras.net/blog/tree/2014/Jul-on-constants-in-algorithmic-complexity-and-comparing-apples-to-oranges.html

======
TheLoneWolfling
There's a better method.

Create a multihashmap of <sorted word -> list of words> for each word in your
input dictionary. This is O(n), but only has to be done once.

Then just sort your input word and look it up in the multihashmap. O(1) lookup
time (well, to be pedantic, O(|word|*log(|word|)) for the comparison sort, or
just O(|word|) if you use a non-comparison sort such as counting sort).
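
Concretely, the O(|word|) version could key on letter counts instead of
sorting. A sketch, assuming lowercase a-z input (letter_key is just an
illustrative name, not something from the post):

        # Counting-sort-style key: count each letter instead of sorting.
        # Assumes lowercase a-z input; letter_key is an illustrative helper.
        def letter_key(word):
            counts = [0] * 26
            for ch in word:
                counts[ord(ch) - ord('a')] += 1
            return tuple(counts)  # hashable; identical for all anagrams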

So, roughly:

    
    
        from collections import defaultdict

        def makeWordMap():
            # Key each word by its sorted letters; all anagrams of a word
            # share the same key, so they collect into the same set.
            mm = defaultdict(set)
            for line in open('words.txt'):  # no need for .readlines()
                word = line.strip().lower()
                # tuple() because a list isn't hashable and can't be a key
                mm[tuple(sorted(word))].add(word)
            return mm

        def anagrams(w, wordMap=makeWordMap()):
            # The default argument is evaluated once, at definition time,
            # so the map is built exactly once.
            return list(wordMap[tuple(sorted(w.lower()))])
    

(Please ignore both my (ab)use of a default argument to build the map only
once and the tuple() dance needed because lists aren't hashable and so can't
be dict keys.)
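
For what it's worth, one way around the tuple dance is to join the sorted
letters into a string key instead. A rough sketch of the same map (the path
'words.txt' assumed, as above):

        from collections import defaultdict

        def makeWordMapStr(path='words.txt'):
            # ''.join(sorted(word)) is a hashable string key, so no
            # tuple() conversion is needed.
            mm = defaultdict(set)
            with open(path) as f:
                for line in f:
                    word = line.strip().lower()
                    mm[''.join(sorted(word))].add(word)
            return mm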

This method ends up being ~17x faster on my machine than the ana5 given in
the post for the input "spare", even when using the SIL wordlist (109,582
words) and changing ana5 to precompute the word list.
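
(A measurement like that might look roughly like the following; ana5 is the
post's function and isn't reproduced here, and numbers will of course vary by
machine:)

        import timeit

        word_map = makeWordMap()  # build the map once, outside the timing
        # Time only the lookup, the part claimed to be roughly O(|word|)
        t = timeit.timeit(lambda: anagrams('spare', word_map), number=10000)
        print(t / 10000, 'seconds per lookup')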

(On a sidenote, the largest collection of anagramming words in the SIL
wordlist is ['spear', 'reaps', 'apres', 'asper', 'parse', 'apers', 'rapes',
'pares', 'pears', 'spare'], with length 10)
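
(That group falls straight out of the map, something like:)

        # The biggest anagram family is just the largest value set
        biggest = max(makeWordMap().values(), key=len)
        print(len(biggest), sorted(biggest))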

------
cauterized
It's naive to consider your second algorithm O(1); you're just ignoring the
complexity abstracted by Counter.
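
Building a Counter walks every character of its argument, so even a
Counter-based check is linear in word length, something like:

        from collections import Counter

        # Counter(w) iterates over all of w, so this comparison costs
        # O(|w1| + |w2|) per pair, not O(1).
        def is_anagram(w1, w2):
            return Counter(w1.lower()) == Counter(w2.lower())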

