This is my code (using mighty, our Python-based cluster service):
word_generator = mighty.word_generator("wordlist", (mighty.APPEND_NUMERALS, 5))  # candidate words, with numerals appended
phrase_generator = mighty.word_concatenate(word_generator, ' ')  # join the words into space-separated phrases
job = mighty.match_to_hash('PROVIDEDHASH', mighty.SHA1, phrase_generator.iterate_all)  # SHA1 each phrase against the target
job.priority = mighty.CRITICAL
EDIT: added CRITICAL as the priority, or it takes longer :)
EDIT: OK, so the code is a bit on the wrong side of the tracks. It wouldn't take long to refactor it :)
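EDIT: for anyone who doesn't have mighty, here's roughly what the pipeline above does in plain Python. mighty isn't public, so the meaning of (APPEND_NUMERALS, 5), the phrase length, and every name below are my guesses — a sketch, not the real API:

import hashlib
import itertools

def word_generator(wordlist, max_digits=5):
    # guessing APPEND_NUMERALS: each word as-is, plus the word with a
    # number of up to max_digits digits stuck on the end
    for word in wordlist:
        yield word
        for n in range(10 ** max_digits):
            yield word + str(n)

def phrase_generator(words, sep=' ', length=2):
    # stand-in for word_concatenate: every length-word combination,
    # joined with sep (the real phrase length is another guess)
    for combo in itertools.product(words, repeat=length):
        yield sep.join(combo)

def match_to_hash(target_hex, phrases):
    # hash every candidate and stop on an exact SHA1 match
    for phrase in phrases:
        if hashlib.sha1(phrase.encode()).hexdigest() == target_hex:
            return phrase
    return None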
Since your keyspace is much larger than the total hash space (2^160 ≈ 1.5 × 10^48), the best you can hope for is a match after trying about half the hash space, i.e. roughly 7 × 10^47 hashes. Squeezed into 5 minutes of runtime, that's about 2 × 10^45 hashes per second.
Bottom line: even if you can run this on 200,000 cores, each core has to generate about 10^40 hashes per second, and that seems impossible to me.
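For anyone who wants to poke at the arithmetic, it's a few lines of Python:

space = 2 ** 160                 # total SHA1 output space, about 1.5e48
tries = space // 2               # expected tries before a match, about 7e47
per_second = tries / (5 * 60)    # crammed into 5 minutes: about 2e45 hashes/s
per_core = per_second / 200_000  # spread over 200,000 cores: about 1e40 each
print(f"{per_second:.1e} hashes/s total, {per_core:.1e} per core")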
I can't iterate the whole lot, but I can cover substantially more of the keyspace in 5 minutes than most could in the 30 hours. I bet 5 minutes will easily get me damn close :)
You won't even cover a fraction of a percent of the keyspace before the sun burns out.
Also, (a) you're allowed to permute the case of letters, which explodes the search space even more; that's handy because (b) I presume the target SHA1 will come from words not on the word list, making an exact match unlikely (hence the score based on Hamming distance).
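A minimal sketch of what that scoring could look like (the names are mine, and I'm assuming you score on the bits of the digest):

import hashlib

def hamming_bits(a: bytes, b: bytes) -> int:
    # number of differing bits between two equal-length byte strings
    return sum(bin(x ^ y).count('1') for x, y in zip(a, b))

def score(phrase: str, target_digest: bytes) -> int:
    # lower is better; 0 means an exact SHA1 match
    return hamming_bits(hashlib.sha1(phrase.encode()).digest(), target_digest)

Keep in mind SHA1's avalanche effect: a small digest distance tells you nothing about how close the phrase is to the real input, so the score only ranks the candidates you happened to try.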