The group had a very effective technique for estimating depth from stereo camera systems called PMF. It was an edge-based algorithm that used a lot of 'high level' (at the time) data constructs: you run Canny, calibrate and align to the epipolar geometry, search and compare edges under various criteria, build lists of matches, and so on. It was accurate, which is what mattered, but slow. One smart guy in the group worked out that a very different approach (known as stretch correlation) could achieve a similar result and, on the right hardware, would run much faster. To cut a long story short, the algorithm got written (it worked well) and then someone went off and designed and built the specific hardware, including a chip optimized to run it. Frankly it was incredible it worked at all, considering this was a couple of PhD students, and our access to fabrication was, well, typically British shall we say, which led to yield problems. However, by the time it was all up and running we had moved a lot of development onto Linux on some nice shiny new Pentium PCs (133MHz!), which had little trouble running the original PMF algorithm almost as quickly as, and more stably than, the purpose-built chip.
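For readers unfamiliar with the general shape of edge-based stereo, here is a toy sketch (not the original PMF code, and not its actual constraint set) of what matching looks like after rectification: corresponding edges lie on the same scanline, so the problem reduces to pairing edge positions along each row. Edge attributes (here just gradient polarity and strength) prune candidates, while an ordering constraint and a disparity limit stand in for PMF's real disparity-gradient limit. All names, thresholds, and the greedy strategy are illustrative assumptions.

```python
def match_scanline(left_edges, right_edges, max_disp=20, strength_tol=0.3):
    """Greedily match edges along one rectified scanline.

    Each edge is a tuple (x_position, gradient_sign, strength).
    Returns a list of (left_x, right_x, disparity) triples.
    This is an illustrative sketch, not the PMF algorithm itself.
    """
    matches = []
    last_right = -1  # ordering constraint: matches must not cross
    for lx, lsign, lstr in sorted(left_edges):
        best = None
        for rx, rsign, rstr in right_edges:
            if rx <= last_right:
                continue                    # preserve left-to-right ordering
            d = lx - rx
            if not (0 <= d <= max_disp):
                continue                    # disparity limit
            if rsign != lsign:
                continue                    # edge polarity must agree
            if abs(lstr - rstr) / max(lstr, rstr) > strength_tol:
                continue                    # contrast must be similar
            if best is None or d < best[2]:
                best = (lx, rx, d)          # prefer the smallest disparity
        if best is not None:
            matches.append(best)
            last_right = best[1]
    return matches

left = [(10, +1, 0.9), (25, -1, 0.7), (40, +1, 0.5)]
right = [(4, +1, 0.85), (18, -1, 0.65), (36, +1, 0.5)]
print(match_scanline(left, right))  # three matched edge pairs with disparities
```

Even this crude version hints at why the real thing was slow on 1990s hardware: per-scanline candidate search over attributed edge lists, repeated across the whole image, is exactly the kind of pointer-chasing, list-heavy workload that early-90s machines struggled with and a 133MHz Pentium did not.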
Now in some respects this was all fine. We were a university whose purpose was to teach, and some smart people learnt how to design chips and develop algorithms; all good. However, the intention was also to create something of practical value, and in this it failed. It failed because people forgot Moore's law, and that today's optimized code is tomorrow's cranky old, cryptic ball of mud.
I apologise if I'm just reminiscing here, but I wanted to share.