
> Zero hits for "hardware".

Are there algorithm courses that take into account how hardware affects algorithms? For example, with databases you sometimes have to implement theoretically inefficient algorithms that are faster in practice (mostly because they use sequential access).
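One classic concrete instance of "theoretically worse but faster via sequential access" (my own illustration, not from any particular course): linear search vs. binary search on a small sorted array. A minimal Python sketch; the actual speed difference only shows up in compiled code, but the access patterns are the point:

```python
import bisect

def linear_search(arr, target):
    # O(n), but scans memory sequentially: predictable branches and
    # hardware prefetching make this fast on small arrays
    for i, v in enumerate(arr):
        if v >= target:
            return i
    return len(arr)

def binary_search(arr, target):
    # O(log n), but each probe jumps to a data-dependent address,
    # defeating prefetch and (for small n) branch prediction
    return bisect.bisect_left(arr, target)

data = list(range(0, 100, 2))
assert linear_search(data, 31) == binary_search(data, 31) == 16
```

Both return the same insertion point; below some array size the "worse" linear scan typically wins on real CPUs.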




The problem with accounting for hardware is: which hardware do you account for? If you really want to get into the performance-optimization weeds, you end up relying on e.g. specific characteristics of a vendor's CPU. The Mechanical Sympathy blog had posts that did this, and the problem is that it doesn't generalize. Hardware changes (e.g. SSDs have different performance characteristics than spinning disks), and you can even design hardware to efficiently implement particular algorithms.

That said, there are areas within algorithms that do look at optimizing for typical hardware. E.g. cache-oblivious algorithms are designed to take advantage of the caches found in modern CPUs without depending on any particular cache size.
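The standard trick in cache-oblivious algorithms is recursive subdivision: keep halving the problem until, at some depth, the working set fits in each level of the cache hierarchy, without the code ever naming a cache size. A sketch of a cache-oblivious matrix transpose in Python (in CPython the interpreter overhead swamps cache effects; the recursion structure, not the speed, is what this illustrates):

```python
def transpose(src, dst, r0, c0, rows, cols):
    # Cache-oblivious transpose: recursively halve the larger dimension.
    # No cache parameters anywhere -- at some recursion depth the tiles
    # fit in every cache level automatically.
    if rows <= 8 and cols <= 8:          # small tile: copy directly
        for r in range(r0, r0 + rows):
            for c in range(c0, c0 + cols):
                dst[c][r] = src[r][c]
    elif rows >= cols:                   # split the taller dimension
        h = rows // 2
        transpose(src, dst, r0, c0, h, cols)
        transpose(src, dst, r0 + h, c0, rows - h, cols)
    else:                                # split the wider dimension
        h = cols // 2
        transpose(src, dst, r0, c0, rows, h)
        transpose(src, dst, r0, c0 + h, rows, cols - h)

n = 32
src = [[r * n + c for c in range(n)] for r in range(n)]
dst = [[0] * n for _ in range(n)]
transpose(src, dst, 0, 0, n, n)
assert dst == [list(row) for row in zip(*src)]
```

The base-case tile size (8 here) is arbitrary; the analysis works for any constant, which is exactly what "oblivious" means.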


Rather than insisting on things that generalize with no additional work, it may be useful to use a certain modern architecture as a target for learning techniques and then rely on the wetware to generalize.


Thinking about it more, Daniel Lemire publishes very applied algorithmic stuff: https://lemire.me/en/#publications

A lot of his work is in the journal "Experimental Algorithmics", so I guess there are a substantial number of people working in this way. I don't know this field very well, and I don't know if there are general principles that have been extracted from their work.


This Advanced Data Structures course [1], while not accounting for any particular hardware, covers some interesting cache-oblivious algorithms (i.e. designed to make the best use of your cache no matter the cache sizes). Is that the type of work you are thinking about?

[1] https://courses.csail.mit.edu/6.851/fall17/lectures/


This area is less mature than traditional algorithms work. Cache-friendliness and minimizing data-dependency chains are a huge deal for hyper-efficient modern algorithms, but that hasn't been true for very long, so there is a less rich history of ideas to pull from.
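To make "data-dependency chains" concrete, here's the textbook transformation (my example): splitting a reduction across independent accumulators so consecutive adds no longer depend on each other. The speedup only materializes in compiled code, where the CPU can keep several adds in flight and the compiler can vectorize; this Python sketch just shows the shape of the rewrite:

```python
def sum_serial(xs):
    # one accumulator: add k depends on add k-1,
    # a serial chain the CPU cannot overlap
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_4way(xs):
    # four independent accumulators: four chains run in parallel,
    # then a short reduction combines them at the end
    a = b = c = d = 0.0
    n = len(xs) - len(xs) % 4
    for i in range(0, n, 4):
        a += xs[i]
        b += xs[i + 1]
        c += xs[i + 2]
        d += xs[i + 3]
    return a + b + c + d + sum_serial(xs[n:])

xs = [float(i) for i in range(101)]
assert sum_4way(xs) == sum_serial(xs) == 5050.0
```

(For non-associative float data the two versions can differ in the last bits, which is why compilers won't do this without `-ffast-math`-style flags.)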

This course is also more about solving problems in the first place rather than implementation efficiency.


Usually that lives in more applied courses, I guess partly because there is less useful theoretical treatment of it, and the usually-taught tools don't really fit. "A cache miss is 1000x slower" is just a constant factor, after all ;)



