The key takeaway for algorithms research seems to be that "[w]e don't know how to build big networks that deliver lots of bandwidth". This is exactly what S. Borkar argued in his IPDPS'13 keynote [1]. An exa-scale cluster can't be cost-efficient unless its bisection bandwidth grows highly sublinearly with the cluster's computing power.
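To make the "highly sublinear" point concrete, here is a back-of-envelope sketch (my own illustration, not from Borkar's slides) assuming a 3D torus as the network: the bisection cut contains only about 2·p^(2/3) links, so per-node bisection bandwidth shrinks like p^(-1/3) as the machine grows.

```python
def torus_bisection_per_node(p, link_bw=1.0):
    """Bisection bandwidth per node for a p-node 3D torus (assumed cubic layout).

    The bisecting plane of a side x side x side torus crosses ~2 * side^2 links
    (including wrap-around links), so per-node bandwidth decays like p**(-1/3).
    """
    side = round(p ** (1 / 3))
    bisection_links = 2 * side * side      # links crossing the middle plane
    return bisection_links * link_bw / p   # per-node share of the cut

for p in (1_000, 64_000, 1_000_000):
    print(f"p = {p:>9}: bisection bandwidth per node ~ "
          f"{torus_bisection_per_node(p):.4f} link-bandwidths")
```

At a million nodes each node effectively sees only a few percent of one link's bandwidth across the bisection, which is why algorithms can't assume they can ship their whole local input across the machine.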
We need new algorithms that
- require communication volume and latency significantly sublinear in the local input size (ideally polylogarithmic; a toy sketch follows this list)
- don't depend on randomly distributed input data (most older work does)
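As a toy illustration of the kind of communication bound meant in the first point (my own sketch, not an algorithm from the sources above): a global reduction keeps all heavy work node-local and ships only O(1) words per node, finishing in O(log p) rounds, no matter how large the local input is.

```python
# Toy mpi4py sketch: `local_data` stands in for one node's (large) share of
# the input. Only a single scalar per rank crosses the network.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

local_data = np.random.rand(10_000_000)   # large local input, never shipped
local_sum = local_data.sum()              # all heavy work stays node-local

# One 8-byte value per rank is communicated; a tree/butterfly reduction
# completes in O(log p) rounds.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if comm.rank == 0:
    print("global mean:", global_sum / (local_data.size * comm.size))
```

Reductions are of course a solved case; the hard part is getting the same communication profile for irregular problems (sorting, graph algorithms, load balancing) without assuming the input is randomly distributed.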
It's really too bad that many in the theoretical computer science community think that distributed algorithms were solved in the 90s. They weren't.
[1] S. Borkar, keynote at IPDPS 2013. http://www.ipdps.org/ipdps2013/SBorkar_IPDPS_May_2013.pdf