Suppose that my "computing device" were a machine designed to build and program more copies of itself. This device would "compute" a function across n variables in time log n by building n-1 copies of itself and then parallelizing the computation across them. With sufficient resources, this machine could compute the function for any n in finite time.
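
For what it's worth, that log n figure implicitly assumes the function decomposes into an associative pairwise step, so the copies can combine partial results in a tree. A minimal sketch of that scheme, using a pool of worker processes in place of self-replicating machines (the `combine` step and the use of addition are illustrative assumptions, not anything from the article):

    # Tree-style parallel reduction: each round halves the number of
    # operands, so n inputs take about log2(n) rounds of parallel work.
    from concurrent.futures import ProcessPoolExecutor

    def combine(a, b):
        # Stand-in for one pairwise step of F; assumed associative.
        return a + b

    def parallel_reduce(values):
        with ProcessPoolExecutor() as pool:
            while len(values) > 1:
                pairs = list(zip(values[0::2], values[1::2]))
                reduced = list(pool.map(combine, *zip(*pairs)))
                if len(values) % 2:        # carry an odd leftover forward
                    reduced.append(values[-1])
                values = reduced
        return values[0]

    if __name__ == "__main__":
        print(parallel_reduce(list(range(16))))   # -> 120

Whether F actually decomposes this way is exactly what the reply below questions.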

All the author has done is posit an ever-increasing volume of data, such that its growth outpaces the speed of the computation. Positing an ever-increasing computational capacity seems like a reasonable solution to this problem.



Your time complexity of log n appears from nowhere. It is entirely possible that the complexity of computing the original function F is worse than polynomial in the number of variables. In that case a polynomial (or linear) increase in the number of machines can only reduce the running time by a polynomial (or linear) factor, leaving the overall complexity worse than polynomial (e.g. exponential or super-exponential). I've made a comment further down that posits an idea similar to yours, though with a bit more precision.
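
To spell out the bound: if evaluating F over n variables takes, say, 2^n sequential steps (an illustrative assumption, not a figure from the article) and you have p(n) = n^k machines, perfect parallelism can do no better than dividing the work:

    \[
      T_{\text{parallel}}(n) \;\ge\; \frac{T(n)}{p(n)} = \frac{2^{n}}{n^{k}},
      \qquad
      \frac{2^{n}}{n^{k}} \to \infty \ \text{ as } n \to \infty \ \text{ for any fixed } k.
    \]

So adding machines buys at most the factor p(n); on its own it cannot move an exponential problem into polynomial time.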



