
Algorithms vs. Moore’s Law - bootload
http://www.johndcook.com/blog/2015/12/08/algorithms-vs-moores-law
======
bootload
_" Proebsting’s Law: improvements to compiler technology double the
performance of typical programs every 18 years."_

Comment left by Peter Norvig; read the paper at
[https://news.ycombinator.com/item?id=10699136](https://news.ycombinator.com/item?id=10699136)
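
To make the gap concrete: over one 18-year window, Proebsting's Law gives a
single doubling, while Moore's Law (assuming the oft-quoted 18-month doubling
period) compounds twelve times. A back-of-the-envelope sketch in Python:

    years = 18
    compiler_gain = 2 ** (years / 18)    # Proebsting: doubles every 18 years
    hardware_gain = 2 ** (years / 1.5)   # Moore: doubles every ~18 months
    print(compiler_gain, hardware_gain)  # 2.0 vs 4096.0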

------
joaorico
Here are two plots [1] that vividly show the effect mentioned in the article
[0]. They might even be the chart mentioned in the first paragraph.

[http://imgur.com/wD4XFvD](http://imgur.com/wD4XFvD)

[0] 7th comment in the article [edit: actually still awaiting approval..]

[1] “Graduate Education in Computational Science and Engineering” (p. 168)
SIAM Rev., 43(1), 163–177.
[http://epubs.siam.org/doi/abs/10.1137/S0036144500379745](http://epubs.siam.org/doi/abs/10.1137/S0036144500379745)

~~~
shas3
Multigrid methods are of limited applicability. Out of the box, they work on
linear systems with special structure. Conceivably you can extend them to all
linear systems, but sparsity of the matrix is a necessary condition.

~~~
joaorico
Yes, this is precisely the point the author of the article makes in the
comments:

"coca: That’s actually a good point. Radix sort is analogous to what people
have done with linear algebra. For dense matrices, you’re essentially stuck
with Gaussian elimination. But most matrices encountered in applications have
some exploitable special structure like sparsity. That’s where the real
progress has happened in linear algebra."
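
A minimal SciPy sketch of that dense-vs-sparse gap (the tridiagonal system
below is just an illustrative example of the classic special structure sparse
solvers exploit):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    # 1-D Poisson-style tridiagonal system: the kind of exploitable
    # structure the quote is talking about.
    n = 100_000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    x = spsolve(A, b)  # exploits sparsity; roughly linear work here
    # Dense Gaussian elimination, np.linalg.solve(A.toarray(), b),
    # would cost O(n^3) time and O(n^2) memory -- hopeless at this n.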

------
DennisP
Given that linear algebra and optimization algorithms are pretty relevant to
AI, this makes me wonder whether strong AI might creep up on us a lot faster
than Moore's Law would suggest. The main impediment might be storage capacity.

------
blt
Well, that was short.

Growth in memory size means we throw larger and larger problems at our
computers, so asymptotic behavior becomes more and more important.

At the same time, CPUs become more superscalar, pipelines get deeper, and the
CPU/memory gap increases. Constant factors become more important. The "each
instruction and memory access is O(1)" assumption of classical algorithm
analysis has never been less valid.
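
A minimal NumPy sketch of that constant-factor effect (the array size is
arbitrary): both loops do the same O(n^2) work, yet the strided traversal
loses badly to the contiguous one purely because of cache behavior.

    import numpy as np, time

    n = 4000
    a = np.zeros((n, n))  # ~128 MB of doubles, row-major layout

    t0 = time.perf_counter()
    for i in range(n):
        a[i, :].sum()     # row slices: contiguous, cache-friendly
    t1 = time.perf_counter()
    for j in range(n):
        a[:, j].sum()     # column slices: strided, cache-hostile
    t2 = time.perf_counter()
    print(f"rows {t1 - t0:.3f}s  cols {t2 - t1:.3f}s")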

Good engineers need to care a lot about both algorithms and hardware.

------
dheera
On a related note, it also makes me wonder how blazingly fast computers would
be _if_ we implemented today's applications in raw, gory C/C++, as we used to,
instead of JavaScript and Java. Aside from sharing, Google Drive Spreadsheets
has about the same feature set as Microsoft Excel 2.0, yet it consumes _at
least_ a hundred times more resources in terms of CPU cycles and RAM. Every
single line on the spreadsheet grid, every button, is behind the scenes a
massive CSS and JS undertaking.

I distinctly remember that when MATLAB went from version 6 to version 7, the
entire UI suddenly became slower by at least a factor of 10, because they
redid it in Java. With the Java version, there was even a perceptible lag
between pressing the mouse button down on a UI button and the corresponding
shading appearing on it.

Most of my Android apps typically take 1-2 seconds to load, again, courtesy of
Java. I imagine the equivalent-feature-set apps of 10 years ago, if they were
adapted and cross-compiled from their C code, would probably load in <0.1 s,
faster than human perception.

Firefox's user interface uses XML to define its layout. For heaven's sake,
something that doesn't ever really change requires loading a bloated XML
parser each time?

We've come a long way in terms of connectivity and security, but I feel like,
as Moore's Law has given us exponentially more powerful CPUs, we've also built
exponentially more bloat into our apps, so nothing has actually gotten that
much faster.

I'm not disputing the merits of modern computing, but sometimes I just wonder
what it would be like if we had a system today whose UI response is
consistently faster than human perception.

~~~
mtdewcmu
I think most applications could be sped up significantly by rewriting them
more efficiently in their own language. You're rarely maxing out the
possibilities of the language you're using, because doing so requires time,
skill, and a different development philosophy than you typically find in
corporate software.

>> Most of my Android apps typically take 1-2 seconds to load, again, courtesy
of Java.

Java is kind of a special case, because the time spent re-compiling the code
at every launch is pure waste. Java code can be rewritten almost 1:1 in C++
and see a performance benefit, at least for things like initial startup. iOS
prefers Objective-C, so you could get some sense of the potential benefit of
native code by playing with iOS apps. It's probably not as big as you might
think, because there are plenty of other bottlenecks that have nothing to do
with language.

Firefox's use of XML for layout shouldn't make much difference, because XML
parsing isn't really that slow and there are much bigger bottlenecks involved
in rendering a GUI.

