

Going Nowhere Really Fast, or How Computers Only Come in Two Speeds (2010) - mwcampbell
http://www.loper-os.org/?p=300

======
mwcampbell
This article isn't new, but it's been on my mind lately. And whereas Stanislav
is primarily interested in inventing a new Lisp machine from scratch, I'd like
to understand the actual reasons why desktop computers have become noticeably
slow despite ever-increasing hardware speeds.

~~~
gvb
As always, there are many reasons.

1\. While peak rates for specific operations are three or four orders of
magnitude higher than on the old computers, the underlying operations are
only small multiples faster. When the computer salesmen of the world quote
numbers, they quote the most optimistic numbers possible. MIPS == Misleading
Information Provided by Salesmen... a definition that goes back 30 years or
more and still applies today.

For instance, when a CPU is rated at 3GHz, that means it can issue 3x10^9
NOPs a second, but a mispredicted branch will stall the pipeline for tens to
hundreds of (3GHz) clock cycles: tens if the branch target is in L1 cache,
hundreds if the target is in SDRAM in a closed page, and excruciatingly long
if the target has to be swapped in from disk.
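
A quick way to feel this on a desktop is the classic sorted-vs-unsorted
micro-benchmark. This is a minimal sketch of my own (array size and
iteration count are arbitrary), showing the same loop running several times
slower once the predictor can no longer guess the branch:

    /* Hypothetical micro-benchmark: identical work, but random data defeats
     * the branch predictor while sorted data makes the branch predictable. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 20)

    static long sum_big(const int *data, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            if (data[i] >= 128)   /* taken ~half the time, at random */
                sum += data[i];
        return sum;
    }

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        for (size_t i = 0; i < N; i++)
            data[i] = rand() % 256;

        long s1 = 0, s2 = 0;
        clock_t t0 = clock();
        for (int r = 0; r < 100; r++) s1 += sum_big(data, N);  /* mispredicts */
        clock_t t1 = clock();

        qsort(data, N, sizeof *data, cmp_int);
        clock_t t2 = clock();
        for (int r = 0; r < 100; r++) s2 += sum_big(data, N);  /* predicts */
        clock_t t3 = clock();

        printf("unsorted %.2fs, sorted %.2fs (sums %ld %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t3 - t2) / CLOCKS_PER_SEC, s1, s2);
        free(data);
        return 0;
    }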

Another example is (DDR3) SDRAM: it is rated at a phenomenal number of GB
per second, but only because (a) the GB/sec rating assumes perfectly
sequential streaming and (b) the bus is many bytes wide. If you look at how
long it takes to get the first byte of data from SDRAM on an initial
(non-streaming) transaction, it is not that much faster than the old PC-AT
(8MHz clock).
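
Here is a minimal sketch of my own (sizes assumed) that makes the
bandwidth/latency split visible: a sequential pass the prefetcher can
stream, versus a dependent pointer chase that pays close to the full
first-access latency on every single load:

    /* Hypothetical demo: same array, same number of loads, wildly different
     * speed, because the chase serializes every memory access. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 22)   /* 32 MB of size_t: far larger than cache */

    /* Small xorshift PRNG; rand() may only reach 32767 on some platforms. */
    static size_t prng(void) {
        static unsigned long long s = 88172645463325252ULL;
        s ^= s << 13; s ^= s >> 7; s ^= s << 17;
        return (size_t)s;
    }

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        for (size_t i = 0; i < N; i++) next[i] = i;

        /* Sattolo's shuffle: guarantees one random cycle through all slots. */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = prng() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        clock_t t0 = clock();
        size_t sum = 0;
        for (size_t i = 0; i < N; i++) sum += next[i];   /* streams */
        clock_t t1 = clock();

        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];      /* chases */
        clock_t t2 = clock();

        printf("sequential %.2fs, chase %.2fs (%zu %zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum, p);
        free(next);
        return 0;
    }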

Disk drives are similar: rotational speeds have not increased significantly.
The actuators move faster and the data transport is faster, but at 10,000
RPM one revolution takes 60/10,000 s = 6mSec, so if it takes 6mSec worst
case for a bit to show up under the read head, it hardly matters whether
getting that byte to the CPU takes 1nSec or 1uSec.
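
The arithmetic is easy to check; a throwaway sketch (RPM figure taken from
above, transfer times assumed for illustration):

    /* Worst-case rotational latency: one full revolution of the platter. */
    #include <stdio.h>

    int main(void) {
        double rpm = 10000.0;
        double worst_ms = 1000.0 / (rpm / 60.0);   /* ms per revolution */
        printf("worst-case latency at %.0f RPM: %.1f ms\n", rpm, worst_ms);
        printf("vs a 1 us transfer: %.0fx longer\n", worst_ms * 1000.0);
        printf("vs a 1 ns transfer: %.0fx longer\n", worst_ms * 1e6);
        return 0;
    }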

2\. Computers are doing more, and that ain't necessarily a bad thing.

3\. Aggravated by #1 and the assumption that computers are essentially
infinitely fast, computer programmers have become sloppy, either through
carelessness or through the use of layered libraries (time to market,
y'know). An interpreted language cannot be faster than finely tuned assembly
language... except that it is practically impossible to do everything
expected of a computer today in perfectly optimized assembly and get it done
before the computer is obsolete (and the programmers go insane).
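
To see where an interpreter's cycles go, here is a hypothetical toy
bytecode VM (not modeled on any real interpreter): every "instruction"
costs a hard-to-predict dispatch branch plus operand decoding, where
compiled code would spend roughly one machine instruction:

    /* Toy stack VM: the switch is a branch taken once per opcode, exactly
     * the kind of branch point 1 above says stalls pipelines. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0;
        for (size_t pc = 0; ; ) {
            switch (code[pc++]) {               /* dispatch overhead */
            case OP_PUSH:  stack[sp++] = code[pc++]; break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* Computes and prints 2 + 3: five dispatches for one addition. */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }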

tl;dr: The speeds quoted by the computer salesmen assume a perfect world.
High-speed caches cannot hide all the imperfections of the real world.

Mel's blackjack program on an RPC-4000 will beat a 3GHz Xeon processor
running a sloppy program written in an interpreted language... but it will
only ever run on an RPC-4000 and it is impossible to "fix." Ref:
<http://catb.org/jargon/html/story-of-mel.html>

~~~
mwcampbell
> 2\. Computers are doing more, and that ain't necessarily a bad thing.

Good point. I wonder how many lean-and-mean GUI implementations have been
written with no regard for, say, accessibility for blind users via a screen
reader, or even internationalization.

I suppose the way to arrive at fast software is to set realistic performance
targets, then continuously measure performance and pour time into
optimizing. But on general-purpose computers, no single development team
controls enough of the technology stack to optimize it thoroughly. I wonder
whether even Apple could make a Mac go from cold boot to the Finder in 2
seconds or less, even if Steve Jobs himself had demanded it.
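
As a sketch of that measure-continuously idea (entirely hypothetical: the
budget, the function, and the workload are all made up), a build could wrap
its critical path in a timer and fail loudly when it exceeds its target:

    /* Enforce a performance budget so regressions fail in CI, not in reviews. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUDGET_MS 200.0   /* assumed target; the thread names no numbers */

    static void startup_critical_path(void) {
        /* stand-in for real work: load config, build the UI, etc. */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 50000000UL; i++) x += i;
    }

    int main(void) {
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        startup_critical_path();
        clock_gettime(CLOCK_MONOTONIC, &b);

        double ms = (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
        printf("startup: %.1f ms (budget %.0f ms)\n", ms, BUDGET_MS);
        if (ms > BUDGET_MS) {
            fprintf(stderr, "performance budget exceeded\n");
            return EXIT_FAILURE;
        }
        return 0;
    }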

