
Hennessy and Patterson win Turing Award - kercker
https://www.acm.org/media-center/2018/march/turing-award-2017
======
tlb
Younger programmers may not appreciate how big a revolution the quantitative
approach to computer design was. It now seems like an obvious idea: since all
machines are Turing-complete, instruction sets and architecture should be
optimized for performance across some representative workloads. Since the
instruction set is one of the variables, the tasks must be specified in a
high-level language and the compiler becomes part of the system being
optimized.

Before that, instruction sets were driven more by aesthetics and marketing
than performance. That sold chips in a world where people wrote assembly code
-- instructions were added like language features. Thus instructions like
REPNZ SCAS (i.e., strlen), which was sweet if you were writing string-handling
code in assembler.
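For readers who never wrote that era's string code: REPNZ SCAS repeatedly
compares the byte at [(E)DI] against AL, decrementing CX, until it finds a
match or CX reaches zero. A rough C emulation of the classic strlen idiom
(hypothetical function name; assumes the usual setup of CX = -1 and AL = 0)
might look like:

```c
#include <stddef.h>

/* Emulates the REPNE SCASB strlen idiom: scan the string for the byte
   in AL (here the NUL terminator) while CX is nonzero, decrementing CX
   on every compare. */
static size_t strlen_scasb(const char *s)
{
    size_t rcx = (size_t)-1;      /* CX preloaded with the max count */
    const unsigned char *rdi = (const unsigned char *)s;
    unsigned char al = 0;         /* byte we are scanning for */

    while (rcx != 0) {
        rcx--;
        if (*rdi++ == al)
            break;                /* ZF set: terminator found */
    }
    /* classic epilogue: NOT CX; DEC CX gives the length */
    return ~rcx - 1;
}
```

The one-instruction loop body is what made it sweet in hand-written
assembler; the register bookkeeping above is roughly what the CPU did
internally.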

H&P must have been in the queue for the Turing award since the mid-90s. There
seems to be a long backlog.

~~~
userbinator
_Before that, instruction sets were driven more by aesthetics and marketing
than performance. That sold chips in a world where people wrote assembly code
-- instructions were added like language features. Thus instructions like
REPNZ SCAS (i.e., strlen), which was sweet if you were writing string-handling
code in assembler._

REP MOVS is still the fastest and most efficient way to do memcpy() on most
Intel hardware, and likewise REP STOS for memset(), because they work on
entire cache lines at once.
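On x86-64 with GCC or Clang, the idiom is a one-line inline-asm loop; a
hedged sketch (hypothetical function name, with a portable fallback for
other targets) might be:

```c
#include <stddef.h>
#include <string.h>

/* memcpy via REP MOVSB. On Intel cores advertising the ERMSB feature
   ("enhanced REP MOVSB"), the microcode moves cache-line-sized chunks
   internally, so the single instruction competes with vector loops. */
static void *memcpy_repmovsb(void *dst, const void *src, size_t n)
{
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    void *d = dst;                /* RDI is advanced by the copy */
    __asm__ volatile("rep movsb"
                     : "+D"(d), "+S"(src), "+c"(n)
                     :
                     : "memory");
#else
    memcpy(dst, src, n);          /* portable fallback */
#endif
    return dst;
}
```

Whether it actually wins over a tuned SIMD memcpy depends on the
microarchitecture and the copy size, so treat this as an illustration
rather than a drop-in replacement.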

It's worth noting that, if it weren't for a brief period during the late
70s/early 80s when memory was _faster_ than the processor, RISC as we know
it might never have been developed; the CISCs of the time spent most of
their cycles decoding and executing instructions, leaving memory idle, and
that idle bandwidth is what the RISC concept exploited: by trading fetch
bandwidth for faster instruction decoding and execution, they could gain
performance. The situation was very different afterward, with continually
increasing memory latencies and, later, multiple cores all needing to be
fed an instruction stream, putting RISC's higher fetch-bandwidth demands
at a disadvantage. Now, with a cache miss costing tens to hundreds of
cycles or more, spending a few extra clock cycles decoding more complex
instructions to avoid a miss looks like the better option, and dedicated
hardware for things like AES and SHA-256 obviously can't be beaten by a
"pure RISC". It was almost "game over" for "RISC = performance" in the
early 90s, when Intel figured out how to decode CISC instructions quickly
and in parallel with the P5/P6 and rapidly overtook the MIPS, SPARC, and
Alpha chips, which needed more cache, higher clock speeds, and more power
to achieve comparable performance.

Certainly, it makes one wonder: had that brief moment in time not existed,
and had memory always been significantly slower than the processor, would
CPU designs have taken a completely different direction?

~~~
jabl
> REP MOVS is still the fastest and most efficient way to do memcpy() on most
> Intel hardware, and likewise REP STOS for memset()

Would that be the original REP MOVS/STOS, or the "fast strings" (P6)? Or the
"this time we really mean it fast strings" (Ivy Bridge)? Or the "honestly,
just trust us this time fast strings" (Ice Lake)?

> Now, with a cache miss taking tens to hundreds of cycles or more, it seems a
> few extra clock cycles decoding more complex instructions to avoid that is
> the better option

You can have both, actually. E.g. RISC-V with the compressed instruction
extension achieves higher code density than x86-64.
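The density comes from freely mixing 16-bit and 32-bit encodings in one
stream; the spec distinguishes them by the low bits of the first 16-bit
parcel. A minimal sketch of that length rule (ignoring the longer formats
the spec also reserves):

```c
#include <stdint.h>

/* RISC-V instruction length from the first 16-bit parcel:
   low two bits != 0b11 -> 16-bit compressed (C extension) encoding,
   low two bits == 0b11 -> standard 32-bit encoding. */
static int rv_insn_len(uint16_t parcel)
{
    return ((parcel & 0x3) != 0x3) ? 2 : 4;
}
```

For example, `c.nop` encodes as 0x0001 (2 bytes), while the 32-bit `nop`
(`addi x0, x0, 0` = 0x00000013) starts with the parcel 0x0013.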

> and dedicated hardware for things like AES and SHA256 obviously can't be
> beat by a "pure RISC"

Well, if you're really looking for minimal instruction sets, RISC is way
bloated; IIRC single-instruction computers can be Turing complete. Obviously
they are not very useful in practice. I think a better approximation of the
RISC philosophy is "death to microcode", that is, the instruction set should
match the hardware that the chip has. So if your chip has dedicated HW for
some crypto or hashing algorithm, I wouldn't consider it "un-RISCy" to expose
that in the ISA.

------
Maven911
Who else studied with their book "Computer Architecture", and what are your
thoughts on it?

I enjoyed that it was a simpler read than a lot of the circuits-type books
that are part of an EE/CE curriculum, but I always felt there was a lack of
"hard science"/physics in the book.

Perhaps that was just not a topic they felt fit the vision of what this
book is supposed to be, and it was likely the better decision to abstract
that part away for readability.

~~~
lvoudour
I read both _Computer Architecture_ and _Computer Organization and Design_ in
college and I really enjoyed them.

I would recommend COaD to any beginner who wants to learn some basic concepts
of computer design (and if working with FPGAs why not build one).

CA deals with more advanced concepts but doesn't overwhelm you with math and
circuit theory (as you noted), so it's a natural progression from COaD. I
think something more advanced and "hard science" should be part of a
postgraduate curriculum.

~~~
cbHXBY1D
I strongly prefer _Computer Organization and Design_ over their _Computer
Architecture_ book. I had to read both while at Berkeley. I've considered
picking up the new RISC-V edition of the former but I can't justify spending
that kind of money when RISC-V is so similar to MIPS. Could be a fun bit of
history if RISC-V ever takes over :-)

~~~
cakebrewery
I read these two as well: _Computer Organization and Design_ and _Computer
Architecture_, in my 2nd- and 3rd-level computer architecture courses
respectively. Glad to see Hennessy and Patterson get the Turing Award; I
always had a good hunch about _Computer Architecture_ despite not knowing
as much back then.

------
chrisaycock
Two years ago, David Patterson interviewed John Hennessy for _Communications
of the ACM_. They discuss the changing job landscape, MOOCs, and the future of
education.

[https://doi.org/10.1145/2880222](https://doi.org/10.1145/2880222)

~~~
seanf
Thanks for the link. Full interview at
[https://vimeo.com/146145543](https://vimeo.com/146145543)

------
zeroxfe
Really happy about this! Their book "Computer Architecture" (along with
Tanenbaum's book) played a huge role in my development as an engineer.

------
utopcell
Interestingly, four Turing Award winners are now affiliated with Google: Vint
Cerf, Ken Thompson, John Hennessy and Dave Patterson.

~~~
seanmcdirmid
MSR has a few also (Lamport, Lampson, Thacker, Gray), though a couple of
them are now deceased. Supposedly there is a fifth one, but I can't figure
out who (they must have gotten the award before entering MSR).

~~~
sidereal
MSR also has Tony Hoare.

------
adultSwim
Hats off to Patterson

------
commandlinefan
In a just world, they'd also be as well-known as Bill Gates and Steve Jobs.

~~~
RcouF1uZ4gsC
John Hennessy was the president of Stanford and is now the Executive
Chairman of Alphabet. He has been, and is being, well rewarded for his
work.

