
Microprocessor Design (2017) - kercker
https://en.wikibooks.org/wiki/Microprocessor_Design
======
PyComfy
haven't played it but there is MHRD on steam which is a game about designing a
CPU.

[http://store.steampowered.com/app/576030/MHRD/](http://store.steampowered.com/app/576030/MHRD/)

------
xattt
The metaphors go from extremely simple to complex in the matter of a chapter.
I’m not quite sure who the target audience is.

~~~
Cthulhu_
Did you read the blurb?

> [...] students in computer science or computer or electrical engineering who
> are in the third or fourth years of an undergraduate degree [...] The
> reader should have prior knowledge in Digital Circuits and possibly some
> background in Semiconductors [...]

~~~
xattt
Then why does the author use a truck analogy at the start? A student with
that type of background would most likely be able to understand the concepts
more abstractly.

------
nerdponx
Knowing next to nothing about microprocessor design, I suppose the Mill
architecture would also be vulnerable?

~~~
Symmetry
I'm genuinely not sure. Mills certainly speculate instructions just like the
next guy, but they handle memory access faults differently. Instead of
stopping the world on a bad read, they just tag the data as bad. With
speculative execution and stop-the-world faults, you have to suppress
stopping the world until the speculation is resolved, or you risk
interrupting good programs just because your branch predictor made a mistake.
But there's no reason for a Mill to suppress tagging. Loading from invalid
data should just short-circuit to producing more invalid data instead of
actually performing a load, so I don't see why there should be data
exfiltration from arbitrary memory locations as in Spectre. And Mills
certainly aren't intensely optimizing everything for every last drop of
performance in the way Intel did to suffer from Meltdown. I wouldn't be
surprised if there were some sort of Spectre-esque speculative attack you
could run against current Mill designs to get some data out, but probably not
from arbitrary memory locations?

But all the above is pure speculation.
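The tag-the-data-as-bad idea above (the Mill literature calls such tagged values NaRs, "Not a Result") can be sketched roughly in Python. All names here are invented for illustration; this is a toy model of the dataflow semantics being argued, not real hardware behavior:

```python
# Toy sketch of Mill-style "tag the data as bad" semantics: a bad load
# yields a poisoned NaR value, and operations on a NaR short-circuit to
# another NaR rather than faulting or performing further real loads.

class NaR:
    """'Not a Result': a poisoned value that propagates instead of faulting."""
    def __repr__(self):
        return "NaR"

def load(memory, addr):
    """A load from an invalid address produces a NaR instead of trapping."""
    if addr not in memory:
        return NaR()
    return memory[addr]

def add(a, b):
    """Any operation consuming a NaR produces a NaR, so invalid data
    never turns into a dependent real load during speculation."""
    if isinstance(a, NaR) or isinstance(b, NaR):
        return NaR()
    return a + b

memory = {0x10: 7}
good = add(load(memory, 0x10), 1)   # ordinary value
bad = add(load(memory, 0xBEEF), 1)  # NaR propagates; no stop-the-world fault
```

The point of the sketch: a fault only needs to surface if a NaR is eventually retired to an architectural effect, which is why no special fault suppression is needed during speculation.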

------
dschuetz
Please, let's not design and implement a hundred different architectures now.
It's hard enough to keep track of Intel bugs alone. The whole von Neumann
system architecture is way too old for today's problems. We need heterogeneous
systems. There needs to be a discussion of fundamental issues: which
architectures are fit for which purposes.

~~~
imglorp
I think it _is_ time to talk about new directions, because general-purpose
compute is still _STUCK HARD_ in a rut begun in 1971: 4004 -> 8008 -> 8080 ->
x86, and here we remain. Requirements and technology have both changed vastly
since then. We've done some minor widening, into RISC, GPUs, and VLIW, and
we're only now looking into many cores, now that atomic physics is starting
to push back on minimum feature sizes.

Two areas that have not been remotely explored are asynchronous design and
massively parallel (think Connection Machine). In both cases, the compilers
and tools were not up to the task, but perhaps we're ready to give both
another try.

~~~
dschuetz
I think there have been discussions, but everyone being on this
one-big-complex-CPU-doing-everything train for years made companies pursue
financial goals rather than security. You cannot have a complex _and_ secure
CPU. That would be too expensive, so it's not a question of will-it-work but
of will-it-sell.

~~~
imglorp
Yeah, security is still not a profit feature.

------
phoe-krk
I find it highly ironic that this link appeared on Hacker News literally
hours after the Meltdown and Spectre vulnerabilities were officially
announced and described.

~~~
rubayeet
Not a coincidence. Articles with the words CPU or Microprocessor in the title
are getting a lot of attention on HN (including a 2016 article on CPU bugs).

~~~
phoe-krk
Honestly, I find buzz like this to be in extremely bad taste. It is trivially
easy to post links like that on HN and wait for the "ha, Intel, this is how
you should have been doing things!!!1" comments.

If this bug has really been present in all chips since 1995, then where have
all these people been for the last 22 years? If they're so competent at CPU
design, why haven't we seen literally tens of Intel-competitor startups pop
up and succeed?

~~~
CthulhuOvermind
I can comment on this.

Hardware is a long and tedious process, and CPU design is an extra level of
difficulty on top.

In my field, it is routine to hire people in their late 40s/50s to make CPUs.
I'm in my late 20s and am viewed as a weird specimen. In my career I've made
an ARMv8 CPU, and now I work on RV64.

It takes a couple of years to get someone from university into working shape,
and that's just to teach them how to do one or two tasks in their field. In
verification, that usually means coverage specification and test writing.
Combine this with an average CPU project duration of around five years, and
it's quickly evident that you won't beat the incumbents with a startup.
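For readers outside the field, the two verification tasks named above can be illustrated with a deliberately tiny toy. Real CPU verification is done with tools like SystemVerilog/UVM; this Python sketch (all names invented) only shows the shape of the work against a trivial "design under test," an 8-bit adder:

```python
# Invented illustration of CPU-verification work: a coverage
# specification (which scenarios must be exercised) plus random test
# writing (stimulus checked against a reference model).
import random

def adder8(a, b):
    """Design under test: 8-bit add returning (sum, carry_out)."""
    total = a + b
    return total & 0xFF, total > 0xFF

# Coverage specification: bins we insist on seeing hit before sign-off.
coverage = {"no_carry": False, "carry_out": False, "wrap_to_zero": False}

# Test writing: random stimulus compared against a reference model.
random.seed(0)
for _ in range(2000):
    a, b = random.randrange(256), random.randrange(256)
    s, carry = adder8(a, b)
    assert s == (a + b) % 256          # functional check vs. reference
    coverage["carry_out" if carry else "no_carry"] = True
    if carry and s == 0:               # exactly 256: wraps to zero
        coverage["wrap_to_zero"] = True

# Any bins still unhit are "coverage holes" to close with directed tests.
holes = [name for name, hit in coverage.items() if not hit]
```

In practice the coverage model and the tests are each far larger than the design's datapath description, which is part of why the ramp-up takes years.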

In my office today, in a team of 10, no one other than me is under 40.

~~~
xemdetia
Something that always strikes me is how insanely complex it is to even
experiment as an outsider to the university path. From my understanding, a
relatively simple ASIC on a 10+ year old process node is still going to run
around $10k USD for a shared-die prototype from a limited set of providers.
Compare that to software, component-level systems design, or even mechanical
devices, where you can get a local machine shop involved or at least do some
of the work yourself.

Yes, FPGAs exist, but they aren't even relevant when you're talking about the
skillset and engineering discipline needed to make the FPGA itself and to
actually produce the physical thing.

