
Digging into RISC-V and how I learn new things - ingve
https://blog.jessfraz.com/post/digging-into-risc-v-and-how-i-learn-new-things/
======
CalChris
> Alpha is a great example of an ISA being pretty obsolete outside of owning
> an old DEC computer

Actually, that isn't the case. The Chinese Sunway is based on the Alpha 21164
[1]. The Sunway TaihuLight is 3rd on the supercomputer list [2].

[1]
[https://en.wikipedia.org/wiki/Sunway_(processor)](https://en.wikipedia.org/wiki/Sunway_\(processor\))
[2]
[https://www.top500.org/lists/2018/11/](https://www.top500.org/lists/2018/11/)

(In fact, the Sunway SW26010 has moved on from the Alpha but is still heavily
inspired by it.)

As for RISC-V itself, I'd be interested in a comparison article between RISC-V
and Cray. I think Seymour Cray came up with most of the good RISC ideas long
before Patterson+Ditzel. Indeed, RISC-V is now basing its vector extension on ...
Cray. RISC-V as an open source ISA/implementation is one thing. But as a
collection of ideas it is another.

What ideas distinguish RISC-V from Cray? My personal take is that RISC-V is
RISC without the architectural warts of its predecessor RISCs. No register
windowing, no tagged integers, no branch delay slot, .... Less is good. But
what does RISC-V _add_ that distinguishes it from Cray?

~~~
bogomipz
It seems like there is quite a gap between Cray's RISC-style hardware, which I
believe was very successful, and the introduction of Sun SPARC hardware. Could
you or anyone else comment on the reason for this? Was it simply the arrival
of DARPA research funding at Berkeley/Stanford in this area?

~~~
CalChris
Berkeley RISC and Stanford MIPS are circa 1980. I think the enabling factor
for them was the availability of fab resources. The RISC-1 had 44,500
transistors. In 1980, Berkeley could get that fabbed for really not much
dinero. So it wasn't the funding _per se_. It was the reduction of fab costs
to something they could get funded. Same with the Stanford Geometry Engine.
Call it the Carver Mead revolution.

The CRAY-1 is 1975. The IBM 801 is 1980 (started in 1975). SPARC comes along
in 1987.

So I don't think it was the funding. I really think it was the availability of
cheap fabs for small tapeouts.

The Cray-1 was announced in 1975 and shipped in 1976. The IBM 801 took five
years. It reminds me of the IBM memo all over again:

[https://www.computerhistory.org/revolution/supercomputers/10...](https://www.computerhistory.org/revolution/supercomputers/10/33/62)

To which Cray responded,

    “It seems like Mr. Watson has answered his own question.”

Arguably, Berkeley and Stanford's research groups were a lot more like Cray's
team.

~~~
bogomipz
Interesting, yeah. I just came across the following items:

"why did Sun and most other minicomputer oem's turn to RISC?

the relatively low cost accessibility to semiconductor fabrication using gate
arrays. Chip manufacturers who needed high volumes to recoup their capital
investments of hundreds of millions of dollars (now billions) had figured out
a way of producing standard product families of chips called gate arrays,
which were identical except for the last few steps of the production process
which defined the interconnections. This enabled computer systems companies to
buy state of the art fabrication by effectively buying batches of wafers on a
time-share basis from a few thousand dollars upwards, instead of the millions
of dollars required previously."[1]

I'm somewhat unsure whether gate arrays themselves would have enabled this,
though, as I think of gate arrays as an alternative to a microprocessor. Might
you know?

Also, it seems that Sun had their prototypes built down at USC's Viterbi
School fab:

"MOSIS has prototyped more than 50,000 chip designs for businesses, government
agencies and universities, including the originals of many now widely used
commercial chips, such as those used in Sun Microsystems SPARC and SGI’s MIPS
systems."[2]

It also appears that the rise of CAD helped usher in the era of producing RISC
chips for these vendors:

"the deskilling of the chip design process by sophisticated computer aided
design (CAD) tools enabled computer systems designers to design, test and
simulate most of the design stages at their workstations, without needing
knowledge of the underlying manufacturing process parameters. The software
enabled designers to work at a logic function-block level without having to
worry too much about the realities of interconnections, datapaths and process
variations. This revolution in hardware logic design was akin to the invention
of high level programming languages which enabled programmers to write their
applications in terms of the problem, rather than needing to know exactly how
the computer was built or wired together."[1]

[1]
[http://www.sparcproductdirectory.com/history.html](http://www.sparcproductdirectory.com/history.html)
[2]
[https://viterbi.usc.edu/news/news/2005/2005_05_23_mosisstude...](https://viterbi.usc.edu/news/news/2005/2005_05_23_mosisstudent.htm)

~~~
CalChris
I really don't know much, but I do remember people who were deskilled out of
hand chip layout. It's like component-level or analog skills: yes, those are
skills, but then suddenly no one needs them. Initially, hand layout could say
_but we're better_, and that was true. But then the tools got better and
designs got a LOT bigger.

I think you can add this to the affordability argument. It was affordable for
a research team to do a RISC where previously only big companies could do
that.

~~~
bogomipz
Sure, that makes sense. Any idea how gate arrays contributed to the
affordability of CPUs? I didn't understand that part of the quote and links I
mentioned above.

~~~
CalChris
I don't think gate arrays had anything to do with this then. They were just
too small at the time; clearly that's not the case now. Someone else, anyone
really, would know more.

~~~
pvg
It seems like they did. The page has a couple of references to contemporary
papers that probably have more details.

[https://en.wikipedia.org/wiki/MB86900](https://en.wikipedia.org/wiki/MB86900)

------
analognoise
I'm actually more interested in having MIPS go open source than I am in
RISC-V.

It's complete, hardware is extant, and the tooling is mature. I'm really,
really hoping the MIPS Open initiative doesn't miss the boat.

~~~
ZirconiumX
So, I consider myself a MIPS fan, having studied the early (SGI-era) MIPS
revisions, but if I started from a clean sheet I would pick RISC-V over MIPS,
because while MIPS is a fairly clean architecture, RISC-V is cleaner.

For example, the MIPS R4000 had three microarchitectural branch delay slots,
of which the latter two had to be cancelled, and that must have been awkward
to design.

In contrast, RISC-V has no ISA branch delay slots, so it demands less from
lower-end implementations and leaves more freedom to higher-end out-of-order
CPUs.
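
To make the difference concrete, here is a toy sketch of the next-PC logic in
a simple two-stage pipeline (entirely my own illustration; `HAS_DELAY_SLOT`
and the signal names are made up, and no real core is this simple). The
instruction after a branch has already been fetched by the time the branch
resolves; the only question is whether the ISA lets it execute:

    // Toy next-PC logic, illustrative only.  With a delay slot (classic
    // MIPS) the already-fetched instruction executes anyway; without one
    // (RISC-V) the implementation must squash it.
    module next_pc #(parameter HAS_DELAY_SLOT = 0) (
        input             clk,
        input             branch_taken,
        input      [31:0] branch_target,
        output reg [31:0] pc,
        output reg        squash_fetched  // kill the already-fetched instruction
    );
        always @(posedge clk) begin
            if (branch_taken) begin
                pc             <= branch_target;
                squash_fetched <= (HAS_DELAY_SLOT == 0);
            end else begin
                pc             <= pc + 4;
                squash_fetched <= 1'b0;
            end
        end
    endmodule

A delay slot bakes that fetched-but-not-yet-executed instruction into the
architecture, which is exactly what gets painful once the pipeline no longer
has exactly one slot's worth of branch latency.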

~~~
analognoise
The MIPS Open initiative is targeting the 32- and 64-bit MIPS Instruction Set
Architecture, Release 6 - I'm unsure if the branch delay slot mess is present
in it.

Hopefully someone can chime in with the answer.

------
gumby
> were discussing how we learn new things

If you're interested in the meta process, I highly recommend Polya's "How to
Solve It", which is full of heuristics for problem solving that are often also
about understanding the thing you're trying to solve, i.e. learning!

~~~
person_of_color
How does this compare to the Coursera course Learning How to Learn?

~~~
gumby
It’s designed for problem solving. Plus you get to read a book instead of
having to watch a video.

------
ribalda
Unfortunately, 1000 EUR is still a lot for a toy :(. I am sure that the day
they can deliver a 400 EUR board, the number of developers will explode.

~~~
tverbeure
Here's how I got my feet wet with RISC-V, for next to nothing:

- I bought a cheap FPGA board. Pretty much anything is large enough for a
low-performance RISC-V CPU.

- I added a small design with a picorv32 CPU and experimented with that first
(see the sketch after this list).

- I designed my own little RISC-V CPU. I've designed digital hardware for
decades and learned all about CPUs in college, but it was still eye-opening to
actually design one myself.

- I write blog posts about all the hobby stuff that I do now. I never did
that before, and I'm not looking for a large audience or anything. But what I
discovered is that I learn way more about a subject when I write about it: a
major part of it is that I want to avoid public embarrassment about writing
something wrong. :-) By writing about it, I _have_ to understand things better
than when I don't.
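
To give an idea of the second step, the wiring is roughly like the sketch
below (a minimal illustration, not my actual design; it assumes picorv32.v
from the upstream picorv32 repository, and the module name and RAM size are
arbitrary):

    // Minimal picorv32 hookup: the core's native memory interface wired
    // to a small synchronous RAM.  Illustrative sketch only.
    module tiny_soc (
        input clk,
        input resetn
    );
        wire        trap, mem_valid, mem_instr;
        reg         mem_ready;
        wire [31:0] mem_addr, mem_wdata;
        wire [ 3:0] mem_wstrb;
        reg  [31:0] mem_rdata;

        picorv32 cpu (  // optional ports (PCPI, look-ahead, trace) left open
            .clk(clk), .resetn(resetn), .trap(trap),
            .mem_valid(mem_valid), .mem_instr(mem_instr), .mem_ready(mem_ready),
            .mem_addr(mem_addr), .mem_wdata(mem_wdata),
            .mem_wstrb(mem_wstrb), .mem_rdata(mem_rdata),
            .irq(32'b0)
        );

        reg [31:0] ram [0:1023];  // 4 KB; preload firmware with $readmemh
        always @(posedge clk) begin
            mem_ready <= mem_valid && !mem_ready;  // single-cycle handshake
            mem_rdata <= ram[mem_addr[11:2]];
            if (mem_valid && mem_wstrb[0]) ram[mem_addr[11:2]][ 7: 0] <= mem_wdata[ 7: 0];
            if (mem_valid && mem_wstrb[1]) ram[mem_addr[11:2]][15: 8] <= mem_wdata[15: 8];
            if (mem_valid && mem_wstrb[2]) ram[mem_addr[11:2]][23:16] <= mem_wdata[23:16];
            if (mem_valid && mem_wstrb[3]) ram[mem_addr[11:2]][31:24] <= mem_wdata[31:24];
        end
    endmodule

From there, adding a memory-mapped LED or UART register is a small decode on
mem_addr, which is plenty to start experimenting.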

This may not be the best route for those who don't already have a hardware
background, but there are various open source tools now which make it
accessible to hobbyists as well.

It's been a very fun journey.

~~~
bogomipz
>"I write blog posts about all the hobby stuff that I do now."

What is your blog? I would be interested in reading how you designed your own
RISC-V chip.

>"This may not be the best route for those who don't already have a hardware
background, but there are various open source tools now with make it
accessible to hobbyists as well."

Can you recommend some of those open source tools that would aid in learning
for people who don't already have hardware backgrounds?

~~~
tverbeure
Here you go: [http://tomverbeure.github.io](http://tomverbeure.github.io)

This one is specifically about the RISC-V core:
[https://tomverbeure.github.io/risc-v/2018/11/19/A-Bug-Free-RISC-V-Core-without-Simulation.html](https://tomverbeure.github.io/risc-v/2018/11/19/A-Bug-Free-RISC-V-Core-without-Simulation.html)

> Can you recommend some of those open source tools that would aid in learning
> for people who don't already have hardware backgrounds?

I haven't played with them myself, but LiteX and FuseSoC are things to look
into.

There is now also a fully open source tool suite to go from RTL (Verilog) all
the way to a compiled bitstream for Lattice iCE40 and ECP5 FPGAs. That's not
necessarily for beginners, though.
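
For the iCE40 parts, that flow looks roughly like this (Yosys for synthesis,
nextpnr for place and route, Project IceStorm's icepack/iceprog for bitstream
packing and flashing; top.v, pins.pcf and the --hx8k device flag are
placeholders for your own project):

    yosys -p "synth_ice40 -top top -json top.json" top.v
    nextpnr-ice40 --hx8k --json top.json --pcf pins.pcf --asc top.asc
    icepack top.asc top.bin
    iceprog top.bin

The ECP5 flow has the same shape, using nextpnr-ecp5 and Project Trellis's
ecppack instead.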

------
mshockwave
Traditionally, GPUs are not classified as VLIW.

~~~
monocasa
GPUs were very much VLIW machines for a while there. The other DSPs that
they're hinting at are commonly VLIW machines too.

~~~
bogomipz
Was there some inflection point at which GPUs stopped being VLIW machines? I'd
be curious to hear when and why that was. Thanks.

~~~
mastax
[https://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/2](https://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/2)

