
AI chip startup Wave to buy Silicon Valley old-timer MIPS - mindcrime
https://www.cnet.com/news/ai-chip-startup-wave-buys-silicon-valley-old-timer-mips/
======
bem94
Context: MIPS and Wave are owned (mostly) by the same VC firm, Tallwood. The
head of Tallwood, Dado Bannato, sits on the board and is the interim CEO of
both Wave and MIPS.

~~~
nereye
FWIW, it's Dado Banatao. E.g. see
[https://en.wikipedia.org/wiki/Dado_Banatao](https://en.wikipedia.org/wiki/Dado_Banatao).

~~~
bem94
Ah, Thanks.

------
walterbell
Where can someone buy a device with a virtualization-enabled MIPS CPU?
[https://www.mips.com/products/architectures/ase/virtualizati...](https://www.mips.com/products/architectures/ase/virtualization/)

From
[https://www.eetimes.com/author.asp?section_id=36&doc_id=1332...](https://www.eetimes.com/author.asp?section_id=36&doc_id=1332351)

 _"... if a company seeks to manage MIPS correctly, what’s the strategy? “The
value is in restoring the IP roadmap, the MIPS brand, and coming at the market
in a way that doesn’t put MIPS head-to-head with ARM, but concentrates on the
parts of the market where ARM is weak and there is good opportunity to
license,” one executive said. Citing Mediatek’s recent decision to use MIPS
instead of ARM in its modem, he stressed, “ARM is in the modem by default, not
for any particular suitability or technological advantage. The apps processor
is where ARM has the real advantage, as all of Google Android is built on it,
so you avoid this area completely.”_

From [https://fuse.wikichip.org/news/1373/wave-to-acquire-
mips/](https://fuse.wikichip.org/news/1373/wave-to-acquire-mips/)

 _" [Wave] ... raised $56.7M in funding led by Tallwood Venture Capital – the
current owners of MIPS ... While Wave Computing holds over 60 patents, MIPS
holds 100s ... in March, Wave announced that they will be integrating the
64-bit MIPS IP cores into their future DPUs. The integration is done in order
to allow existing MIPS-based RTOS to handle the control and management
functionalities of the chip. Conversely, Wave might be seeing an opportunity
by injecting their IPs into the MIPS existing ecosystem. MIPS software and
tools are quite mature and can certainly accelerate Wave’s development."_

~~~
puzzle
Probably nowhere. :-( MIPS was, almost twenty years ago, perhaps the first
architecture that had 64-bit capable chips you could power with two AAA
batteries (along with RAM, a screen and the rest of the system). Talk about
squandered opportunities! For some reason, it was all downhill after the
PS2/PSP.

~~~
zwieback
That reason is ARM: they crushed everyone with their licensing business model,
even though they weren't the best at anything.

~~~
puzzle
When I said "for some reason", I was trying not to go into speculation. But
yeah, ARM definitely attacked them in the embedded market from the lower end.
And Motorola, transitioning from 68k to PPC, from the high end (PPC was very
popular for laser printers, for example), later joined by IBM (Cell/Xenon). We
all know what happened to SGI in the server/workstation market.

~~~
pjmlp
SGI also has itself to blame, after deciding to become just another hardware
vendor selling Linux/Windows systems.

~~~
bogomipz
I'm not understanding your comment. Weren't they always just that - a hardware
vendor? I think they had some image editing stuff that only ran on their boxes
but didn't they still essentially make their money on the high end hardware
needed to run those programs?

~~~
pjmlp
All commercial UNIX vendors were hardware vendors.

What made them special is how they combined their hardware with their own
version of UNIX.

Hence why even trying to write portable code across UNIXes wasn't as easy as
many believe.

Irix had IrisGL, Iris Inventor, XFS and a few other goodies, before they
became the Open variants.

SGI was also a big sponsor of the C++ STL work.

~~~
trasz
Twenty different vendors combined their hardware with their own version of
Unix. There was nothing special about that.

~~~
pjmlp
Sure there was, the vertical integration and overall user experience.

~~~
walterbell
With the end of Moore's Law, are we returning to vertically integrated
hardware+software?

Domain Specific Architecture talk by Hennessy & Patterson, 2018 talk for 2017
Turing Award:
[http://iscaconf.org/isca2018/docs/HennessyPattersonTuringLec...](http://iscaconf.org/isca2018/docs/HennessyPattersonTuringLectureISCA4June2018.pdf)

~~~
pjmlp
I will watch the talk later.

However I can say yes, we are returning to those days.

Thanks to razor-thin margins, you now see OEMs selling non-upgradable phones,
laptops, tablets, and 2-in-1s, leaving the old desktop PC to a niche market
that shrinks every year, mostly targeted at gamers.

OEMs have also grown envious of Apple, the surviving icon of those days; they
all want to be the Apple of their market and sell experiences.

------
zackmorris
I'm cautiously optimistic about this as a counter to specialized and
proprietary hardware. While other industries are moving towards domain-
specific hardware for 3D rendering, AI and physics, I'm hopeful that the
people at MIPS have the insight to know that these are all evolutionary dead
ends.

The real future is going to be in reprogrammable hardware, something akin to
FPGAs but with cores as building blocks instead of gates. We need massively
parallel chips with an order of magnitude more cores than we have today (at
least 256 as a baseline), with better interconnect and smarter routing that
can do something like content-addressable memory.
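
"Content-addressable memory" here means lookup by value rather than by
address: present a word and get back every location that holds it. As a toy
sketch (a software model of the interface only, nothing like the actual
parallel comparator hardware):

```python
# Toy model of content-addressable memory (CAM): instead of asking
# "what is stored at address X?", a search key is compared against every
# entry and the matching address(es) come back. Real CAMs do all the
# comparisons in hardware in one cycle; this sketch only mimics the API.

class ToyCAM:
    def __init__(self):
        self.cells = {}               # address -> stored word

    def write(self, address, word):
        self.cells[address] = word

    def search(self, word):
        """Return every address whose cell matches `word`."""
        return sorted(a for a, w in self.cells.items() if w == word)

cam = ToyCAM()
cam.write(0, 0xCAFE)
cam.write(1, 0xBEEF)
cam.write(2, 0xCAFE)
print(cam.search(0xCAFE))   # [0, 2]
```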

MIPS is an ideal candidate for this, as it's a textbook example of the minimum
number of transistors needed to implement a pipelined CPU. They should be able
to fit hundreds or even thousands of cores in the number of transistors wasted
on cache today in mainstream processors from Intel and AMD. Assuming they
don't just kill MIPS to do their own proprietary AI DSP...

~~~
jcranmer
> The real future is going to be in reprogrammable hardware, something akin to
> FPGAs but with cores as building block instead of gates. We need massively
> parallel chips with an order of magnitude more cores than we have today (at
> least 256 as a baseline), with better interconnect and smarter routing that
> can do something like content-addressable memory.

There is a massive trade-off. We don't have the power budget to make 100s of
cores, so you need to start ripping stuff out of cores to make the power costs
tolerable. In turn, that means that your single-threaded performance is going
to go down. If your code is massively parallelized, you can easily cover those
costs. But for high scaling, you start having problems with the communication
costs. The problem isn't fitting the ALUs into the chip, it's filling the ALUs
with data.
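
The "filling the ALUs with data" point is what the roofline model captures: a
kernel only benefits from more ALUs if it does enough arithmetic per byte
moved. A back-of-the-envelope sketch (all numbers invented for illustration,
not from any real chip):

```python
# Roofline model: attainable throughput is capped by either peak compute
# or by memory bandwidth times arithmetic intensity (FLOPs per byte).
# Every number below is illustrative only.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Performance ceiling for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# A hypothetical 256-core chip of simple cores:
peak = 256 * 2.0          # 256 cores x 2 GFLOP/s each = 512 GFLOP/s peak
bandwidth = 50.0          # 50 GB/s of shared memory bandwidth

# Streaming kernel (e.g. vector add), ~0.125 FLOPs/byte: memory-bound,
# the 256 cores deliver a tiny fraction of their peak.
print(attainable_gflops(peak, bandwidth, 0.125))

# Well-blocked dense matmul, ~16 FLOPs/byte: compute-bound, full peak.
print(attainable_gflops(peak, bandwidth, 16.0))
```

The gap between those two numbers is why simply packing in more ALUs doesn't
help a bandwidth-starved workload.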

The most likely future is that we move to a more heterogeneous world: you have
a few big cores for handling unparallelizable code (and handling things like
interrupts) intermeshed with various kinds of accelerators. One of them could
be a very high-throughput (at cost of higher latency) LINPACK-style
accelerator kind of like GPUs (but not living off of a slower PCI bus). You
could attach an FPGA as well. But the very-high core count processors just
haven't worked well (ask Intel how well Xeon Phi worked out).

~~~
zackmorris
I tend to disagree. It's true there's a tradeoff but there's also an
opportunity cost in staying on the road we're on now (which has no future, at
least for CPUs).

CPU performance increases since 2000 have mainly come from longer pipelines,
bigger branch prediction logic and larger/deeper caches. Those are all
extremely important for single-threaded performance but are not much use for
embarrassingly parallel (low branch) computation like in MATLAB/Octave, R,
shaders, AI, physics and so on.
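
"Embarrassingly parallel" just means the work splits into independent chunks
with no communication between them. A minimal stdlib sketch of the pattern
(threads here only illustrate the shape of it; real CPU speedup in CPython
needs processes or many hardware cores, since the GIL serializes pure-Python
arithmetic):

```python
# Each chunk of the sum is computed independently, so N workers could in
# principle give close to N-fold speedup, minus scheduling overhead.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    step = n // workers
    chunks = [(k * step, (k + 1) * step) for k in range(workers)]
    chunks[-1] = (chunks[-1][0], n)        # last chunk absorbs the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum_of_squares, chunks))

assert parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000))
```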

You're right about Xeon Phi and make an absolutely valid point about the
trouble of filling ALUs with data. It was probably destined to fail because
most mainstream languages just fall down terribly trying to do
multiprocessing. Only a handful like Erlang and Go get it right, but at the
time just weren't on the radar.

I think the world is treading water with the current CPU/GPU divide and
heterogeneous processors like the Cell. Few developers fully utilized the
Synergistic Processing Elements (SPEs). These paradigms provide powerful
hardware but leave the intricacies to the developer, or sometimes the compiler
if they're lucky.

That's the part that I'm getting tired of. Companies tout the power of their
various technologies without really addressing the hard problems in computer
science. I want a paradigm that presents itself as a single unified CPU and
memory. I want the compiler and operating system to optimize my code and
handle moving my data with copy-on-write. If I had something like this and a
decent Actor Model language, then it could trivially emulate things like
shaders and we could get rid of all of these domain specific languages.
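
The Actor Model idea mentioned above, in a toy sketch: each actor owns a
private mailbox and private state, and the only way to affect it is to send
it a message (threads and queues here are stand-ins for the hundreds of small
cores the comment imagines):

```python
# Minimal actor: a mailbox (queue) plus private state, processed one
# message at a time by the actor's own thread. No shared mutable state.
import queue
import threading

class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                      # private state, never shared
        self.done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                self.done.set()
                return
            self.count += msg               # handle one message at a time

actor = CounterActor()
for i in range(1, 101):
    actor.send(i)
actor.send("stop")
actor.done.wait()
print(actor.count)   # 5050
```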

What I'm getting at is that with a few hundred short-pipeline cores from the
90s (say MIPS or PowerPC 600 series) and a content-addressable memory,
compute-bound problems become tractable. Why bother with rasterization when
you can write a ray tracer in a page of code and get better results? Or
imagine being able to run not just one neural net but many in parallel, or let
genetic algorithms run for 2 or 3 orders of magnitude more generations. I want
raw computing power, not a subset of it filtered through someone else's notion
of what I might need.
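
For what it's worth, the "ray tracer in a page of code" claim is roughly
true, at least for the trivial case. A self-contained sketch in that spirit
(one sphere, one directional light, Lambertian shading; scene and camera are
arbitrary choices, and this says nothing about performance on real scenes):

```python
# A deliberately tiny ray tracer: trace one ray per pixel against a
# single sphere and shade by the cosine of the angle to the light.
# Pure Python, stdlib only.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Return distance t of the nearest intersection in front of the
    origin, or None on a miss. Assumes `direction` is unit length."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Return rows of per-pixel brightness in [0, 1]."""
    camera = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_dir = normalize((1.0, 1.0, 1.0))   # direction toward the light
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            x = 2 * (i + 0.5) / width - 1
            y = 1 - 2 * (j + 0.5) / height
            ray = normalize((x, y, -1.0))
            t = hit_sphere(camera, ray, sphere_center, sphere_radius)
            if t is None:
                row.append(0.0)              # background
            else:
                point = tuple(o + t * d for o, d in zip(camera, ray))
                normal = normalize(tuple(p - c
                                         for p, c in zip(point, sphere_center)))
                row.append(max(0.0, dot(normal, light_dir)))  # Lambert term
        image.append(row)
    return image

img = render(16, 16)
for row in img:
    print("".join("#" if p > 0.5 else "+" if p > 0 else "." for p in row))
```

The catch, as the reply below this comment points out, is that this toy keeps
the whole scene in a few local variables; real scenes hit shared memory hard.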

~~~
TomVDB
> Why bother with rasterization when you can write a ray tracer in a page of
> code and get better results?

You do realize that ray tracing is ultimately a problem whose performance is
determined by the external memory system?

Unless you’re thinking about each mini CPU rendering a bunch of reflective
balls where the full scene can be stored locally in each CPU.

Because once you don’t, you’ll have to find ways to cover the access latency
to the shared memory pool, and before you know it your super simple CPU will
look suspiciously like the shader core of today’s GPUs.

Your other examples have similar limitations.

The truth is that there are not many problems that can efficiently be mapped
to an architecture with tons of small CPUs, some local RAM, _and nothing
else_.

------
beat
Wait, MIPS still exists and has worthwhile technologies?

I keep expecting this to be a retro branding exercise on something
fundamentally different, like a Fiat 500 or a modern VW Beetle.

~~~
logfromblammo
I don't see them as competitive with ARM and RISC-V.

The last I heard of MIPS was some Chinese-fabbed chip that used the
unencumbered set of MIPS opcodes--probably MIPS II--and a mostly-GNU OS to
build the only computer on Earth (at the time) completely free of patent or
licensing restrictions. It sounded like a stunt then, and I haven't heard
anything about it since, so it probably was.

~~~
cbHXBY1D
MIPS and RISC-V are very, very similar. That makes sense once you realize
they started from MIPS and added opcode space and features from there.
Essentially, RISC-V is a modern do-over of MIPS, just with variable-length
instructions, floating point, and new opcode space and extensions.

------
kinsomo
MIPS must've been in pretty bad shape if they were acquisition material for a
startup. You'd think the acquisition would've been the other way around.

~~~
mtgx
MIPS was in bad shape when Imagination acquired them. Then, not long after,
Imagination's GPU business was starting to go downhill, so they couldn't focus
as much as they may have wanted on MIPS CPUs. Also MIPS failed to gain
traction with Android, just as x86 did. And then Apple dumped Imagination's
GPUs, which was basically a death sentence for the company, so they had to get
rid of MIPS.

Wave looks like a pretty cool startup, but I'm not sure how helpful MIPS
architecture would be to them. The biggest benefit is probably not getting
sued by Intel or other large players.

~~~
feb
Interestingly, it looks like Intel is making MIPS SoCs too, according to
patches sent for the Linux kernel:
[https://www.spinics.net/lists/linux-clk/msg27348.html](https://www.spinics.net/lists/linux-clk/msg27348.html).

~~~
wmf
Before speculation spirals out of control, that's a networking chip from
Lantiq which was acquired by Intel.

------
bogomipz
This is the original article on the Silicon Graphics acquisition of MIPS
referenced in the article. An interesting historical read:

[https://www.nytimes.com/1992/03/13/business/silicon-
graphics...](https://www.nytimes.com/1992/03/13/business/silicon-graphics-to-
buy-mips-for-406.1-million.html)

------
oneshot908
It would be a really interesting acquisition if only Wave Computing had
something interesting (2x Word2Vec is boring IMO, c'mon, you can do better
than that with custom HW, can't you?). NVidia sucks at low power computing.
It's one of the few sweet spots to disrupt their mostly unassailable
ecosystem. All IMO of course.

------
zwieback
Wow, end of an era! Or not, maybe MIPS will emerge again after the DNN
meltdown, who knows.

------
amq
AI-powered Wi-Fi routers incoming. /s

