I noted how, when they first started, they talked mostly about developing 'AI accelerators', and it sounded like they meant big, GPGPU-style chips to go head to head with Nvidia: thousands of small SIMD cores doing matrix multiplies, with fast memory and PCIe. Maybe something halfway between Cerebras size and Nvidia Hopper. A tall order, but something really needed.
Then at some point it feels like Jim got hooked on the idea of RISC-V everything, and they pivoted their messaging to these more CPU-like chips with a main RV64 core, 8-wide decode, state-of-the-art OoO execution, etc. That sounds more like a RISC-V competitor to AMD Zen than a competitor to an Nvidia GPU.
And they talk about that just being the interface to the AI chip later, but... it really feels like they saw 'hey, we can get all this RISC-V stuff essentially for free, and really take over the development of the spec, and that is easier than figuring out how to develop a general-purpose AI chip and a stack that competes with CUDA to go with it, so that is the easier place to start...'
I'm totally a non-expert though and the preceding is just what I've picked up from watching interviews with Jim (who I just find awesome to listen to).
In the interview he runs down the issues they encountered going down the pure AI accelerator path. It sounds like they've decided the opportunity wasn't there (i.e. too hard) so they've pivoted.
It makes more business sense to have more general purpose hardware that can be pivoted to other applications. Lots of AI ASIC vendors are going to go belly up in the coming years as their platforms fail to attract customers. Carving out a tiny niche with limited demand and no IP moat is very risky in the IC world.
Fast CPU performance is necessary for AI workloads too: you need a fast CPU combined with lots of vector or tensor processing. Lots of applications need both, and they have done both for a while.
The logic is simple, in line with the reduced power draw you get from a simpler instruction set.
Move into a space with rapid manufacturing for specialized chips, pair that with the concept inherent to Nvidia's DPU, and you have something very interesting.