Glancing through the source repos, it looks like there's a decent amount of code written, so why not mention it, or describe what it does so far? It looks at least somewhat non-trivial, so surely there's at least a basic demonstration.
if you'd like to see what was done up until around Nov 2020, i've a video of the first boot of the litex BIOS https://www.youtube.com/watch?v=72QmWro9BSE for example.
however what would be really helpful would be offers of assistance. we've got funds available, so that's not a "we're looking exclusively for volunteers" thing
second answer: there appears not to be very much to show for it because we had to stop actual "development" work and do several months of planning on the Vectorisation system. however we couldn't start that unless we had a full and proper understanding of OpenPOWER, which was why we first had to do a basic OpenPOWER v3.0B core, and that's what's demo'd in the video.
hope that helps.
Really hoping this project succeeds. Maybe sometime I'll get a chance to work/play around with it a bit myself.
Wish them the best though, love to see as much libre silicon as possible.
i ask people, if they want to help, not to waste money on talking to lawyers: specifically, that if they really want to talk to a lawyer, they should ask that lawyer to donate their time "pro-bono", given the charitably-funded nature of the project (NLnet is a Charitable Foundation).
i'm kinda stunned that two people actually genuinely asked this, rather than saying "i'll speak to my Accountant".
think about it: imagine being contacted by the Linux Foundation, offered some donations to do some work, and you respond, "oh, errr i demand the right to pay 30% of that charitably-sourced money you are offering me to my Lawyer in fees to check if it's ok to receive that charitably-sourced money", i mean, wtf?? :)
regarding using an OoO engine: you may be interested to review this:
we're trying something new, basically. that's down to being funded by NLnet to do innovative research.
Regarding OoO, I don't see anything in those slides that's in favor of an OoO GPU. And the fundamental die-area tradeoffs between a GPU and an OoO core are different. OoO comes with the idea that you can spend 10x+ more die area on your dispatch logic than on your ALUs and register file, whereas a GPU is designed to amortize the dispatch logic as much as possible against a sea of ALUs and register files, since you have enough parallelism to just barrel-schedule through massive numbers of threads. Both designs at their best keep their ALUs fed every clock, but a GPU can simply dedicate much more die area to those ALUs, meaning markedly more FLOPs per mm^2.
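To put rough numbers on that tradeoff, here is a back-of-envelope sketch; every figure in it is hypothetical and chosen only to illustrate the area argument, not measured from any real core:

    # back-of-envelope FLOPs-per-mm^2 comparison; all numbers are
    # hypothetical, purely to illustrate the tradeoff described above
    die_mm2 = 100.0
    alu_mm2 = 0.1                # assumed area of one FP ALU
    flops_per_alu_per_clk = 2    # fused multiply-add

    # OoO core: dispatch/rename/reorder logic dwarfs the ALUs
    ooo_dispatch_frac = 0.9      # ~10x the area on dispatch vs ALUs + regfile
    ooo_alus = die_mm2 * (1 - ooo_dispatch_frac) / alu_mm2

    # GPU: dispatch amortized over a sea of ALUs via barrel-scheduled threads
    gpu_dispatch_frac = 0.2
    gpu_alus = die_mm2 * (1 - gpu_dispatch_frac) / alu_mm2

    print(f"OoO peak: {ooo_alus * flops_per_alu_per_clk:,.0f} FLOPs/clock")
    print(f"GPU peak: {gpu_alus * flops_per_alu_per_clk:,.0f} FLOPs/clock")
    # with both fully fed, the GPU ends up ~8x higher in FLOPs per mm^2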
no i didn't go into heavy details on the internal architecture, i did the study (with help from Mitch Alsup) on OoO for 5 months straight, back at the beginning of 2019. when he explained how easy it is to do multi-issue if you use Unary (bit-level) encoding on the Dependency Matrices, i went, "ok that's it, we're using that" :)
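to give a flavour of why that unary encoding is so appealing, here's a toy sketch in plain Python (illustrative only; the real design uses per-Function-Unit Dependency Matrices, this only shows the unary trick itself): registers become bit positions, so RAW/WAW hazard checks collapse to a single AND against a pending-writes vector.

    # toy sketch: unary (one-bit-per-register) dependency tracking.
    # each register is one bit position; hazard checks are single AND/OR ops.
    NUM_REGS = 32

    def regmask(*regs):
        """unary-encode a set of register numbers as a bitmask."""
        m = 0
        for r in regs:
            m |= 1 << r
        return m

    class Scoreboard:
        def __init__(self):
            self.pending_writes = 0  # bit set => register has a write in flight

        def can_issue(self, srcs, dest):
            # RAW hazard: a source register still has a write outstanding.
            # WAW hazard: the destination already has a write outstanding.
            return (self.pending_writes & (srcs | dest)) == 0

        def issue(self, dest):
            self.pending_writes |= dest

        def writeback(self, dest):
            self.pending_writes &= ~dest

    sb = Scoreboard()
    add = dict(srcs=regmask(1, 2), dest=regmask(3))   # add r3, r1, r2
    mul = dict(srcs=regmask(3, 4), dest=regmask(5))   # mul r5, r3, r4

    sb.issue(add["dest"])
    print(sb.can_issue(mul["srcs"], mul["dest"]))  # False: r3 write in flight
    sb.writeback(add["dest"])
    print(sb.can_issue(mul["srcs"], mul["dest"]))  # True: hazard cleared

multi-issue then falls out naturally: checking N instructions against the same vector is N parallel ANDs, not a priority-encoded search.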
the plan is, instead of big.LITTLE, to do "long.FAT" :)
by that i mean, we will have one core that is high multi-issue and high clock rate, but the rest of the cores are MASSIVE on SIMD engines but light on issue (single or dual).
all still SMP, but some cores just absolute processing Monsters.
now, we'll still need a separate Texture Cache (in addition to I-Cache and D-Cache) because of texture interpolation; we'll still need a pixel tile memory area, and so on
but we want to see how far we can get without shipping everything over to a completely separate processor. that is madness: the driver development alone has to contain a full-on RPC mechanism. no wonder latency on GPU Shader execution is so bad on commercial CPUs.
This would be (by orders of magnitude?) the most complex design attempted in the language, which pushes it into NIH territory.
also, you may be interested to know: Sorbonne University has access to Cadence. after converting to Verilog, they ran our 180nm design through DRC and it passed 100%.
nmigen has some deterministic behavioural guarantees and much more which make it a far better choice. it happens to output Verilog which we can treat as a (readable) machine-code (assembler-like) intermediary.
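to give an idea of what that looks like in practice, here's a minimal sketch (the Counter module is hypothetical, not from the Libre-SOC tree) of an nmigen design and the single call that emits the Verilog intermediary:

    from nmigen import Elaboratable, Module, Signal
    from nmigen.back import verilog

    class Counter(Elaboratable):
        """minimal example: a free-running counter."""
        def __init__(self, width=8):
            self.count = Signal(width)

        def elaborate(self, platform):
            m = Module()
            # deterministic synchronous domain: count increments every clock
            m.d.sync += self.count.eq(self.count + 1)
            return m

    top = Counter()
    # emit readable Verilog: the "machine-code" intermediary mentioned above
    print(verilog.convert(top, ports=[top.count]))

because the design is ordinary Python, you get real classes, parameterisation and unit tests for free, and the generated Verilog is only ever treated as output, never hand-edited.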
It's for sure a solvable problem, but 9 times out of 10 the FOSS developers trying to push a new language into this space end up not meeting their actual "release a usable design" goals, but instead spend all their time yak-shaving the language. It's a classic nerd snipe in the space.
right now we're in like "R&D mode", thankfully paid-for by NLnet.
i go into some detail about why we chose nmigen, and what can be done with it. it was not a "light decision": it was several months of comprehensive review.
here's the point in the FOSDEM talk i just did: https://youtu.be/7rCeNzrCB_g?t=1939
"- NUMA approach"
"- Raw brute-force performance pissed all over the competition at the time"
"Libre-SOC combines the best of historical processor designs,co-opting and innovating on them (pissing in the back yard of every incumbent CPU and GPU company in the process)."
This sounds like a lot of ego-driven chest-beating.
"Libre-SOC can do the same tricks that IBM POWER10 and
Apple M1 can. Intel (x86) literally cannot keep up."
What is the thinking behind this? Build it and they will come?
If we wanted to simulate a 2D world in hardware, we could do it very efficiently on our CPUs, and it would be infinitely scalable (assuming the world interacts mostly locally).
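As a toy sketch of why locality is what buys the scalability (a hypothetical Game-of-Life step, nothing to do with the project): each cell's next state depends only on its immediate neighbours, so the grid can be tiled across arbitrarily many processors with only thin border exchanges.

    # toy 2D world with purely local interaction (Conway's Life step).
    # each cell reads only its 8 neighbours, so the grid can be split
    # across arbitrarily many cores with only border-row exchanges.
    def step(grid):
        h, w = len(grid), len(grid[0])
        def neighbours(y, x):
            return sum(grid[(y + dy) % h][(x + dx) % w]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
        return [[1 if neighbours(y, x) == 3
                 or (grid[y][x] and neighbours(y, x) == 2) else 0
                 for x in range(w)] for y in range(h)]

    glider = [[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0]]
    print(step(glider))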
Here comes the interesting part: if we live in a simulation run by higher-dimensional beings, we have to transform our physics into a form that reveals its inherent dimensionality; if we find that number, the beings live in a world with one more dimension.
So for example, if our physics turns out to be 10-dimensional, the beings live in an 11-dimensional world and their CPUs are 10-dimensional. They designed our physics to be only 10-dimensional so that they can run it efficiently on their CPUs.
Also, assuming these extra dimensions are like what we'd think they'd be, we can simulate them - we sent probes all over the solar system using "2D" computing decades ago.