The problem is that it's a dev board that will likely be slower than QEMU, and it's missing half of the RISC-V extensions that you want to test with (and it supports a maximum of 8 GB of RAM, so good luck compiling LLVM on it).


An N100 running QEMU will be slower than this board. A machine that runs QEMU faster than the board will cost a lot more.

For example, my 6-core Zen 2 laptop runs my primes benchmark (small code, long-running, the best possible case for qemu-user) 10% slower than my VisionFive 2. If you want to run a whole emulated OS then qemu-system is a lot slower than qemu-user.
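
For the curious, that kind of comparison looks roughly like this (the Debian cross-toolchain name and primes.c are stand-ins for whatever you actually test with):

    # user-mode emulation: only this process's instructions are translated
    riscv64-linux-gnu-gcc -O2 -static primes.c -o primes
    time qemu-riscv64 ./primes

    # qemu-system-riscv64 would instead boot a whole machine (MMU,
    # devices, kernel and all), which is where the big slowdown comes in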

My i9-13900 laptop runs the same benchmark in qemu-user 2.6x faster than the VisionFive 2, but it also costs 30x more -- or about 2x more than the Framework laptop with the JH7110 mainboard. Or 5x more than the original DC Roma laptop with the JH7110 now costs ($299).

If you want a RISC-V laptop then the Framework is a very expensive way to get one. It might be higher quality, and it will be upgradable to better specs later.


> will cost a lot more.

Presumably everyone buying this thing would already have (or need) a proper PC/Mac to use alongside it.


Maybe. Or they might only have a phone/tablet. Or an older but still totally useful machine such as that Zen 2 laptop I mentioned, which runs qemu slower than the VisionFive 2.

Also, a number of people have come unstuck by writing and testing things ONLY on qemu, and then had them fail on real hardware. For example, qemu was historically much more lenient about PMP settings than real hardware (if you didn't touch the PMP, qemu acted as if you didn't have one at all). Also, anything that needs fences on real RISC-V (or Arm) hardware is likely to work even when it's incorrect on a PC that is jitting multiple instructions per RISC-V instruction and is TSO anyway. Qemu being lenient about setting up the UART (e.g. it doesn't care about baud rate, start/stop bits, etc.) compared to real hardware is another example.


> good luck compiling LLVM on it

4 cores, thus 2 GB/core. Plenty.


I regularly get OOMs at 16 GB/core doing debug builds. LLVM is practically a memory stress test.

Only if you use FatLTO. I've compiled LLVM many times with less memory.

GCC, on the other hand, is another story. On a machine where I was able to compile LLVM (thanks, swap), I couldn't even extract the GCC source code.


I can't remember if LLVM has this problem specifically, but the linker step with debug info can really explode the memory use.

And unfortunately, that's even without build parallelism.
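
If it's the debug info in the link that hurts, LLVM's CMake has a switch for split DWARF, which keeps most of the debug info out of the files the linker has to process. Roughly (needs a toolchain that supports -gsplit-dwarf; the ../llvm path assumes the usual llvm-project checkout):

    cmake -G Ninja ../llvm \
      -DCMAKE_BUILD_TYPE=Debug \
      -DLLVM_USE_SPLIT_DWARF=ON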


make -j4 for LLVM requires about 36 GB (I know because I tried with 32 GB). 16 GB is almost enough to build a debug build of LLVM without build parallelism, but it will take a few hours on a chip that slow. With only 8 GB, you are going to be absolutely thrashing your swap (which at PCIe Gen 1 x2 speeds will be pretty darn torturous). I'm guessing this system will take ~8 hours to build LLVM.

Fortunately the LLVM build system lets you specify how many link steps can run in parallel, separately from the number of other jobs done in parallel.
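
Roughly like this; LLVM_PARALLEL_LINK_JOBS and LLVM_PARALLEL_COMPILE_JOBS are the CMake cache variables in question (../llvm assumes the usual llvm-project layout, and the link-job pool needs the Ninja generator):

    cmake -G Ninja ../llvm \
      -DCMAKE_BUILD_TYPE=Debug \
      -DLLVM_PARALLEL_COMPILE_JOBS=4 \
      -DLLVM_PARALLEL_LINK_JOBS=1

With only one link running at a time, peak memory is set by the single largest link rather than by several of them at once.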

Also, linkers other than GNU `ld` (or `gold`, which is faster but uses more RAM) use a lot less RAM: e.g. `mold`, or even better, LLVM's own `lld`.
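
For example (LLVM_USE_LINKER just passes -fuse-ld=<name> to the host compiler, so the named linker has to be installed):

    cmake -G Ninja ../llvm -DLLVM_USE_LINKER=lld
    # or, with mold installed:
    cmake -G Ninja ../llvm -DLLVM_USE_LINKER=mold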
