DeepComputing: Early Access Program for RISC-V Mainboard for Framework Laptop 13 (deepcomputing.io)
150 points by sohkamyung 23 days ago | 67 comments



A lot of comments are focusing on the value as a RISC-V development platform, which is obviously important, but I'm also hopeful that this presages more Framework mainboard options beyond just what Framework itself offers. There is already a pretty big community offering I/O modules beyond Framework's options, but the true benefit of a Framework system is not being locked in to only what one company thinks is worth the time and effort to develop. This is the first inkling that this benefit might actually come about.


We have gotten inbound interest from other Mainboard makers too, at least one of which is pretty far along on a design.


That's awesome to hear! I'm really excited to see the framework ecosystem grow.


Would you be okay if they compete with your mainboards? For example, if they produce a lower-priced AMD board.


There's not really a lot of room for anyone to make a board that isn't a curiosity or highly-specific developer platform though. Framework already have both x86 vendors covered for people who want Intel or AMD. The only other chip worth making a board for is Snapdragon X Elite. There's nothing else in the same performance class.


There's the possibility of making mainboards with other features though, built-in FPGAs, SDRs, or maybe lower-powered x86 chips for more battery life.

I guess that all falls under "curiosity", but I really do hope that the ecosystem for Framework-compatible parts blows up.


Lower-powered x86 has little chance anymore. Ten years from now, just about every portable system will be ARM-like.


Performance per watt is nearly identical for the Apple M4 and the Ryzen AI 9 HX 370, despite the Ryzen being manufactured on 4nm while the M4 is on 3nm: https://www.phoronix.com/review/apple-m4-intel-amd-linux/3

Sorry to burst your bubble.


The biggest thing I want to see from Framework is ARM (or better, RISC-V that achieves great low-power performance) with an enormous battery, and Linux or BSD with all the optimizations to improve battery life.

I bought a MacBook a while ago specifically because I can get it to last about 45-50 hours of non-stop usage on one charge, so getting a system tailored for even better performance and longer battery life (MacBooks could probably double or triple battery life if they bulked up and stopped trying to be so petite) would be incredible.

>100 hour battery life should be very achievable for developers, as limiting screen brightness and using only a terminal with a black background can increase battery life _enormously_.


Between Zoom (not the whole time, just having it in the background), Slack, and CrowdStrike's Falcon agent, I usually can't make more than 5 or 6 hours away from a wall outlet with my MacBook Pro M2 Max...

I hate corporate software.


Oh, wow. That's a pity. I mean, you're down to the battery time of any random x86 laptop.

I usually get barely a working day out of my corporate MacBook Pro M1 with Corpo Security Special Sauce, CLion and Teams. But I have to kill CLion when it gets too crazy (i.e. often). Do you have any insights from Activity Monitor on who is draining the battery?

My biggest offenders:

- Symantec Data Loss Prevention Agent (x86 emulation)
- $Corporation App Store (I never use it; don't know why it burns CPU time)

This corporate "security" software is the essence of everything that's wrong with a corporation.


If I check the "Last 12 hours app energy use" from my Activity Monitor it's usually Firefox, Zoom, Slack and PyCharm.

But that only shows "Apps", not processes. If I list all processes, the one that's almost always at the top is CrowdStrike Falcon. Especially when there's disk access, as it seems to intercept everything...
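
If you want per-process numbers rather than the "Apps" rollup, powermetrics can sample them from the terminal (a sketch; I believe these flags exist on recent macOS, but check `man powermetrics`):

    # Sample per-process energy impact once; this includes background daemons
    # that Activity Monitor's "Apps" view rolls up or hides.
    sudo powermetrics --samplers tasks --show-process-energy -n 1 | head -40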


I agree that would be a delight. I'd even take a color eink or transmissive / reflective LCD to lower the display power draw. But I would still like to be able to equip it with large amounts of storage and ram.


An Arm motherboard would be a great thing for one of those third parties to make. I don't think Framework has the resources to take it on right now. It's time for them to do an update to the AMD boards, and they do a new Intel one every year...


There are limits to how enormous the battery can be. Over 100Wh and you can't fly with it, so nobody is going to do that. That's less than double the capacity of the current Framework battery.


Your charts show the M4 having a more than 2x advantage in both MIPS per watt and MB per joule over the Ryzen.


Read through the article. Ryzen wins a few, and from a worse node, with off-package DRAM, and a much more capable GPU.


"off-package dram"

Yes, the real ARM vs. x86 discussion will come around Ryzen Max 2025Q1 with 128mb on-package dram (hope we get desktop board manufacturers to sell this).


What does 128mb of on-package DRAM help with? That seems way too small to suspend to RAM or anything.


"640kb should be enough for everyone"

Joke aside, typo, s/mb/gb/g - thanks for catching.


Yet. There will be competition for the Snapdragon series. At least if Arm doesn't shoot itself in the foot with lawsuits; if they do, the real action will shift to RISC-V. I'm sure the Nuvia developers are already working on a RISC-V chip just in case...


I hope it becomes a standard format for developer boards


Oh, come on.

You just know someone's gonna make an Amiga one.

And a whole bunch of aging German and Scandinavian hackers are gonna come out and try to convince people, no seriously, this is a great daily driver, you just have to go to Aminet and get the right RTG driver pack for the display panel and...


> that isn't a curiosity or highly-specific developer platform though

So what? It would be great to use this as a developer platform, because you have the whole chassis and peripherals of a laptop and all you have to make is the mainboard.


This is basically similar to what made the IBM PC successful. They opened up the bus architecture, and the market for add-in cards exploded.

(being IBM, they later tried to "upgrade" to the locked-down Micro Channel bus, but happily it didn't go anywhere)

Other things got standardized too (motherboards, power supplies, peripherals, etc.), and it has been going for ~40 years now.


>They opened up the bus architecture, the market for add-in cards exploded.

Price, size, and the overall state of tech back then didn't really allow having several PCs in one (save for some rare and pricey exceptions like PowerPC CPU extension boards). These days we could potentially have, inside a regular-size laptop, a backplane with a standard bus (something like the Sipeed cluster board, which takes up to 7 credit-card-sized SoCs [1]) into which we could plug various SoCs of the same or different architectures. Say one SoC is RISC-V, one x86, and several are SoCs with powerful NPUs: configure your laptop (cluster) for the mission at hand. Dare I say, from a common bin of parts in the office (or even from a public library; the SoCs are just tens of dollars nowadays, like, say, a game cartridge).

[1] One can imagine the cards being inserted at an angle instead of vertically: https://www.amazon.com/Sipeed-Lichee-Cluster-High-Performanc...


Took me a while to figure out that the actual pricing is $200 USD for a mainboard (you have to untick the value-added services).

Definitely not the best pricing, but also not completely bad considering you get a 64GB microSD and case alongside?

Looking forward to the next-gen mainboard they hint at in the "value-added package", as the JH7110 really is quite a weak chip (even by RISC-V standards)...


The price-to-functionality ratio is pretty steep. $200 would seem about right for a dev board that supported RVA23, but for a CPU without support for the extensions the ecosystem is going to require, it's hard to justify $200 for ~Pi 3 performance on an already outdated platform.


The RVA23 spec was only ratified a couple of weeks ago. It will be a couple of years before there is hardware supporting it.

A suitable RVA22+V chip (SpacemiT 8 core) has only been out on dedicated dev boards for a few months. That's probably going to be their 2025Q2 main board.

Their 2025Q4 main board will be the first high performance RVA22+V one. If the chip happens ... it's struck some political (sanctions) problems.

JH7110 is the right chip for a first board to get the wrinkles ironed out. No one has ever done a 3rd-party or non-x86 mainboard for Framework laptops before.


I agree that it's a good step; it just feels like it would have been a good step in 2022, or a good $50 step at the end of 2024. As it stands, it seems like too little, too late to be worth the price.


Note that this is a business-focused pre-release from DeepComputing, not the pricing for general developers at release of the Mainboard.


This JH7110 is from 2021. Some specs: https://www.cnx-software.com/2022/08/29/starfive-jh7110-risc...

1.5 GHz CPU core frequency and some old RISC-V cores, while we're still waiting for cores with decent single-core performance to compete with modern desktop processors.

Sorry, but for me this board is dead in the water, unless you can't use ARM/x86 for political reasons.


> This JH7110 is from 2021.

First retail customer deliveries of mass-production JH7110 chip and board were February 2023. First laptops with it were at the end of 2023.

The CPU cores used (U74) were announced in October 2018. The first high-priced dev boards using what were essentially test chips (HiFive Unmatched, $650; BeagleV Starlight, unknown price, 300 made and given away to developers) arrived in mid-2021.

It is important to distinguish the dates of availability of the RTL for a core, the first test chips of a complete SoC using it, and mass production, as there are usually several years between each stage.

You hear about all these dates in the RISC-V and Arm worlds, where each thing (core, SoC, board) is done by a different company, each with a vendor-customer relationship with the previous-stage company. You don't hear about them in the more vertically-integrated x86 and Apple worlds.


That's true, you don't hear about these dates in vertically-integrated businesses, but it's a bit of a misdirection. Vertically-integrated manufacturers definitely don't take 6 years from RTL completion to mass production of the end product.


That's because they are vertically-integrated and can overlap the different stages and have the board people give feedback to the SoC people, who can give feedback to the CPU core people.

With multiple vendor-customer relationships in the chain, there is not only likely to be no overlap between stages, but there may even be a gap of 6, 12, or 24 months between the RTL for the core being available and someone even deciding to make an SoC using those cores, and a similar gap between a chip being available and someone else deciding to buy it and design a motherboard around it.


Those delays are very understandable but they don't make the chips less old, or counteract the problems caused by being old when RISC-V is racing to catch up.


The people who put out RTL for a core go right on to making a better core. The time they take to design a new core is not affected by how long people making SoCs or boards or consumer products do or don't take to make their own products.

Are you aware that major chip company Microchip just announced a new family of chips, the PIC64GX -- a big deal for them -- with a dev board that has just become available in the last month or so, which uses RISC-V cores first used in the HiFive Unleashed in early 2018 (and announced in Oct 2017, when they already had working test chips)?

https://www.microchip.com/en-us/development-tool/curiosity-p...

Are you aware that people still announce new chips based on the Arm A53, a core announced in 2012?

Or that 70% of Arm's revenue comes from CPU cores announced between 1994 (ARM7TDMI) and 2009 (Cortex-M0)?

"Old" is not relevant to anything. Only design features are relevant.


> The time they take to design a new core is not affected by how long people making SoCs or boards or consumer products do or don't take to make their own products.

I don't think anyone implied otherwise. But the product in my hand is still lacking.

> Are you aware that major chip company Microchip

I don't see how this affects my argument in any way?

> Are you aware that people still announce new chips based on the Arm A53, a core announced in 2012?

Not for the main cores of laptops they don't.

> "Old" is not relevant to anything. Only design features are relevant.

There is sufficient context here to tie the two together. These cores are slow because of relative lack of development time. These cores lack vector units specifically because of their age, because they were designed before there was a spec (which you brought up).


It's a dev board. Reason to buy it: you need a desktop that has a RISC-V CPU inside.


If you want a desktop, the Pine64 Star64 is only $90 for the same spec. This is a laptop though.


Milk-V Mars 8 GB is $68.99 at Arace.


The problem is that it's a dev board that will likely be slower than QEMU, and it's missing half of the RISC-V extensions that you want to test with (and it supports a maximum of 8 GB of RAM, so good luck compiling LLVM on it).


An N100 running QEMU will be slower. A machine that is faster running QEMU will cost a lot more.

For example, my 6-core Zen 2 laptop runs my primes benchmark (small code, long-running, the best possible case for qemu-user) 10% slower than my VisionFive 2. If you want to run a whole emulated OS then qemu-system is a lot slower than qemu-user.
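
For anyone who wants to reproduce this kind of comparison, the qemu-user workflow is roughly as follows (a sketch; assumes the Debian/Ubuntu packages gcc-riscv64-linux-gnu and qemu-user, and some benchmark source primes.c):

    # Cross-compile the benchmark statically so qemu-user needs no sysroot.
    riscv64-linux-gnu-gcc -O2 -static primes.c -o primes-rv64

    # User-mode emulation: only syscalls are translated, the best case for QEMU.
    time qemu-riscv64 ./primes-rv64

    # Extensions the real hardware lacks can be turned on in the emulated CPU,
    # e.g. the vector extension:
    qemu-riscv64 -cpu rv64,v=on,vlen=256 ./primes-rv64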

My i9-13900 laptop runs the same benchmark in qemu-user 2.6x faster than the VisionFive 2, but it also costs 30x more -- or about 2x more than the Framework laptop with the JH7110 mainboard, or 5x more than what the original DC Roma laptop with the JH7110 now costs ($299).

If you want a RISC-V laptop then Framework is a very expensive way to get one. It might be higher quality, and it will be upgradable to better specs later.


> will cost a lot more.

Presumably everyone buying this thing would already have (or need) a proper PC/Mac to use alongside it.


Maybe. Or they might only have a phone/tablet. Or an older but still totally useful machine such as that Zen 2 laptop I mentioned, which runs qemu slower than the VisionFive 2.

Also, a number of people have come unstuck by writing and testing things ONLY on qemu, and then had them fail on real hardware. For example, qemu was historically much more lenient about PMP settings than real hardware (if you didn't touch the PMP then qemu acted as if you didn't have one at all). Also, anything that needs fences on real RISC-V (or Arm) hardware is likely to work even when it's incorrect on a PC that is JITting multiple host instructions per RISC-V instruction and is TSO anyway. Qemu being lenient about setting up the UART (e.g. it doesn't care about baud rate, start/stop bits, etc.) compared to real hardware is another example.


>good luck compiling LLVM on it

4 cores, thus 2GB/core. Plenty.


I regularly get OOMs at 16 GB/core doing debug builds. LLVM is practically a memory stress test.


Only if you use FatLTO. I've compiled LLVM many times with less memory.

GCC, on the other hand, is another story. On a machine where I was able to compile LLVM (thanks, swap), I couldn't even extract the GCC source code.


I can't remember if LLVM has this problem specifically, but the linker step with debug info can really explode the memory use.

And unfortunately, that's even without build parallelism.


make -j4 for LLVM requires about 36 GB (I know because I tried with 32 GB). 16 GB is almost enough to maybe do a debug build of LLVM without threading, but it will take a few hours on a chip that slow. With only 8 GB, you are going to be absolutely thrashing your swap (which at PCIe Gen 1 x2 speeds will be pretty darn torturous). I'm guessing this system would take ~8 hours to build LLVM.


Fortunately the LLVM build system lets you specify how many link steps can run in parallel, separately from the number of other parallel jobs.

Also, linkers other than GNU `ld` (or `gold`, which is faster but uses more RAM) need a lot less RAM, e.g. `mold`, or even better, LLVM's own `lld`.
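
Concretely, something like this (a sketch; LLVM_PARALLEL_LINK_JOBS requires the Ninja generator, and both options are documented LLVM CMake settings):

    # Configure a debug build that runs RAM-hungry link steps one at a time
    # and links with lld instead of GNU ld:
    cmake -G Ninja ../llvm \
        -DCMAKE_BUILD_TYPE=Debug \
        -DLLVM_USE_LINKER=lld \
        -DLLVM_PARALLEL_LINK_JOBS=1
    ninja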


They promise Linux support. What is the situation with Rust on RISC-V? Obviously, for a couple of years now you haven't been able to build a reasonably complete and modern distro without a Rust compiler, but the compiler wasn't available everywhere outside of x86 and arm64 at the same time (https://lwn.net/Articles/845535/). Has that been fully solved in the meantime?


My experience with our riscv64 build daemons in Debian is that everything works, but there are spurious segfaults from time to time.

Check the green build matrix on this page: https://qa.debian.org/developer.php?login=alexander.kjall%40...



Can anyone familiar with Debian explain what these drops in percentages are?

E.g. arm64 dropped very low for a while, and all arches dropped around 2024.68:

https://buildd.debian.org/stats/graph-quarter.png


Not that familiar, but this is sid, which is where Debian development happens.

If there's a huge drop, it's usually because a package that a lot of software depends on got updated or modified (e.g. built with different options), causing ABI changes and thus rebuilds of everything that depends on it.


The Rust compiler has supported RV64GC for many years already. I got my first RISC-V SBC in 2022 and have not found a single issue with the Rust compiler (some libraries have problems, but that's a different matter). In the end, the Rust compiler relies on LLVM, which Clang users also need in order to compile C and C++ programs for RISC-V.
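
Cross-compiling for RISC-V from another host is similarly painless (a sketch; riscv64gc-unknown-linux-gnu is a Tier 2 Rust target, and the linker line assumes the Debian/Ubuntu cross-gcc package is installed):

    # Add the RISC-V Linux target to the toolchain.
    rustup target add riscv64gc-unknown-linux-gnu

    # Point cargo at a cross-linker (from the gcc-riscv64-linux-gnu package).
    export CARGO_TARGET_RISCV64GC_UNKNOWN_LINUX_GNU_LINKER=riscv64-linux-gnu-gcc
    cargo build --release --target riscv64gc-unknown-linux-gnu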


In the embedded space, Rust works great for RISC-V on ESP chips.


How mature is the RISC-V ecosystem for dev machines? VS Code and JetBrains IDEs are probably not supported, but what about a setup with Fedora, Erlang, Elixir, Node, and maybe Neovim?


https://wiki.freepascal.org/Platform_list#Supported_targets_...

So with FPC/Lazarus you have a full dev system, if it works as the wiki suggests.


There isn't really any RISC-V hardware that is fast enough that you'd want to run those big IDEs yet.


Once you exclude proprietary (JetBrains) or not-really-open (VS Code) tools, it's a non-issue.


I really wish these CPUs had the Vector extension


So do I, and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the CPUs that are given us.

These CPU cores were announced three years before the Vector extension spec was ready.

Check the updates coming next year.


I am happy that there is already an RVA22+RVV 1.0 SpacemiT K1/M1, as in the Milk-V Jupiter and Banana Pi BPI-F3.

Relative to having nothing.

In 2025, more options will definitely pop up.



Well, I need a mini server, like an RPi 3, and of course I will go for RISC-V (without Arm, and if possible without HDMI/hardware MPEG codecs), but they are very hard to buy without big-tech browsers. I may write my own mini kernel... for my mini servers.



