I would love to see a clear roadmap from the EU (I haven't had any success searching for one).
My take on this is
1. this is less about competitiveness at the cutting edge and more about security and economic on-shoring
2. building chips on-shore at the 40-20nm level massively reduces risk, increases the likelihood smaller states can build locally and solves for most chip needs
3. chips we need are rarely the cutting edge AI stuff. The vast volume of chips will go into controllers for screens, USB connectors and so on. Building plug-and-play alternatives will give local manufacturers choices, and incentives will help.
4. the big win is security. Does the CEO of a sensitive company, the head of the security services and the general in charge of procurement use keyboards, CPUs, motherboards and monitors made from open source chips manufactured in a trusted nation? What is the BOM for the Challenger tank - how many chips in there, made by whom, and ...
the process is long and arduous and the risks are huge.
But we make tanks from steel and other materials made in "favoured nations" - surely the same applies to silicon?
My understanding is that a 40nm fab is only economically viable if it's spent the first several years of its life producing high margin chips.
In other words; the life cycle of a 40nm fab is:
1997: start building fab
2000: fab goes online and starts producing CPUs
2006: fab upgraded
2012: fab switches from CPUs to video and memory controller chip sets
2018: fab switches to USB controllers and embedded chips
2019: fab offline for 2 months because an antiquated but critical part broke, and it is only brought back online because another similarly old fab went offline and sold off its parts
2020: fab shut off because of covid
2021: fab found to be a write-off because too many things broke while fab was offline.
So if you skip straight past the profitable phase, you end up spending billions of dollars to make a fab that makes $0.30 parts, and it'll never be profitable unless those parts are $10 each, which in turn makes the product they're in unprofitable.
You are correct. Building new fabs today only for fabbing much older nodes will not be profitable. You have to target 22nm and below, otherwise you can't afford to jump into the semiconductor fab ring.
TSMC is building a lot of new 28nm production with plans to shut down all their older nodes and move everyone over in the next few years.
GlobalFoundries (formerly AMD's fabs) created a brand-new 22nm planar process specifically for older chips, as an upgrade from other companies' 28nm processes.
Profits seem possible if you approach it the right way.
We're talking about different things here. I was talking about building new fabs for 28nm nodes and you're talking about TSMC upgrading existing fabs from older nodes to 28nm production.
Of course upgrading an existing older "sunk-cost" fab to 28nm production will be profitable, but not building a new one from scratch just for that same older node.
But now this makes the subsidies angle make more sense: You subsidize initial construction and then the domestic plant remains online indefinitely because the construction is a sunk cost and the incremental cost of upgrades over time is sustainable.
The math works out a lot better when you’re upgrading pre-EUV fabs or expanding an existing facility. A lot of the gear and setup is mostly the same (wafer cleaning, HVAC and isolation, etc.), and the local challenges of setup and labor have already been figured out.
"But I've got a product that's certified with this part that's running on a 40nm process that has these specifications that are deeply tied to features of that 40nm process; things like voltage ranges and temperature tolerances! If you force me to switch to a comparable but not identical part at 22nm I'll have to re-certify my widget with 18 different regulatory agencies!"
If those are your needs, you order all the parts you need over your product’s lifetime up-front, or you get (= pay for) a contract with the manufacturer that makes them promise to sell you the parts for X years (they probably wouldn’t keep producing the old parts, but would stockpile enough of them to be able to deliver working ones years later).
There are companies (I've used Rochester Electronics) that both stockpile and manufacture legacy chips specifically for the long tail support situations.
You will always have pure analog electronics and other bespoke things that basically don't benefit from anything finer than these nodes. Even for digital chips, it makes no sense to use leading edge nodes for very simple logic where a lot of the area is just contact pads.
Relevant to mention MEMS (micro-electromechanical systems) in this context, which use much older nm tech. Be it digital micro mirror devices¹ or gyros². Or photo/laser diodes.
Given the physical limitations, as well as the problems we have with code base security it might be time to aim for cheaper production of something in the region of 180nm instead.
Looking at how old much of the standard weaponry used today is (the TOW is 50 years old with an actual physical gyroscope, the Javelin still 25 years³), the demand from the military alone should cover the initial cost. Especially if you look at the ludicrous prices western countries paid for even dumb artillery shells.
I have to wonder if the ability to profit depends entirely on the established cartel of semiconductor manufacturers. They determine the current prices of chips in the marketplace.
If entering that marketplace requires competing with them, then I am not sure anyone that is not already in the market can ever win. The margins are too low and the startup costs are too high.
Government intervention seems to be the only possible solution, and that option hardly sounds viable when considering that cartel’s collective lobbying power.
I don't think this is a "cartel of semiconductor manufacturers" so much as it's been a "shambolic cluster of organizations running crappy old fabs into the ground producing cheap chips that were subsidized by a prior decade's worth of very expensive products."
I can afford to sell gazillions of chips at $0.08 per chip if I'm running a fab I didn't pay to build. I'm only (barely) paying for the inputs. When Stan, the last guy who understands how to run the widget verifier, or Elaine, the last lady who understands how to run the polishing machine, retires, I'll have to close up shop.
Those $0.08 per chip devices have been absurdly subsidized in that a replacement infrastructure to make them would require that they cost $10 per device, and the ecosystem of things built on $0.08 chips isn't viable in a $10 per chip world.
In order to have a fab make $0.03 per unit devices, you first have to have the fab spend 10 years making $300 per unit devices, regardless of the underlying node size of those $300 per unit devices.
Likely you couldn't even go back and make a fab that makes large volumes of 60nm-90nm node sizes at all, for any amount of money, because the equipment to do this (new) hasn't been made in 2 decades and no company is willing to invest the money to make new crappy old equipment.
It's not a nefarious oligopoly as much as a synchronized "run the asset to failure" lifecycle of the infrastructure.
How much does it cost to make a 300 year old tree?
>Likely you couldn't even go back and make a fab that makes large volumes of 60nm-90nm node sizes at all, for any amount of money, because the equipment to do this (new) hasn't been made in 2 decades and no company is willing to invest the money to make new crappy old equipment.
I believe your argument assumes that there is a fixed cost to produce even 180nm or 350nm ICs that hasn't changed since the first one was produced.
We still need 300 years for a 300 year old tree, but 25 year old technology might now be relatively easy to build if we start from scratch.
Yes, my argument is that producing even chunky nodes at industrial scale requires enormous capital expenditures and may be impossible without rebuilding large chunks of an antiquated and abandoned supply chain.
Even if it is 10% of the cost of making each of the individual components involved in making a relatively simple 90nm chip, you're still looking at vast costs.
If you're talking about making 30 chips in a university fab, sure, I'll concede that it is "possible" but if you're talking about propping up an industry built on products that require a herd of standardized "$0.30" parts made on legacy 90nm fabs, that ship has sailed.
Update your BOM and recertify or raise your costs by an order of magnitude.
First off, you are definitely making a very solid point: the costs of getting mass production right are a killer once the institutional knowledge is gone. For example, it's very visible in the field of battery technologies, if I am not mistaken. Going from lead to lithium was a gigantic task and the inertia going forward hasn't reduced enough at this point.
But realistically, isn't this a matter of going back far enough to lower the cost far enough? 10% is a good start, but to stick to the topic: physical gyroscopes from decades ago are now replaced with MEMS ICs, where the reduction in cost is orders of magnitude more than just down to 10%. At a certain point the reduced cost makes it viable. The question is just: has it been long enough?
While we won't get 90nm cheap enough, the question is what can we do at a hobby level (vs academia)? Because going from there (negligible cost and technological requirements) to mass production will at some point be cheaper than the cost of setting up reproducible tooling for older high-tech systems.
I am likely still off with 180nm, but there should be a level at which this makes economic sense. A level that gets cheaper to reach with technological progress / time.
The problem I see with this argument is that there are plenty of fabs making trailing-edge devices, some of which aren't even that old. It even seems to be part of the established path for countries and locations more generally that seek to bootstrap a semiconductor industry of their own. They get started with the simplest and coarsest nodes, then go finer step by step. Even TSMC got their start that way. So it seems like a pretty robust industry to me, I'm not seeing the argument for a crisis.
Personally I can't follow this line of reasoning. In the end this is an economic argument, as they still buy machines from the same manufacturers. At that point it's a matter of being able to deliver and create a market for ICs with the given machines, which is often achieved through political will and subsidies.
My initial argument is that while you can't compete with ASML products in 2023, you will be able to economically compete with some of their older products once you go back far enough.
> How much does it cost to make a 300 year old tree?
Aside from your main point, I found this an interesting thought exercise, thinking about the cost of air, sunlight, soil, water, and then 300 years of security.
I imagine if you're going to grow one 300 year old tree, then your best bet is obscurity. Find a stable very-rural area that's not prone to bushfires, plant one tree and make sure it's doing well for a few years, come back 300 years later, you're done.
If you're not going the obscurity path then you'd really want to scale it up - there's not much difference between security for one tree and security for 100 trees.
The capital expense on a new fab is crazy. There may be a cartel factor but that usually would work to the advantage of the manufacturers, so that doesn't seem to be the case here.
There's no real cartel for older nodes. It's not even really possible considering how many fabs exist and how many players are operating those older fabs.
But you can only really make those profitably for a few industries (military, medical, seismic come to mind). The EU does have the chip fabs for those industries, of course...
There might be an argument then that it would be worth it for the state to take the hit. If shit hits the fan and you have zero semi-manufacturing, then you are going to be pretty screwed.
> If shit hits the fan and you have zero semi-manufacturing, then you are going to be pretty screwed.
I don't really understand this claim at all. Chips are not exactly fungible; unless you force your local companies to use your "state sponsored chips" in their products, just being able to produce "chips" wouldn't be that useful. What are you going to do with them?
So the cost of building a fab hasn't come down in the last decades, huh?
Genuinely asking, is there some^W^W^W what is the "uncompressible" cost in fab-fabbing? I'd totally guess that staff and the building itself are not it?
What confuses me is that there seems to be a bathtub curve on the fab market.
We've got the race to 7-5-3-2nm for high-end parts, and then a couple generations back on support chips and lower-performance cheap parts.
Then you've got the dead zone. Is there a meaningful active use case for 1-micron processes anymore?
But then you get to perennial embedded-market stuff (Z80, 65C02, 80186) originally built on 40-year-old processes. And going even further, you have stuff that's a handful of transistors on a die that was probably drawn with markers for the original lithography; I'm not sure we'd even use numbers to describe that process. How does that stuff get made anymore? It can't all be draining down old stock.
I can't imagine it's cost effective to use modern-fab capacity to manufacture Z80s, 555s, or 74LS04s at their original die sizes. And if you shrunk them to put a million on a wafer, even assuming that's feasible, you'd change their performance characteristics in ways that might break long-established specs and contracts.
Such as? I can't really think of any benefit besides providing jobs and funding for contractors (so kickbacks etc.)
Then again it's not particularly surprising; the EU is well known for wasting massive amounts of money on all sorts of nonsense while ignoring things that actually matter.
No, but I trust you. However the US tech industry is doing just fine without too much government funding and intervention so it's not such a huge issue (in this specific area of course, not in general)
Sounds like there is a need for investment into innovation beyond just building the next-generation fab for $2^x billion. Bringing the cost of a new less-advanced fab down from $2 billion to $100 million, and then building 20 of them, could also be profitable (though less exciting). There is a national economy that's actually been growing quite well for a few decades now by applying that general idea to other industries.
But if you were a country or an alliance that wanted to be 1000% sure you always had access to a component (drone parts) you might be willing to pony up billions to make sure you could not be blockaded or embargoed. I don't know if that makes sense but given what is going on in Ukraine and the Mid East, people have to be thinking about that.
The example is hypothetical, but complex machines can be complex to keep running, and often suffer catastrophically when shut down.
If the fab was barely profitable before shutting down, it doesn't take much to total it. Fabs are full of machines that cost tens of millions of dollars when they were new, and there are simply no spare parts or vendor support for them now, and you can't just swap in a modern replacement. Fabs are full of extremely sensitive environments (no dust here, acid that will kill you if you touch it there, constant temperatures, no humidity, etc). If any of that is compromised, it's now just a toxic waste dump.
Again, I have no specific knowledge in this domain, but I imagine most of the time the owner's happy enough just to walk away from the headache.
There's also the brain drain aspect. All the process engineers and techs that understood all the various "recipes", quirks, etc, of the various machines moved on to other work.
A new crew will eventually work it out, but there's a lot of trial and error getting to the right bake time/temps, spin rpm, etc, etc. Yield and rework suffers while they do that.
Not an expert, but there are additional start up costs that need to be spent to “start it up.” With any significant downtime, those could eat up any possible profit unless it’s a newest technology fab.
> What is the BOM for the challenger tank - how many chips in there that are made by whom
In today's world, it would seem more sensible to just stockpile enough of all the components for 5-7 years of tank production, knowing that if your enemy tries any evil tricks then you have half a decade to figure out how to redesign or make the components yourself.
Keep a close eye on anything that looks like an antenna and it isn't so bad having the enemy backdooring your chips either.
This has been my take as well. There is a lot of disruption in a company when a key part, like the FPGA that serves as a communications nexus in the product goes EOL and everyone scrambles for a year trying to engineer in a replacement.
Buy enough parts for expected product life, make good use of the time you didn't waste on scrambling, and when your product is EOL sell any left-over parts on the secondhand markets.
> about security and economic on-shoring
> increases the likelihood smaller states can build locally and solves for most chip needs
I'm not sure what that means. What specific chip needs would that solve and what benefits would this provide? If those chips are not competitive, nobody would buy them, so what would governments do with them? Stockpile them for the future just in case?
The problem is that, unlike grain or oil, chips are not exactly fungible. If your military production or other vital industries lose access to their current suppliers, they wouldn't be able to use your slow, outdated and overpriced chips anyway (and forcing them to do so under normal circumstances would make your products less competitive).
> BOM for the challenger tank
How many other components does the Challenger tank contain (IIRC it's not really produced anymore anyway) which are not manufactured in the UK? In any case, stockpiling necessary chips etc. just in case the UK won't be able to acquire anything from the US/Germany/etc. seems like a more practical approach than trying to develop everything inside the country.
I agree that often the less cutting edge chips are important but doesn’t the EU already have that handled with ST Microelectronics, NXP, Infineon? What’s lacking is very high end CPU, GPU, high end memory, high end FPGA.
AFAIK only Global Foundries Dresden goes down to 22nm and 12nm, and I think that's by far the most cutting edge fab currently in EU, making the Ryzen IO dies and other such things.
But even TSMC's future Dresden fab starting construction next year(hopefully) will start making mostly automotive chips for NXP, Bosch and Infineon chips at 28nm and 22nm all the way in 2027(!), with plans to go to 16nm and 12nm in the further future.
Your view on EU cutting edge semi fabrication seems very optimistic.
Of course they weren't gonna export their crown jewels outside of Taiwan, the same way how the west didn't export their crown jewels to Asia when they did the technology transfers for semiconductor manufacturing in the '70s, making sure to keep their Asian partners at least a node behind.
Everything gets out in the end. My Italian hometown had a "golden age" of silk manufacturing for a while, thanks to bugs smuggled out of China. It lasted for a couple of decades and then they were again smuggled out to other Italian towns. And then of course you have the nuclear shenanigans.
If European countries wanted the tech bad enough, they would find ways to get it. The problem is not the know-how but the massive investments needed to productize it.
>The problem is not the know-how but the massive investments needed to productize it.
Are you telling me the EU, the richest block in the world, has less money to spend on fabs than TSMC, as if the EU is scraping for change behind the couch cushions?
If only you knew how much money the EU wastes through various useless and vanity projects that accomplish nothing except getting certain well connected people rich, we could have built 3x TSMCs.
But unlike Taiwan, we're lacking in visionary well educated tech leaders, and drowning in clueless politicians and established gentrified industry players who lobby the funds go to their projects instead.
> Are you telling me the EU, the richest block in the world, has less money to spend on fabs than TSMC
I didn't say we don't have the money, but that it's a problem to commit the money. It's basically the norm that EU countries unanimously agree that "something should be done" on a certain issue, but then disagree on how much it should cost and where the money should come from. This gets more and more complicated the bigger the cost is (and this is an expensive idea) and the farther we are from the regular 7-year-budget process (it was last agreed in 2020, so jockeying for big items will probably resume in 2025-26).
I don't disagree on the overall lack of vision in European political classes (hardly a fault of the EU, it's common to basically all countries and all levels of government), but even a visionary leader would have to work hard to get agreement on such a big project.
> Are you telling me the EU, the richest block in the world, has less money to spend on fabs than TSMC
That could very well turn out to be the case in practice, not for lack of money, but inability to provide the promised subsidies according to Financial Times:
Having an industry dependent on generous subsidies from states is a race to the bottom. TSMC will just pit you against other countries on the basis of "which one of you is gonna give us more of your tax-payers' money, and we'll build our fab there".
To me it just seems like relying on government funding to drive innovation in sectors where private companies have incentives to compete is extremely foolish.
A lot of EU semi research goes on at IMEC in Belgium, but the EU still lacks the actual means to put any of it into production on its own soil. EU fabs have given up going beyond 12nm as it was deemed too capital intensive.
Is there some reason why you wouldn't be able to run a purely open source software stack on it, if you wanted? Does Microsoft Windows even run on RISC-V?
It's a cool project, but I do wish these open source processor initiatives targeted more realistic design points.
In particular, there's often a desire to push an out-of-order design into the micro-architecture where the resulting performance just doesn't justify it. Here they're achieving a CoreMark/MHz of 2.44 (from the paper here: https://upcommons.upc.edu/bitstream/handle/2117/384912/sarga...). This is very low performance (on a par with the Arm M0+).
Now CoreMark certainly isn't the be-all and end-all of benchmarks. In particular it has very little relevance to high performance compute or application cores in general. However, it's a useful performance smoke test. It is easy to perform well on: getting close to 1.0 IPC is achievable for a single-issue design such as Sargantana, and CoreMark doesn't really stress the memory system, so a major source of stalls that you would otherwise need to hide latency for just isn't there. So if you're not hitting that, you've definitely got work to do on the microarchitecture. They may well have been better off trying to build something simpler and putting more design time into improving the performance of the basic microarchitecture.
The other crucial aspect that's often overlooked is verification. This is a major part of producing a new production quality CPU design and it doesn't appear to be discussed in the paper at all. Maybe once they've released the RTL they'll also release the testbench so you can see what they have done.
Any of these efforts not performing as well as BOOM may be suffering from "not invented here". It's already there and getting good IPC. Why not start from that?
Though on the CoreMark benchmark they haven't published the IPC achieved. You get a large swing in results depending upon the compiler used and switches (For RV32 at least I've found GCC out-performs LLVM comfortably).
They do have an IPC number for Dhrystone (another tiny benchmark that tells you little about real-world performance but you should be able to perform well on), that looks to be 0.7.
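For anyone wanting to relate those two numbers, here's a minimal sketch (my own illustration, not from the paper; all figures are hypothetical) of how a raw CoreMark score and the cycle/instret counters turn into CoreMark/MHz and IPC:

    /* Sketch only: hypothetical numbers showing how CoreMark/MHz and IPC are derived. */
    #include <stdio.h>

    int main(void) {
        double coremark_score = 2928.0;  /* hypothetical CoreMark iterations/second */
        double clock_mhz      = 1200.0;  /* hypothetical core clock in MHz */
        /* CoreMark/MHz normalises the score by clock so cores at different
           frequencies can be compared: 2928 / 1200 = 2.44. */
        printf("CoreMark/MHz = %.2f\n", coremark_score / clock_mhz);

        /* IPC is retired instructions divided by elapsed cycles, e.g. read from
           the RISC-V cycle and instret counters around the benchmark region. */
        unsigned long long instret = 700000000ULL;   /* hypothetical */
        unsigned long long cycles  = 1000000000ULL;  /* hypothetical */
        printf("IPC          = %.2f\n", (double)instret / (double)cycles);  /* ~0.70 */
        return 0;
    }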
I believe we might be at the point where supply chain security (and code base security) might warrant the question of why you can't implement something on an M0+.
If you really need higher speeds for reaction time, use an ASIC or FPGA. We already do this with USB3 or Ethernet controllers.
For those of you who don't speak Catalan, "sargantana" is a common little local lizard (Podarcis hispanicus, "Iberian wall lizard"). Of course the chip family (Lagarto) just means "lizard" in Castilian.
They are very similar. The Iberian ones are smaller, with broader heads, and are sometimes more colorful. I'm pretty sure that a Catalan would call the Italian ones 'sargantana'.
Warning, offtopic but funny: FWIW "lagarta" in Spanish slang is a girl with a lot of ambition, looking to get things from men by taking advantage of them. Not a "worker" but a dangerous person. Lol
There is a bit of a controversy around this. And I don't say you are wrong. It's just that I personally consider that "Castilian" should not be used and no longer exists. Here's why I think so:
Castilian originated as one of several Romance dialects in the Iberian Peninsula. It developed in the Kingdom of Castile during the Middle Ages, distinct from other regional languages like Catalan or Galician. With the unification of Spain, Castilian gained prominence, eventually evolving into modern Spanish. This was not merely a linguistic shift but also a result of political and cultural dynamics. The language we now call Spanish has absorbed influences from Arabic, indigenous languages of the Americas, and others, diverging significantly from its medieval Castilian origins. For this reason, Castilian has now disappeared; you just need to read how Castilian was written to see it has nothing to do with modern Spanish.
Today, Spanish is spoken by over 500 million people worldwide. In contrast, the Castilian region of Spain has a much smaller population (~3M). Referring to the language as Spanish acknowledges its extensive global presence and its modern version. Just as we refer to the language originating in Tuscany as Italian, not Tuscanian, calling the language from Castile 'Spanish' aligns with common linguistic naming conventions. Languages often take their names from the nations or cultural entities they are associated with, not their specific regions of origin.
Modern linguistic institutions, like the Real Academia Española, regard 'Castilian' and 'Spanish' as synonyms but recommend 'Spanish' for its inclusive and global character.
This is a nice and thoughtful post and I agree mostly. I'd like to add that my use of the word "Castilian" reflects my experience of usage of the term here in Barcelona (when speaking "Spanish", Catalan, and English). It's not a hard rule of course, but people are especially likely to refer to Castille over Hispania when distinguishing from other languages spoken historically within the country.
The term also usefully refers to the prestige dialect of Spanish, as might be spoken in Madrid. This is useful to distinguish from e.g. the "al-andalus" (Andalusian) spoken in the south which is more treated as a dialect than a separate language (though the distinction is of course fuzzy).
(On the other hand Barcelona in particular has a significant population of sudamericanos who will usually say "español", certainly that term is well used and understood.)
> Castilian (castellano), that is, Spanish, is the native language of the Castilians. Its origin is traditionally ascribed to an area south of the Cordillera Cantábrica, including the upper Ebro valley, in northern Spain, around the 8th and 9th centuries; however the first written standard was developed in the 13th century in the southern city of Toledo. It is descended from the Vulgar Latin of the Roman Empire, with Arabic influences, and perhaps Basque as well. During the Reconquista in the Middle Ages, it was brought to the south of Spain where it replaced the languages that were spoken in the former Moorish controlled zones, such as the local form of related Latin dialects now referred to as Mozarabic, and the Arabic that had been introduced by the Muslims. In this process Castilian absorbed many traits from these languages, some of which continue to be used today. Outside of Spain and a few Latin American countries, Castilian is now usually referred to as Spanish.
> In Spain and in some other parts of the Spanish-speaking world, Spanish is called not only español but also castellano (Castilian), the language from the Kingdom of Castile, contrasting it with other languages spoken in Spain such as Galician, Basque, Asturian, Catalan, Aragonese and Occitan.
> The Spanish Constitution of 1978 uses the term castellano to define the official language of the whole of Spain, in contrast to las demás lenguas españolas (lit. "the other Spanish languages").
It's the same language. I'm a Spaniard, so I know it well. Name it the way you'd like, it can be called Spanish, Español or Castellano everywhere from Mexico to Patagonia, and from The Canaries up to the Pyrenees.
"name it the way you'd like, it can be called Spanish" is a very different proposition to "[you should say] Spanish, Castilian does not exist [and you are wrong to use that name]", which was the angle of the poster who kicked all this off.
> from Mexico to Patagonia, and from The Canaries up to the Pyrenees.
Sounds a bit imperialistic?
Notwithstanding the tens of millions of native speakers of autochthonous non-Spanish languages in these territories: Mapuche (260K), Quechua (7.2M), Aymara (1.7M), Guaraní (6.1M), Wayuu (400K), Mayan (6M), Miskito (150K), Garifuna (120K), Nahuatl (1.7M), Mixtec (530K), Catalan (4.1M), Basque (750K), Galician (2.4M). Spanish is quickly eroding all of these, but they still exist! (And this only counts native speakers. The number of people who are fluent in Guarani or Catalan is certainly more than double that.)
> Not imperialistic. Would you say the same of the English language too?
Yes, of course? When I think of imperialism the first thing that comes to my mind is precisely "the bri'ish empi'ah and its commonwealth"! If English is currently the world's default language, that is just because of the triumph of English/American imperialism.
So an in-order core that is slightly faster than rocketchip in their benchmarks. That doesn't seem all that exciting, except for the vector extension, although they only support a small subset of it. That sounds similar to Spatz [0] and given their numbers is slightly faster.
The previous DVINO was a 5-stage in-order, this Sargantana core is a 7-stage out-of-order write-back with register renaming and a non-blocking memory pipeline.
So it is not a full in-order or a full out-of-order design.
That sounds like the perfect high end MCU core. Doesn’t say what the target use case is, but if it’s like other RISC-V announcements, they’re probably talking about general purpose CPUs, in which case those specs are pretty disappointing. It’s a shame that RISC-V has made so little impact in embedded electronics.
It has made lots of impact, just not publicly (in embedded devices like hard drives, etc...). Also there is not as much spent on advertising in comparison to "the brand new Apple chip", which is backed by a company with deep pockets (and which evidently is also more powerful).
What is the purpose of including the RISC-V Compressed 16-bit extension set in what is supposed to be a HPC chip? Most embedded/IoT RISC-V implementations include that for obvious reasons but why here?
If you don't have the C extension then you can't run off the shelf Linux distros such as Fedora, Debian, Ubuntu, Arch and are limited to what you compile yourself e.g. Buildroot / Yocto.
However the actual academic paper says it's RV64G, no C.
Thanks for the correction Bruce. I was in a rush (never post when you are in a rush, or drunk, or angry) and I was so used to seeing RV64GC that I didn't notice the absence of the 'C'.
The same reasons motivating C still apply in HPC: higher code density means fewer bits wasted representing redundant information, better cache utilization, minimization of memory fetch bandwidth, etc.
Basically, every metric derived from code size is happier when you have 20-30% fewer bits representing it.
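A rough way to see the density effect yourself, sketched here under the assumption of a standard RISC-V GCC cross toolchain (the tool name and exact sizes will vary with your setup), is to build the same translation unit with and without the C extension and compare the text sizes:

    /* density_probe.c -- a trivial translation unit used only to compare code
       size with and without the compressed (C) extension. Assuming a standard
       RISC-V GCC cross toolchain (adjust the tool name for your distro):

           riscv64-linux-gnu-gcc -O2 -march=rv64gc -mabi=lp64d -c density_probe.c -o with_c.o
           riscv64-linux-gnu-gcc -O2 -march=rv64g  -mabi=lp64d -c density_probe.c -o without_c.o
           size with_c.o without_c.o    # compare the "text" column

       On non-trivial code the rv64gc build is typically noticeably smaller,
       which is the cache/fetch-bandwidth argument made above. */
    long checksum(const long *data, long n) {
        long acc = 0;
        for (long i = 0; i < n; i++)
            acc += data[i] * 31 + (data[i] >> 3);
        return acc;
    }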
You don't do your first experimental test chips on 4nm. That's where you get to when you have raised hundreds of millions after you've gone through several iterations to prove to investors you know what you're doing.
It's unrealistic to expect these chips to compete with modern manufacturing standards. Still, it's very impressive the progress RISC-V has made in the last few years. It's actually a viable option for many projects now.
I guess this chip is not for high end gaming machines or servers, but rather cars, industrial machine control, smart fridges, that kind of stuff. During COVID, production of many appliances ground to a halt because of various chip shortages. Now what if, for some reason, Asian chips became unavailable in Europe (wars, natural catastrophes, ...)? Cheap and easy to build is far more important than high end performance here.
> The BSC, Europe's leading developer of open source computing technologies
> The fact that the [..] architecture [..] of these new processors is open source, and therefore non-proprietary and accessible to all, reduces technological dependence on large multinational corporations
I hadn't heard of either BSC or open source computing before. I'm curious though, are there a lot of people out there who are not tied to large corporations and who have the knowledge and the means to produce computer hardware? Are there hobbyists out there producing their own custom chips and graphics cards?
The BSC has been featured a bunch of times around here due to their MareNostrum supercomputer. A month ago someone posted a virtual visit to their MareNostrum 4 location, which is kinda surprising/interesting because it is located inside an old chapel:
If they ever get AGI going it will have come full circle. You can go there to pray to your very visible god. Prompt engineering will be the new praying, you read it on HN first...
I don't know about hobbyists, but there are lesser-known companies doing open source hardware for sure; [1] here is an example of a cool stackable parallel computing project.
I participated in the campaign and received mine, but I'm not sure how they are doing today; it was a while ago.
Edit: Andreas Olofsson the original founder seems to be still active in the field [2]
There are also lots of interesting projects, like Aina, a project in partnership with Generalitat de Catalunya (like the council? prefecture?) to foster the Catalan language with models and tools using HPC resources: https://projecteaina.cat/
> The Barcelona Supercomputing Center [...] presented on Wednesday the new Sargantana chip, the third generation of open source processors designed entirely at the BSC.
> Researchers from other universities and research centres such as the Centro de Investigación en Computación del Instituto Politécnico Nacional de México (CIC-IPN) [...] have participated in the development of Sargantana.
So this was designed entirely in Spain but it is also joint work with a university in Mexico ;-) Nice project though; I've visited BSC and they do a lot of cool work there.
What about this chip is open source? As far as I can tell, nothing. It frustrates me to no end that closed, secret efforts inherit the “open source” branding just because the specification they implement is participatory and royalty free.
Does anyone know a decent RISC-V developer kit that one can buy in the Bay Area today? Or rent somewhere in the cloud? I want to start porting our C libraries to RISC-V.
Right now, I'd recommend the canmv kendryte k230 which has a C908 rvv 1.0 capable core.
If you can wait a bit, mid/end 2024, I'd go for the VisionFive 3 (or whatever it will be called), as it will have RVA22+V (iirc), or for the SG2380, which has SiFive P570s and X280s, both RVA22+V.
If you don't care about vector, then currently anything based on jh7110 should be good.
But if you have the time to deal with very slow execution and the potential need to report hardware bugs, I'd consider benchmarking on RTL simulation of open source cores (BOOM, tenstorrent-bobcat, XiangShan, ...).
> Emulation on a modern X86 CPU will outperform any commercial available RISC-V processor at the moment
That's not true.
qemu-user is a little faster than the single-issue HiFive Unleashed from 2018, but qemu-system is slower.
Against either the dual-issue U74 cores in the JH7110 or the small OoO cores in the TH1520 and SG2042, qemu doesn't stand a chance on a core-for-core basis.
It used to be the case that qemu could win on x86 by throwing more cores at the problem, but with the 64 core SG2042 in the Milk-V Pioneer that possibility has disappeared too -- not to mention that the Pioneer is $1500 for chip+motherboard (need to add RAM and storage), while a 64 core x86 is $5000 just for the chip.
> There are plenty that exist, but i haven’t heard of anyone using them or any stores selling them.
You probably can't walk into your local mall and walk out with one, but it's easy enough to buy boards on Aliexpress or on Amazon (it's usually the same company, shipping from China, either way)
As for people using them, I guess most people simply don't advertise what they're using. I'm working at a very large company that is porting certain x86/Arm software to RISC-V. We're using the VisionFive 2 as the reference device. It's the best current combination of performance, price, and software maturity. I've also got the Lichee Pi 4A and the software works fine on that too (it would be shocking if it didn't) but it turns out that despite the specs on paper the VF2 is 20% faster anyway.
If you need to use SIMD/vector rather than plain C then the only choice for RVV 1.0 at the moment is the CanMV-K230 which has a single 1.6 GHz core vs quad core on the other boards. It's also only just come out. My order made on October 29th hasn't arrived yet, though they claimed on November 12 that they'd received stock for it. Mind you the Pi 5 I ordered on September 29 only just arrived last week, so this is not unusual for brand new boards.
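On the porting side, one small thing that pays off early is keying architecture-specific paths off the compiler's predefined macros rather than guessing from the OS. A minimal sketch (the macro names below are the standard ones GCC and Clang define for RISC-V targets; verify against your toolchain):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of a portable function with a hook for RISC-V specific tuning.
       __riscv is defined for any RISC-V target, __riscv_xlen is 32 or 64, and
       __riscv_vector is defined when the build enables the V extension
       (e.g. -march=rv64gcv), which is where an RVV path would live. */
    size_t count_nonzero(const int32_t *v, size_t n) {
    #if defined(__riscv) && defined(__riscv_vector)
        /* RVV 1.0 intrinsics or an autovectorized path could go here; kept
           scalar in this sketch so it builds on every target. */
    #endif
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            count += (v[i] != 0);
        return count;
    }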
It is an in-order CPU. It is meant for tasks where predictability and robustness are important. More like hard-ish realtime stuff in nasty environments (or... security? ahem...)
Interesting. It'd be nice to know if they're going to focus on HPC loads or hobby/consumer too. I should check to see if I still know people around the BSC :P
From the preprint [1] it looks like it is not meant for consumers.
> This way, Sargantana lays the foundations for future RISC-V based core designs able to meet industrial-class performance requirements for scientific, real-time, and high-performance computing applications.
It still is. You can just book it with reception directly, or if you attend a meeting or conference. Whenever I get friends in Barcelona I always invite them over too (anyone that works there can request a visitor badge and schedule the visit -- necessary avoid conflicts).