
This is correct.

I suppose this is relevant to the subset of the HN audience who attend FOSDEM. Even the talk abstract is worth discussing, as it highlights an important side effect of FOSS goals and the current state of the world.


All talks are recorded, so you can watch them live or on replay. Talks are free to attend; they are at the ULB campus in Brussels, 31st of January to 1st of February.

For a modern compiler and a more direct approach, I recommend https://www.cs.cornell.edu/~asampson/blog/llvm.html


LLVM makes it so much easier to build a compiler that it's not even funny. Whenever I use it, I feel like I'm just arranging some rocks on top of a pyramid.
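
To make the "arranging rocks" feeling concrete, here is a minimal sketch of emitting IR with the LLVM C++ API. It is not taken from the linked post; the file name and build command are just assumptions for a typical setup with the LLVM development headers installed.

    // add_ir.cpp - hypothetical example: emit LLVM IR for `i32 add(i32, i32)`.
    // Build with something like:
    //   clang++ add_ir.cpp $(llvm-config --cxxflags --ldflags --libs core) -o add_ir
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    int main() {
        llvm::LLVMContext ctx;
        llvm::Module mod("demo", ctx);
        llvm::IRBuilder<> builder(ctx);

        // Declare `i32 add(i32, i32)` with external linkage.
        auto *i32 = builder.getInt32Ty();
        auto *fnType = llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
        auto *fn = llvm::Function::Create(
            fnType, llvm::Function::ExternalLinkage, "add", &mod);

        // One basic block that returns a + b.
        auto *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
        builder.SetInsertPoint(entry);
        llvm::Value *sum = builder.CreateAdd(fn->getArg(0), fn->getArg(1), "sum");
        builder.CreateRet(sum);

        mod.print(llvm::outs(), nullptr);  // dump the textual IR to stdout
        return 0;
    }

Everything below that (optimisation passes, instruction selection, register allocation) is the pyramid underneath those rocks.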


A trend that started with tools like the Amsterdam Compiler Kit; LLVM just happens to be the most famous one.

https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit


Yet if only it weren't so huge, and compilation didn't take so much time :/


Using LLVM is an indirect approach that will limit the quality of your compiler.

When one looks at languages that use LLVM as a backend, there is one consistent property: slow compilation. Because of how widespread LLVM is, we often seem to accept this as a fact of life, as if we were forced to choose between fast runtime code and a fast compiler. This is a false choice.

Look at two somewhat recent languages that use LLVM as a backend: Zig and Rust. The former has acknowledged that LLVM is an albatross, and its developers are in the process of writing their own backends to escape its limitations. The latter is burdened with ridiculous compilation times that will never get meaningfully better so long as it avoids writing its own backend.

Personally, I find LLVM quite a disempowering technology. It creates the impression that its complexity is necessary for quality and performance, and makes people dependent on it instead of developing their own skills. This is not entirely dissimilar to another hot technology with almost the same initials.


Adrian Sampson (the author of that now 10-year-old blog post) has an online course for actually teaching compilers:

<https://web.archive.org/web/20210208162458/https://www.cs.co...>

Discussed several times on HN: <https://hn.algolia.com/?query=cs6120%20advanced%20compilers>

(And discussion about the blog post from last year about the IL used in the course: <https://news.ycombinator.com/item?id=41084318>.)



Seems AI-rewritten, or at least AI-reviewed.


I'd say that people treat everything as if it were gamified. So the motivation would just be to boast about having "raised 1 gazillion security reports in open-source projects such as curl, etc. etc.".

AI just makes these idiots faster these days, because the only cost for them is typing "inspect the `curl` code base and generate me some security reports".


I remember the DigitalOcean "t-shirt gate" scandal, where people would add punctuation to README files of random repositories to win a free t-shirt.

https://domenic.me/hacktoberfest/

It wasn't fun if you had anything with a few thousand stars on GitHub.


I'm sure the community will make an inexpensive Pringles version of it.


Retraining engineers as lawyers is clever. Better than repurposing them as subpar managers.


Oh don’t worry, they have plenty of those too


But then there is the software ecosystem issue.

Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.


RISC-V is rapidly growing the strongest ecosystem.

The new (but tier-1, like x86-64) Debian port is doing alright [0]. It'll soon pass ppc64 and close the gap with arm64.

0. https://buildd.debian.org/stats/graph-week-big.png


> But then there is the software ecosystem issue.

We still have problems with software not being optimised for Arm these days, which is just astounding given its prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present on x86, and Google has their own Arm-based chips.

Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.


Considering how often ARM processors are used to run an application on top of a framework over an interpreted language inside a VM, all to display what amounts to kilobytes of text and megabytes of images, using hundreds of megabytes of RAM and billions of operations per second, I'm surprised anyone even bothers optimizing anything, anymore.


For all its success, it's still kind of a niche language (and even with the number of Google compiler developers, they are spread thin between V8, Go, Dart, etc.).

I think the keys to Risc-V in terms of software will be,

LLVM (gives us C, C++, Rust, Zig, etc.); this is probably already happening?

JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest)

JVM (does Oracle have any interest at all? Could be a major roadblock unless Google funds it; again, it depends on Android interest).

So Android on RISC-V could really be a game-changer, but Google just backed down a bit recently.

.NET (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious market share/funding.

It'll remain a niche, but I do really think Android devices (or something else equally popular, a Chinese home PC?) would be the game-changer to push demand over the top.


> Even Golang

Golang's compiler is weak compared to the competition. It's probably not a good demonstration of most ISAs really.


Not an issue, because except for a few Windows or Apple machines, everything on ARM is compiled and odds are they have the source. Give our EEs a good RISC-V and a couple of years later we will have our stuff rebuilt for that CPU.
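
To illustrate the "just rebuild it" point, here is a hedged sketch: a trivial program cross-compiled for riscv64 and run under QEMU user-mode emulation. The package names and toolchain triplet are assumptions based on a Debian-style setup, not something from this thread.

    // hello.cpp - hypothetical example of rebuilding for a new CPU.
    // Assuming a riscv64 cross toolchain and QEMU user emulation are installed:
    //   sudo apt install g++-riscv64-linux-gnu qemu-user
    //   riscv64-linux-gnu-g++ -O2 -static hello.cpp -o hello-riscv64
    //   qemu-riscv64 ./hello-riscv64
    #include <iostream>

    int main() {
        std::cout << "Hello from riscv64\n";  // same source, new target
        return 0;
    }

Statically linking sidesteps needing a riscv64 sysroot for this toy example; real projects would also want native hardware or full-system emulation for testing.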


The whole reason the ARM transition worked is that you had millions of developers with MacBooks who, because of Rosetta, were able to seamlessly run both x86 and ARM code at the same time.

This meant that you had (a) strong demand for ARM apps/libraries, (b) a large pool of testers, (c) developers able to port their code without needing additional hardware, and (d) developers able to seamlessly test their x86/ARM code side by side.

RISC-V will have none of this.


Apple is the only company that has managed a single CPU transition successfully. That they actually did it three times is incredible.

I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of architecture-specific bugs? You can't convince me that choosing an esoteric, lightly used arch like big-endian PowerPC won't come with bugs related to it that you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.

It happened to me: a small project I put on my ARM-based AWS server was not working even though it was compiled for the architecture.


Apple’s case is really good indeed.

Having a clear software stack that you control plays a key role in this success, right?

Wanting the general solution, with millions of random off-label hardware combinations to support, is the challenge.


Embedded is far larger than PCs and doesn't need that. Phones, too, are larger, and already you recompile as needed.


> […] and odds are […]

When it comes to the adoption of a new ISA, there are no odds even if the sources exist; it is the scale and the QA that either are or are not there.

The arrival of the first wave of Apple Silicon in 2020 led to a very hectic 2021 and beyond, with people rushing in to fix numerous issues, mostly (but not only) in Linux for aarch64, ranging from bugs to unoptimised code. OpenJDK, which had existed for aarch64 for some time, was so unstable that it could not be seriously used natively on aarch64, and it took nearly a year to stabilise it. Hand-optimising OpenSSL, Docker, Linux/aarch64 and many, many other packages also took time.

It only became possible because of the mass availability of the hardware (performant consumer-level arm64 CPUs), which led to mass adoption of the hardware architecture at the software level. aarch64 has now become a first-class citizen, and Linux as well as other big players (e.g. cloud providers) have vastly benefited from it as a whole. It is far from certain that, without the Apple Silicon catalyst, we would have seen Graviton 4 in 2024 (the 4th gen in just 5 years), large multi-core Ampere CPUs in 2023/24 and even a performant Qualcomm laptop CPU this year.

Mass hardware availability to lay people, leading to mass adoption by lay people, is critical to the success of a new hardware platform, as all of a sudden a very large pool of free QA becomes available, which spurs further interest in improving the software support. Compare it, for instance, with the POWER platform, which is open and whose hardware has been available for quite a while; however, there has been no scale. The end result is that the JIT still yields poor performance in Firefox/ppc64. Embedded people and hardware enthusiasts are not the critical mass required to trigger a chain reaction that leads to platform success; it is the lay people incessantly whining about something not working and reporting bugs.

Then there is also a reason why OpenBSD still holds on to a zoo of ancient, no longer available platforms (including a Motorola 88k): they routinely compile the newly written code (however many moons it takes them to do it) and run it on the exotic hardware today, with the single narrow purpose of trapping bugs, subtle and less subtle ones, caused by architectural differences across the platforms. Such an approach stands in stark contrast to the mass-availability one; it does not scale as much, but it is a viable approach, too. And this is why the OpenBSD source code has a much better chance of running flawlessly on a new ISA.

Hence, hardware platform adoption is not as simple an affair as some enthusiastically try to portray it to be.


Embedded has been doing just that for their platforms for ages. They don't care about most of the things you list, though.

Not that your point is wrong, but for most uses it doesn't matter. It would be better if they had it, but they don't need it.


> they don't care […]

Precisely. Embedded cares only about one thing: «get the product off the ground and ship it fast, bugs included». And since the software in embedded is not user-facing, they can get away with «power cycle the device if it stops responding» recommendations in the user guide.

Embedded also sees the CPU as a disposable commodity and not a long-term asset, and it is a well-entrenched habit to throw the entire code base away if another, alternative CPU/ISA (cheaper, more power-efficient, you name it) comes along. Where is all the code once written for the 68HC11, PIC, AVR, etc.? Nowhere. It has all but been thrown away for varying reasons (architecture switches, architecture obsolescence and such). The same has not happened for Intel, and the code is still around and running.

For more substantial embedded development, the responsibility of adopting a new ISA falls on the vendor of the embedded OS/runtime (e.g. VxWorks) or the embedded CPU vendor, who makes reasonable efforts to support the hardware features important to customers but does not carry out extensive testing of all features. Again, the focus is on allowing the vendor's customers to ship the product fast. The quality of development toolchains for embedded is also not infrequently questionable, and complaints about poor support for the underlying hardware are common. They are typically ignored.

> but for most uses it doesn't matter […]

Which is why embedded is not a useful success metric when it comes to predicting the success of a CPU architecture in user-facing scenarios (namely, personal and server computing).


We've seen compatibility layers between x86 and ARM. Am I correct in thinking that a compatibility layer between RISC-V and ARM would be easier/more performant, since they're both RISC architectures?


There are already compatibility layers for x86 on RISC-V. They’re not quite as good, but progress is being made.

Edit, link: https://box86.org/2024/08/box64-and-risc-v-in-2024/


That’s the model when you’re in the IP business - nothing new here.

This is what you use to fund the next generation of said IP. There is no magic.


The whole ALA essentially boils down to "you pay us because other companies made our ISA popular".

This is why companies are pushing toward RISC-V so hard. If ARM's ISA were open, then ARM would have to compete with lots of other companies to create the best core implementations possible or disappear.


Well, I suppose the magic is in crossing your t's and dotting your i's when drafting legal contracts pertaining to said IP. Arm failed to do that.


There is only one correct answer to that prompt.

