
I don't know this world well (I know what LLVM is), but does anyone know why this was made as a fork rather than contributed to LLVM? I suppose it's harder to get code into the real LLVM?

Thanks





These processors were very, very different from what we have today.

They usually only had a single general-purpose register (plus some helpers). Registers were 8-bit but addresses (pointers) were 16-bit. Memory was highly non-uniform, with (fast) SRAM, DRAM and (slow) ROM all in a single address space. Instructions often operated on RAM directly, and there was a plethora of complicated addressing modes.

Partly this was possible because there was no big gap between processing speed and memory access; the gap we have today makes it very unlikely that similar architectures will ever come back.
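
To make that concrete, here is a hedged C sketch of the kind of code such targets see: 8-bit data, 16-bit pointers, and peripherals that are plain addresses in the same flat address space as RAM and ROM. The address used is the C64's VIC-II border-colour register, but it stands in for any memory-mapped device; the function itself is purely illustrative.

    /* Illustrative C for a 6502-class target: data is 8-bit, pointers are
       16-bit, and hardware sits in the same flat address space as RAM and
       ROM. 0xD020 is the C64 VIC-II border-colour register, used here only
       as an example of memory-mapped hardware. */
    #include <stdint.h>

    #define BORDER_COLOR (*(volatile uint8_t *)0xD020u)

    void fill(uint8_t *dst, uint16_t count, uint8_t value)
    {
        /* The 16-bit count and pointer do not fit the 8-bit accumulator,
           so a compiler has to synthesize them from 8-bit operations. */
        while (count--) {
            *dst++ = value;
        }
        BORDER_COLOR = value;   /* an ordinary store, but it hits hardware */
    }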

As interesting as experiments like LLVM-MOS are, they would not be a good fit for upstream LLVM.


> ... there was no big gap between processing speed and memory access; the gap we have today makes it very unlikely that similar architectures will ever come back. ...

Don't think "memory access" (i.e. RAM), think "accessing generic (addressable) scratchpad storage" as a viable alternative to both low-level cache and a conventional register file. This is not too different from how GPU low-level architectures might be said to work these days.


LLVM has very high quality standards in my experience, much higher than anything I've had to meet even at work. It might be a challenge to get this upstreamed.

LLVM is also very modular which makes it easy to maintain forks for a specific backend that don't touch core functionality.


My experience is that while LLVM is very modular, it also sees a fair amount of churn at the module boundaries, both in where they're drawn and in the interfaces between them. Maintaining a fork of LLVM with a new backend is very hard.

I know my company (AMD) maintains an llvm fork for ROCm. YMMV.

Super interesting, thanks. I specifically thought that its modular design made it possible to just "load" architectures or parsers as ... "plugins".

But I'm sure it's more complicated than that. :-)

Thanks again


LLVM backends are indeed modular, and the LLVM project does allow for experimental backends. Some of the custom optimization passes introduced by this MOS backend are also of broader interest for the project, especially the automated static allocation for provably non-reentrant functions, which might turn out to be highly applicable to GPU-targeting backends.
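
For readers unfamiliar with that pass, here is a conceptual before/after in C, not the actual llvm-mos implementation: if the compiler can prove a function is never re-entered (no recursion, no call path back into it, single-threaded), its locals can be promoted to fixed static storage, avoiding the software-emulated stack that is expensive on a 6502. The names are illustrative.

    /* What the programmer writes: ordinary automatic locals. */
    #include <stdint.h>

    uint8_t checksum(const uint8_t *buf, uint8_t len)
    {
        uint8_t acc = 0;
        for (uint8_t i = 0; i < len; i++)
            acc ^= buf[i];
        return acc;
    }

    /* Roughly what the optimization produces when checksum() is provably
       non-reentrant: the same locals promoted to fixed static storage, so
       every access is a cheap absolute (or zero-page) load/store instead of
       an indexed access through an emulated stack pointer. */
    static uint8_t checksum_acc;
    static uint8_t checksum_i;

    uint8_t checksum_static(const uint8_t *buf, uint8_t len)
    {
        checksum_acc = 0;
        for (checksum_i = 0; checksum_i < len; checksum_i++)
            checksum_acc ^= buf[checksum_i];
        return checksum_acc;
    }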

It would be interesting to have a viable backend for the Z80 architecture as well, which also seems to have a highly interested community of potential maintainers.


https://github.com/jacobly0/llvm-project

... but now three years out of date, because it's hard to maintain :-)


Pretty sure the prospects of successfully pitching upstream LLVM on including a 6502 (or any 8/16-bit arch) backend are only slightly better than a snowball's chances in hell.


