
This is a smart move. It's essentially what the System/38 (later AS/400 & IBM i) did. They had an ISA or microcode layer that all apps were compiled to. Then, that was compiled down to whatever hardware they ran on. When IBM switched to POWER processors, they just modified that low layer to target POWER. That let them run the old apps without recompiling their original source. They used this strategy over and over for decades. Always one to keep in mind.

Going further, I think a team could get interesting results combining this with design-by-contract, typed assembly, and certified compilation. Much like verification condition generators, the compilation process would maintain a set of conditions that should be true regardless of what form the code is in. By the time it gets to the lower level, those conditions and the data types can be used in the final compile to the real architecture. That would preserve enough context for doing safe/secure optimizations, transforms, and integration without the original source.
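To make the idea concrete, here's a minimal sketch (in Python, purely hypothetical — the `Instr` type, the condition strings, and the `lower_mul_to_shift` pass are all invented for illustration) of an IR where every instruction carries its verification conditions, and a lowering pass is obligated to copy them onto whatever it emits:

```python
from dataclasses import dataclass

@dataclass
class Instr:
    op: str
    args: tuple
    # Conditions that must hold in every form of the code,
    # e.g. bounds checks or overflow constraints (hypothetical notation).
    conditions: tuple = ()

def lower_mul_to_shift(instrs):
    """Strength-reduce `mul x, 2` into `shl x, 1`.

    The transform changes the instruction's form but must carry the
    attached conditions forward unchanged, so a later stage (or a
    checker) can still verify them against the final machine code.
    """
    out = []
    for i in instrs:
        if i.op == "mul" and i.args[1] == 2:
            out.append(Instr("shl", (i.args[0], 1), i.conditions))
        else:
            out.append(i)
    return out

prog = [Instr("mul", ("x", 2), conditions=("no_signed_overflow(x*2)",))]
lowered = lower_mul_to_shift(prog)
assert lowered[0].op == "shl"
assert lowered[0].conditions == ("no_signed_overflow(x*2)",)
```

A real system would of course use proof terms or typed annotations rather than strings, but the shape is the same: the conditions ride along with the code through every transformation.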

Based on other comments here, it sounds like the LLVM Bitcode representation is more closely tied to a specific architecture than the System/38 intermediate representation was. Same idea, though.

I agree. That's either an advantage or another opportunity for modern IT to learn from the past. Those old systems were full of so many tricks that they're still outclassing modern efforts in some ways haha.

