
You are right that it's hard to compete with x86, but it's for a weird reason (beyond the economic might of behemoths like Intel). x86 has good code density, so it can do more in a few bytes than sparser instruction sets like classic RISC. In the late 90s, when computers started being memory-bandwidth limited, PowerPC lost out even though it was perhaps a more "modern" architecture. I've often wondered if someone would generalize code compression (I could swear there was something like that for ARM?). Oh, and I suppose I'm more of a ranter than a hacker - too many years of keeping it all inside - so now I kind of regurgitate it in these ramblings...
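To make the density point concrete, here's a quick back-of-the-envelope sketch in Python. The x86 byte sequences are real encodings I've hand-checked (0x55 is push ebp, 0x89 0xE5 is mov ebp, esp, 0x89 0xD8 is mov eax, ebx); the RISC side just assumes a classic fixed-width 32-bit encoding, 4 bytes per instruction:

```python
# Same tiny instruction sequence, x86 variable-length vs. fixed-width RISC.
x86 = {
    "push ebp":     bytes([0x55]),        # 1 byte
    "mov ebp, esp": bytes([0x89, 0xE5]),  # 2 bytes
    "mov eax, ebx": bytes([0x89, 0xD8]),  # 2 bytes
}
RISC_WIDTH = 4  # bytes per instruction on a classic fixed-width 32-bit RISC

x86_total = sum(len(enc) for enc in x86.values())
risc_total = RISC_WIDTH * len(x86)
print(f"x86: {x86_total} bytes, fixed-width RISC: {risc_total} bytes")
# x86: 5 bytes, fixed-width RISC: 12 bytes
```

Less than half the bytes for the same work - and that ratio is roughly what you pay for in instruction-cache misses and memory bandwidth.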

ARM has a second instruction set built into it called Thumb, which encodes a subset of ARM operations in smaller, 16-bit instructions. ARM is also an incredibly complex architecture which -- as someone who can effectively write x86 assembly in his sleep now -- I can barely wrap my mind around.

the core dilemma of computer science is this: conceptually simple systems are built on staggeringly complex abstractions; conceptually complex systems are built on simple abstractions. which is to say the more work your system does for you, the harder it was to build.

there are no stacks which are pure from head to toe. I guarantee you, even the old LISP Machine developers at Symbolics had a hard time designing their stuff.

There are other downsides to the ancient x86 instruction set beyond a complicated decode step (which isn't all that complicated in transistor count). For example, think how much more efficient a compiler could be if it had 256 registers to work with. Or what if we could swap contexts in a cycle or two instead of the multi-cycle ordeal that's needed now to go from user space to kernel space? It would finally make a microkernel a viable option. Technically easy enough if you could start from scratch, but all existing software would need to be rewritten.
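You can actually feel the user/kernel transition cost from userland. Here's a rough sketch (Python, assuming a Unix-like system with /dev/zero): each os.read forces a round trip into the kernel, versus a plain user-space function call. The absolute numbers are noisy and include interpreter overhead, but the gap is the point:

```python
import os
import time

def avg_ns(fn, n=100_000):
    """Average wall-clock time of fn() over n calls, in nanoseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1e9

fd = os.open("/dev/zero", os.O_RDONLY)
syscall_cost = avg_ns(lambda: os.read(fd, 1))  # user -> kernel -> user each call
plain_cost   = avg_ns(lambda: None)            # stays entirely in user space
os.close(fd)

print(f"syscall: ~{syscall_cost:.0f} ns/call, plain call: ~{plain_cost:.0f} ns/call")
```

That per-crossing overhead, multiplied by the message traffic a microkernel generates, is exactly why the hardware cost of the transition matters so much.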

ARM Thumb is a 2-operand (i.e. one of the source registers is also the destination) instruction set over 8 registers, just like i386. It has similar code density to x86, at the expense of needing more instructions for the same work. It does lack x86's fancy memory addressing modes, though.
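To see what "2-operand over 8 registers" looks like at the bit level, here's a sketch of the Thumb-1 ALU data-processing format as I remember it (010000, then a 4-bit opcode, then 3 bits each for Rm and Rd - so only r0-r7 are reachable, and Rd is both a source and the destination):

```python
def thumb_alu(op: int, rm: int, rd: int) -> int:
    """Encode a 16-bit Thumb-1 ALU instruction: 010000 op(4) Rm(3) Rd(3).
    2-operand form: Rd := Rd <op> Rm, low registers r0-r7 only."""
    assert 0 <= rm <= 7 and 0 <= rd <= 7, "only the 8 low registers fit in 3 bits"
    return (0b010000 << 10) | (op << 6) | (rm << 3) | rd

ANDS = 0b0000  # flag-setting AND
insn = thumb_alu(ANDS, rm=1, rd=0)  # ands r0, r1  ->  r0 &= r1
print(f"{insn:#06x}")  # 0x4008, a single 16-bit halfword
```

Compare the 3-bit register fields with x86's ModRM byte, which also spends 3 bits per register - that shared constraint is a big part of why their code density comes out so similar.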

And I wouldn't say PPC lost. IBM has competitive CPUs on the market; they're just not in consumer devices. But they're just that: "competitive". They aren't much better (actually, pretty much nothing is better than Sandy Bridge right now).

I think this discussion may be going in the wrong direction. Arguing the merits of ARM and Power instructions over x86 just seems to be falling into the trap the article discusses - slightly different ways to keep doing the wrong thing.

To me, TFA is about a reassessment of fundamental assumptions, and it's about exploration. It doesn't suggest concrete solutions because nobody knows what they are, but it does suggest that our efforts to better the art have been short-sighted. Right now the next Intel or ARM chip is just another target for compilation, just another language or library to fight with instead of solving real problems - solving old problems, not just the latest new/interesting/imagined ones.

(FWIW, this particular example doesn't excite me too much - If the future is DWIM, it almost certainly has to be done first in software, even if it is eventually supported by specialised hardware.)

PowerPC lost? It is currently being used in the Xbox 360, PS3, Wii and (in the future) WiiU.
