
Though I speak from a probably utterly uninformed + unqualified standpoint, this raises a few points for me:-

1. If x86 hardware is so terrible (and I have heard many times that the architecture really is bad), how come we don't have competing chips out there which are many, many times more efficient? I know ARM wins on the low-power front, but not on raw performance to my knowledge. Do such chips exist? And if not, why not, if this is true? Even with x86 backwards-compatibility concerns, you could bring out a theoretically amazingly powerful chip, port compilers to it to leverage it, and gain some (possibly niche) market hold that way.

2. I think he is underestimating the vast complexity of computing, and deeply underplaying the techniques which have evolved to get us where we are. Yes, I have again heard that the Von Neumann architecture has shortcomings, but look at what computing has enabled us to do - it has changed the world + enabled amazing things which are evolving at breakneck speed. Again - if there really is an ideal alternative approach, why isn't it being pursued? I don't buy the conspiracy theory crap about vested interests holding it back. Again, you could do well in a niche if the alternatives really are that much better.

3. I think it is likely that, as things evolve, previous ideas and approaches will be overturned and thrown away, as in any pursuit. As @deweller says, this is true of any field. Ask any professional and they will tell you how their industry is fucked in 100 different ways. It is frustrating to hear software engineers talk about how terrible, immature, hackish + crap it all is while assuming that things have some sort of crystalline beauty in other industries. I did Civil Engineering at (a good) university, and it turns out that most consultancies use linear models to model everything and then apply an arbitrary safety factor, resulting in hopelessly inefficient, over-engineered structures all over the place + turning the job into something of an uncreative administrative role in some respects (with no disrespect intended for all the civil engineers out there).

4. It is my theory that any attempt to interface with the real world and actually do useful, real things results in an enormous amount of uncertainty and chaos, in which you still have to make decisions, however imperfect and however negatively others will judge them going forward. I think that's true because the world is chaotic, complicated and imperfect, and to influence it means to be touched by that too. That doesn't excuse poor engineering or not caring or stupid decisions, etc., but it is a factor everywhere I think.

5. So change it :-) There are efforts afoot everywhere struggling to improve things, and from the sounds of it the guy has perhaps not tried enough of them. I feel Go is a great step in the right direction for a systems language which removes a lot of the crap you don't want to think about, as does C# (though that does potentially tie you to MS crap).

6. Programming is difficult, solving real problems is difficult, abstractions are nice but leak, and sometimes it's quite painful to have to put up with all the crap other people hacked together to make things work. But the value is in the end product - I don't care that my beautiful MacBook Air runs on a flawed architecture and some perhaps imperfect software; the beautiful ease of use is what matters to me. We mustn't lose sight of these things, though again this is not an excuse for crap code. There is definitely a balance to be struck.

7. Zack Morris is clearly a highly accomplished + competent hacker given what he's done and the obvious passion he writes with (to care enough to be disappointed + upset by these things is indicative), which I think has a bearing - the deeper your knowledge, the better your skill, the more faults you notice by dint of your intelligence + competence. There is a definite curse at play here.

Anyway, just my 2p :)




I resonate with Zack's rant; we have apparently chewed on many of the same bugs.

A quick answer to your questions, @singular: 1) Existing code - this trumps writing everything from scratch. 2) I don't agree; I believe the computations are straightforward. My belief is that what you perceive as 'progress' is mostly just 'go really really fast.' At the Vintage Computer Festival I showed a Microsoft engineer an RDBMS being installed on VMS while four people were playing games and exploring VMS on four terminals connected to the machine; then I fired up and ran the test code to show that the software had installed and was usable. It did not impress him that I didn't reboot the system once, nor did the other four people using the system notice I had installed a new capability that was available to everyone using the OS. Those are not design goals of a 'personal' OS like Windows/DOS/NT, although they could be. The stuff you learn in CS classes about architecture and layers and models and invariants can make for very robust systems.

3. My experience is that programmers program. Which is to say that they feel more productive when they produce 10,000 lines of code than when they delete 500 lines and re-organize another 500. Unlike in more 'physical goods' types of industries, it is easier to push that stuff out into production. So more of it ends up in production.

4. Not sure where this was going.

5. This is something I like to believe in too: it's just code, so write new code. Hey, Linus did it, right? The challenge is that it will take 4-5 years for one person to get to the point where they can do 'useful' stuff. That is a lonely time. I wrote the moral equivalent of ChromeOS (although using eCos as the underlying task scheduler and some of the original Java VM as the UI implementation). Fun to write, but not something picked up by anyone. You get tired.

6. I'd take a look at eCos here, one of the cool things about that project was a tool (ecosconfig) which helped keep leaks from developing.

In the 'hard' world (say Civil Engineering) there are liability laws that provide a corrective force. In software it is so easy to just get something put together that kinda works that, unless you are more interested in the writing than the result, you may find that you're spending less time on structure and more on results.

-----


1. I agree that's a huge, massively important thing, but there are non-x86 processors in the world which find their niche (in ARM's case it's quite a huge niche), so if it really is possible to develop a processor so much better than x86, why doesn't one already exist? I am hardly very well informed on the processor market, so for all I know they do exist, though I'd be surprised.

2. Sure. I guess what I'm getting at is that we've done amazing things with what we've got; I'm by no means suggesting we shouldn't take a broader view and replace crap, or at least work towards it where market entrenchment makes things difficult. The point is, again, that if there exists such a plausible alternative to the Von Neumann architecture, then why aren't there machines out there taking advantage of it? Again, you could probably fill a niche this way. I suppose, in answer to my own question here, that you would be fighting a losing battle against the rest of the hardware out there being reliant on V-N, but still, I'd have assumed that something would exist :)

3. Yeah. But it's hard + often the harder path to do things right in any industry. Such is life, not that that excuses anything.

4. A sort of philosophical point. Feel free to ignore :-).

5. There is stuff out there that already exists too, though. Go, OCaml, Haskell and F# are all really interesting languages which in their own ways tackle a lot of the accidental-complexity problems out there. Plan 9 + Inferno are very interesting OSes, though they are probably a little too niche to be all that useful in the real world. But yeah, understandable - fighting the tide is difficult.

6. Cool, will take a look.

Yeah - one of the things that attracts me to software engineering is the relative freedom you get to be fully creative in solving a problem. However that cuts both ways it seems.

-----


> so if it really is possible to develop a processor so much better than x86, why doesn't one already exist?

Suppose I invented a new chip that was awesome for gaming, spreadsheets, word processing, databases and power consumption.

Who would build PCs with it?

What OS would it run if someone built it?

Who would buy that?

It's not merely a "huge, massively important thing". It's the only thing.

-----


Intel & HP tried this with the Itanium. A decade (or two) and billions of dollars later, x86 - or at least x64 - is still king.

No doubt they made some mistakes, but it wasn't for lack of trying.

(And having debugged code on an ia64 I'm quite happy with the status quo!)

-----


Yes, and it was hilarious. It stunned me that Intel couldn't see that AMD was going to eat their lunch once AMD figured out a way to extend the x86 architecture to 64 bits while retaining software compatibility. For years Intel had beaten challenger after challenger on the strength of the juggernaut of their software base (680x0, MIPS, NS32032, SPARC, PowerPC, etc.), and yet their brash attempt to push Itanium by not extending x86 ran counter to all those previous victories. Kind of like a general taking a battle plan known to work and ignoring it.

As we move into an era of 'I don't see why I should get a new machine' growth minimization, there is a window for folks like ARM to get in with 'all-day computing.' But it will take someone extraordinary to make that happen. Look at the state of Linux on ARM to understand the power of a legacy architecture.

-----


> I don't agree; I believe the computations are straightforward

That is, until you do them in parallel...

-----


The x86 thing is just grousing by people who think aesthetics in the assembler are the definition of a clean architecture. Instruction decode for a modern x86 CPU is indeed a difficult problem when compared with cleaner architectures (though ARMv7 is hardly "clean" -- how many instruction sets does it support now? Five, I think?). Instruction decode, however, is one of the easiest problems to tackle on a CPU. It just happens to be the part that software people understand, so it's what they like to whine about.

You're absolutely right: if it could be done much better, there'd be an example in the market to prove it. Yet Intel is walking all over the market, and has been for the last 8 years or so.

-----


It's amazing how much turd polish several billion in R&D will buy you.

x86 is amazing in spite of its legacy and poor architectural choices.

-----


Statements like this -- saying that x86 is either good enough or as good as anything could possibly be anyway -- sound to me like a lack of imagination, lack of perspective, or not wanting to stir up any cognitive dissonance given that market forces have caused x86 to dominate.

Would you also say that there probably couldn't exist a significantly better OS than Mac, Windows, or Linux, or else we'd know about it? I admit "better" can be hard to define; what would make a 10x better car, or IP protocol, for example? It strains the imagination, because what makes these things "good" is complicated. But ask anyone who was around during the early proliferation of computer architectures and operating systems, and they will tell you that what ended up winning out is markedly inferior to other things that could have been. Paths get chosen, decisions get made, billion-dollar fabs get built. The market doesn't pick the best technology.

It's like web standards -- accept them, but only defend them to a point. We may be stuck with CSS, but that doesn't mean it's any good or that it succeeded on its merits.

-----


1. Well, GPUs crush x86 CPUs in some areas, so there is at least one competing technology that is a clear win. Also, Intel added both a GPU and a video decoder to their CPUs, but neither of them uses anything close to x86.

As to the rest of it, I think you can look at microwaves for a perfect example of terrible software in widespread use. You only need to be able to select a cook time, possibly a power level, and set the clock. Yet most microwaves have such a terrible interface that few guests feel embarrassed asking how to get an unfamiliar one to work. And as long as it takes more effort to send it back than it takes to figure out the strange design, there is little reason to build a better system.

-----


You are right that it's hard to compete with x86, but it's for a weird reason (beyond the economic might of behemoths like Intel). x86 has good code density, so it can do more in a few bytes than less dense instruction sets like most RISCs. In the late 90s, when computers started being memory-bandwidth limited, PowerPC lost out even though it was perhaps a more "modern" architecture. I've often wondered if someone would generalize code compression (I could swear there was something like that for ARM?). Oh, and I suppose I'm more of a ranter than a hacker - too many years of keeping it all inside - so now I kind of regurgitate it in these ramblings...
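
To make the density point a bit more concrete, here's a toy sketch (not a benchmark, and the real codegen obviously depends on the compiler and flags): the accumulation below can compile on x86 to a single instruction with a memory operand, whereas a pure load/store RISC needs a separate load plus an add - more instruction bytes fetched for the same work.

    /* Toy illustration of code density, not a benchmark: on x86 the
       loop body can become one memory-operand instruction (roughly
       "add rax, [rdi + rcx*8]"), while a load/store RISC needs a
       load followed by an add. Actual codegen varies by compiler. */
    long sum(const long *p, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += p[i];   /* the read-modify step referred to above */
        return s;
    }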

-----


ARM has a second instruction set built into it called Thumb, which performs a subset of ARM operations in smaller instruction sizes. ARM is also an incredibly complex architecture which -- as someone who can effectively write x86 assembly in his sleep now -- I can barely wrap my mind around.

the core dilemma of computer science is this: conceptually simple systems are built on staggeringly complex abstractions; conceptually complex systems are built on simple abstractions. which is to say the more work your system does for you, the harder it was to build.

there are no stacks which are pure from head to toe. I guarantee you, old LISP Machine developers from Symbolics probably had a hard time designing their stuff as well.

-----


There are other downsides to the ancient x86 instruction set besides a complicated decode step (which isn't all that complicated in transistor count). For example, think how much more efficient a compiler could be if it had 256 registers to work with. Or what if we could swap context in a cycle or two instead of the multi-cycle ordeal that's needed now to go from user space to kernel space? It would finally make a microkernel a viable option. Technically easy enough if you could start from scratch, but all software would need to be rewritten.
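
To illustrate the register point, a rough sketch (hypothetical, not a benchmark): eight live accumulators fit comfortably in x86-64's 16 general-purpose registers, but scale the same pattern up to 64 or 128 live values - the kind of thing a blocked numerical kernel wants - and the compiler has to start spilling to the stack; with 256 architectural registers it wouldn't.

    /* Sketch of register pressure: eight accumulators stay in
       registers on x86-64; many more would force stack spills,
       which a 256-register machine could avoid. */
    long dot8(const long *a, const long *b, long n) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        long s4 = 0, s5 = 0, s6 = 0, s7 = 0;
        long i = 0;
        for (; i + 8 <= n; i += 8) {
            s0 += a[i]     * b[i];      s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];  s3 += a[i + 3] * b[i + 3];
            s4 += a[i + 4] * b[i + 4];  s5 += a[i + 5] * b[i + 5];
            s6 += a[i + 6] * b[i + 6];  s7 += a[i + 7] * b[i + 7];
        }
        long s = s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
        for (; i < n; i++)
            s += a[i] * b[i];           /* leftover tail elements */
        return s;
    }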

-----


ARM Thumb is a 2-operand (i.e. one source register doubles as the destination) instruction set over 8 registers, just like i386. It has similar code density to x86, at the expense of fewer instructions per cycle. It does lack x86's fancy memory-addressing modes, though.

And I wouldn't say PPC lost. IBM has competitive CPUs in the market; they're just not in consumer devices. But they're just that: "competitive". They aren't much better (actually, pretty much nothing is better than Sandy Bridge right now).

-----


I think this discussion may be going in the wrong direction. Arguing the merits of ARM and Power instructions over x86 just seems to be falling into the trap the article discusses - slightly different ways to keep doing the wrong thing.

To me TFA is about a reassessment of fundamental assumptions, and it's about exploration. It doesn't suggest concrete solutions because nobody knows what they are, but it does suggest that our efforts to better the art have been short-sighted. Right now the next Intel chip or ARM chip is just another target for compilation, just another language or library to fight with instead of solving real problems - solving old problems, not just the latest new/interesting/imagined ones.

(FWIW, this particular example doesn't excite me too much - if the future is DWIM, it almost certainly has to be done first in software, even if it is eventually supported by specialised hardware.)

-----


PowerPC lost? It is currently being used in the Xbox 360, PS3, Wii and (in the future) WiiU.

-----


1. It is a lie that people like to wank on about. The inefficient x86 portion of a CPU they talk about is less than 1% of the transistors in a modern processor. The first thing an Intel processor does when it receives an x86 opcode is translate it into a more efficient internal opcode that it actually executes. In fact, it could be claimed this has helped to improve efficiency: front-side bus bandwidth is at a premium, and opcodes that convey more information help conserve FSB bandwidth. Basically, if it wasn't a processor designer who told you that, I would take it with a grain of salt, because people (including myself) like to pontificate on topics which are adjacent to their domains of knowledge but about which they know little.

-----


We still have x86 because we don't like to change things. We like to be able to run Windows 95 apps on a modern desktop, just like how we wanted to be able to keep using our original B&W TVs when colour and widescreen were introduced (although, since analogue signals are dying out, this will soon not be possible; hopefully interlacing will die out soon too). We like backwards compatibility - upgrading our chips while still retaining compatibility.

Also, x86 isn't as bad as it is made out to be either, even if most of that is just how well our compilers can optimise x86 code.

-----


> 1. If x86 hardware is so terrible (and I have heard many times that the architecture really is bad), how come we don't have competing chips out there which are many, many times more efficient? I know ARM wins on the low-power front, but not on raw performance to my knowledge. Do such chips exist?

What about IBM's Cell (the one used in PlayStation 3)? Or GPU related technologies, like nVidia CUDA?

-----


I am no fan of x86; however, as I understand it, it's really not as bad as it seems. Modern CISC CPUs have turned into RISC-CISC hybrids; they have a CISC frontend/interpreter that drives a RISC backend. As a result, you get most of the best of both worlds.

-----



