The 432 had incredible hype: "The vacuum tube, the transistor, the microprocessor - at least once in a generation an electronic device arises to shock and strain designers' understanding. The latest such device is the iAPX 432 micromainframe processor, a processor as different from the current crop of microprocessors (and indeed, mainframes) as those devices are from the early electromechanical analog computers of the 1940's." 
This 32-bit machine had some very unusual features. It implemented support for objects at the hardware level, with access protection on a per-object basis. Even the kernel doesn't have access to everything. Had that model caught on, the world would be much more secure, with no more buffer-overflow exploits.
This chip was started before the 8086 and included a virtual address space of 2^48 bytes. It was designed to be programmed entirely in high-level languages. The processor also included garbage collection in hardware, and it supported floating point and multi-processor operation well before x86 did. Part of the operating system was built into the chip: the policies were defined in software, but the implementation was on the chip.
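To make "per-object access protection" concrete: in a capability architecture like the 432's, a reference isn't a raw pointer but a descriptor that carries rights and bounds, and every access is checked against it. A rough software analogy in C (purely illustrative, nothing like actual 432 microcode):

    /* Illustrative only: a software caricature of capability-based
       addressing. The 432 did these checks in hardware on every access. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum { RIGHT_READ = 1, RIGHT_WRITE = 2 };

    typedef struct {
        void    *base;   /* the object's storage                  */
        uint32_t length; /* its size; bounds are checked too      */
        uint32_t rights; /* what this particular reference may do */
    } capability_t;

    /* All access is mediated; there is no way to forge a pointer. */
    static bool cap_read(const capability_t *c, uint32_t off,
                         void *out, uint32_t n) {
        if (!(c->rights & RIGHT_READ)) return false;  /* rights check */
        if (n > c->length || off > c->length - n)     /* bounds check */
            return false;
        memcpy(out, (const char *)c->base + off, n);
        return true;
    }

    int main(void) {
        char secret[16] = "top secret";
        /* A read-only capability: writing through it is impossible,
           and so is reading past byte 16 - no buffer overflows. */
        capability_t c = { secret, sizeof secret, RIGHT_READ };
        char buf[32];
        printf("in-bounds read:     %d\n", cap_read(&c, 0, buf, 10));
        printf("out-of-bounds read: %d\n", cap_read(&c, 8, buf, 16));
        return 0;
    }

The point is that even privileged code only holds the capabilities it was explicitly granted.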
It's interesting to think what computers and programming would be like if the Intel 432 had succeeded instead of x86. We'd probably have super-secure computers and be programming in Ada.
The failures you describe in the post were technical manifestations of an operational failure in bringing an engineering project to completion. The scope was too large, with too many unknowns. Had they scheduled the project relative to Moore's Law, they could have kept the team smaller until they had the transistor budget to ship the chip they designed. You should read about how the Pontiac Aztek was made.
From my point of view it actually seems like a terrible reason in the long run, given that even something as widely derided as the x86 architecture has, over time, been made to perform.
Only recently has the bicycle seen a resurgence in the West. A small uptick doesn't mean bicycles as a technology have succeeded to the level they should have compared to the alternatives.
But certainly processors can do more than they do. For example, couldn't x86 offer a memcpy or memmove instruction and just decode it internally as the right "fast" implementation, instead of making people update their stdlib with complicated code?
I don't really know what I'm talking about, but it seems I often run across long discussions where people are trying all sorts of instruction sequences and they're model-specific and it just seems like there should be a few more higher-level instructions exposed. (Well perhaps that's the point of all the SSE and AVX instructions.)
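(As it happens, x86 has had something close to a memcpy instruction since the 8086: "rep movsb", which copies RCX bytes in one go. On CPUs with the ERMS feature, the microcode picks a fast internal strategy, which is more or less the "decode it the right way" idea. A minimal sketch, assuming GCC or Clang on x86-64:)

    /* A one-instruction memcpy: rep movsb copies rcx bytes from
       [rsi] to [rdi]. With ERMS the CPU chooses a fast strategy. */
    #include <stddef.h>
    #include <stdio.h>

    static void rep_movsb_copy(void *dst, const void *src, size_t n) {
        __asm__ volatile("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");
    }

    int main(void) {
        char src[] = "hello, world";
        char dst[sizeof src];
        rep_movsb_copy(dst, src, sizeof src);
        printf("%s\n", dst);  /* prints: hello, world */
        return 0;
    }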
He even wrote an (unpublished) book about programming whose third basic principle is "Do It Yourself!" (a.k.a. "Invent The Wheel!"; http://www.colorforth.com/POL.htm). That alone moves it into its own universe. If you don't think so, read on to "Make variables as GLOBAL as possible. Why not? You can save some space and clarify your requirements."
> It implemented support for objects at the hardware level,
> with access protection on a per-object basis.
The 432 was slow, but the i960 was a clean RISC that could have had a future.
Another weird attempt from the early '80s. Similar fate. Very surprising company, though.
All of the above are highly recommended, by the way.
Similar themes in Roger Williams's "The Metamorphosis of Prime Intellect" (an AI accidentally takes over the universe).
I wish more stories would start off with one magic point, cast one universe-altering spell, suspend my disbelief once, and then just deal with the consequences.
Yeah I hear people call this "hard sci-fi", but that's not really fitting. It can apply to any fiction. There's fantasy like Harry Potter where the world is just unbelievably inconsistent (as HPMOR loved to point out). Compared to, say, Mistborn (I don't read a lot of fantasy), which introduces its restricted magic system and more-or-less deals with it from there.
And the one big change can be huge, unrealistic, too! Like the Culture books - posit that we've got hyperintelligent friendly AI that can warp many dimensions at will - the rest fits in more-or-less from there; but no one would call Culture hard sci-fi.
Egan's publisher recently ran off a new printing of many books in his back catalog that were hard to find in the US.
> BLIT (which stands for Berryman Logical Image Technique) is a short science-fiction story written by author David Langford. It features a setting where highly dangerous types of images called "basilisks" have been discovered; these images contain patterns within them that exploit flaws in the structure of the human mind to produce a lethal reaction, effectively "crashing" the mind the way a computer program crashes when given data that it fails to process.
Does Stross actually manage to use real theoretical CS in that series?
I can't say the whole film is worth the time, but I really loved how in the first minutes it establishes its alternate history with a single sentence: introducing a scientist who "in 2008 was awarded ... the Fields Medal for his proof of the nonexistence of one-way functions"
[when I said "reminds me of Traveling Salesman (2012)" I just meant examining the consequences if P=NP; that was before I followed your link and was reminded how good sci-fi _should_ be — it absolutely pales in comparison to Antibodies.]
You can read a bit early on in the first book of the series where he talks a bit about this stuff, on Google books, page 17: https://books.google.ca/books?id=GfSGzhDcU2UC&lpg=PP1&dq=atr...
This looks pretty damning: http://en.wikipedia.org/wiki/Petersen_graph
(Don't email it to cstross, I already did ;-)
I think this subthread fills that desire nicely. :)
(Despite the domain name, it is a science fiction piece. Nature publishes short science fiction under their 'Futures' column.)
I feel that part of why these books seem so alien is that most people are taught programming as if it were two different disciplines: "low level" algorithms, with fixed data types, and big-O complexity theory; and "high level" systems design, with type abstraction and object patterns. While a truly skilled programmer must understand both worlds, this sort of model has them separated in the same manner physicists seem to separate general relativity and quantum chromodynamics (or oil and water).
Books on Smalltalk and Forth, like those listed in the article, frequently reveal a mode of programming which is neither purely "high" nor "low" level. Yet despite their non-conformity, neither language is haphazard or capricious in design. Instead, they both seem to embody the unofficial motto of the US Army Corps of Engineers: "The difficult we do immediately. The impossible takes a little longer."
For example, throw out the notion that memory is volatile - or slightly more practically, what if the price we pay for automatic memory management in our programming languages also bought us abstraction over the volatility of memory? How different would our systems look? For one thing, switching things off and back on again wouldn't be the "cure-all" that it mostly is today.
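A crude taste of that already exists: back a region of memory with a file via mmap, and the state outlives the process. A toy sketch in POSIX C (the filename heap.img is made up, and a real design would need msync, versioning, and crash consistency):

    /* Toy stand-in for non-volatile memory: a counter that survives
       restarts because the page is backed by a file, not RAM alone. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("heap.img", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;
        if (ftruncate(fd, 4096) != 0) return 1;  /* fixed-size toy heap */
        int *counter = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);  /* writes hit the file */
        if (counter == MAP_FAILED) return 1;
        printf("run number %d\n", ++*counter);   /* state outlives us   */
        munmap(counter, 4096);
        close(fd);
        return 0;
    }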
The fact that we can build systems like Smalltalk tells us that much of our current notions of computing are merely convention, not axiomatic at all.
Smalltalk and Forth are definitely "different convention" things, while SICP and CTM are more like detailed examinations of things that might really be axiomatic, giving us the means of combination, and hopefully the means to imagine building things beyond what our mindset of present conventions would allow.
The current (older) me just had my mind blown by 'Self' - but there is no book.
Anyone have an idea about what would pair well with Neil Gaiman?
I think we'd all welcome a new, more accurate term, but no one has really come up with a good one. Maybe Applied Computational Philosophy?
The fatal error in computer science was that it modeled complex systems without truly understanding them. Computers simulated complexity. You might know more or less what was likely to happen. But the causes remained unclear.
- Bruce Sterling, The Zenith Angle
Firstly "computer" used to mean the people that did computations, so, that untethers computation from the material doing those computations. And secondly, if you buy into the philosophy of it, computation is all around us, binds us in a way like the force in Star Wars.
But it is called computer science, whoever or whatever is doing the computation, not computation science, so I'm not sure that either part of your response applies.
Actually, I disagree with the grandparent (hence, I suppose, with Sterling) differently: I think that it's rather common to name sciences after gadgets, depending on how flexible you are about what is called 'science'. The first example that came to mind, just because I have a colleague who works on it, is cryo-electron microscopy (https://en.wikipedia.org/wiki/Cryo-electron_microscopy). It's fair to argue whether that's really a 'science' as opposed to just a 'technique', but I'm confident that there are other examples that are more clearly 'sciences'.
And yet, mechanical, civil, chemical, and electrical engineers have made great strides without such understanding, in fields where it is truly impossible to fully understand or model the forces at work, the materials in play, the structures in shape, or the physics in motion.
They manage to get by on approximations, so that's not it.
I agree that the term is used too broadly. I don't regard myself as a "Computer Scientist". I'd argue that "Software Engineer" is another term that is too loosely applied, although it does have a definition.
It's a nice quote, but science is all about modeling complex systems as a way of understanding them... arguably, human "understanding" of systems is just modeling of them.
Computer science is about as useful a term as polymorphism (borrowed from biology). It serves to put lipstick on a pig.
What if the science of physics had started with the Grand Unified Theory, knowing what the fundamental properties & rules of the universe were? Would it still be science as we then worked from both ends of knowledge (basics up and complexity down)? How then would it be different from "computer science", where we start both with the basics (0, 1, NOR) and from the complexity of the behavior we see & desire, working from both ends toward the middle for it all to come together and function?
I'd say yes, and this article is representative of my thinking on the subject (easier than trying to write an essay in an HN comment).
You can argue that they're both not sciences, but not in the same sense.
If you have any doubts please browse a few back issues of 'Communications of the ACM'.
If so, do those methodologies in physics also stop being "science" and start being a tool as well?
If you use a Bunsen burner in a method according to the scientific method, does it count as "science"? Or is it just setting fire to things?
Not sure where you're drawing the boundaries. Science is done with tools and methods, and mathematics is one of the most important tools of science, used to define some of the most important methodologies.