> The great thing about Linux as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it.
The great thing about the x86 instruction set as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it. Only a handful of programmers develop at a layer below the x86 instruction set, or even at that level for that matter.
My point is that the instruction set of these large CPUs (not AVRs etc) is itself just an abstraction over an inscrutable micromachine. That abstraction lets you develop a complex program without knowing the details.
Unix was specifically designed for small, resource-starved machines and did not have the powerful abstractions of mainframe OSes like Multics or OS/360. It's OK, but as modern CPUs and I/O systems have grown and embraced the mainframe paradigms that had been omitted from the minicomputers (e.g. memory management, channel controllers, DMA, networking), Unix and Linux have bolted on support that doesn't always fit their own fundamental assumptions.
That's fine, it's how evolution works, but "cloud computing" is a different paradigm, and 99.99999% of developers should not have to be thinking at a Unix level any more than they think of the micromachine (itself a program running on a lower-level instruction set) that is interpreting the compiler's output at runtime.
As I said in the other comment, maybe people thought that "cloud computing" was a different paradigm back in the '70s, but it turns out that no, it's all the same distributed stuff.
If you have two processes on the same machine, locking a shared data structure takes microseconds. You can easily update hundreds of shared maps and still provide great performance to the user.
If you have datacenters in NY and Frankfurt, the ping is around 90 ms, and the fundamental speed-of-light limit says it will never be below about 40 ms.
So "lock a shared data structure" is completely out of the question, you need a different consistency model, remote-aware algorithms, local caches, and so on.
There are people who are continuously trying to replace Unix with completely new paradigms, like Unison [0], but it is not really catching on, and I don't think it ever will. Physics is tough.
It's quite an appropriate analogy. We used to write a lot of code in assembly. I used to write microcode and even modified a CPU. But it's been decades since I last wrote microcode (much less modified an already installed CPU!), and now the instruction sets of MPUs like x86 and ARM are mostly just abstractions over a micromachine that few people think about.
And an OS is the same: it used to be quite common to write for the bare iron, to which an OS is, by definition, an abstraction layer. I still do that, but it's an arcane skill frankly not in huge demand. Which is probably a good thing.
Nowadays most code is written at nosebleed levels of abstraction, which frankly is a good thing, even if I don't like doing it myself. But still, as developers do it, they are often dragged back down the stack to a level that these days few understand.
I think the person/company that cracks this will be the dominant infrastructure play of the decade.
It's not an appropriate analogy because most people don't program in x86, but most people know (or can easily look up, when the need arises) basic Linux administration commands.
> The great thing about the x86 instruction set as the base layer is that it allows a commodity common ground with a very capable system that also facilitates more specialized layers to be implemented on top of it. Only a handful of programmers develop at a layer below the x86 instruction set, or even at that level for that matter.
We have to get our heads out of that mire.