An operating system can only do so much.
I suspect even if you designed something better than Plan 9, which would be a feat, the smart minds and money are already thinking past Intelville. Getting past The Architecture (what do we call it? IBM?) that's been a staple of computing for decades is the next big thing. That's what the author is hinting at, I think, and I'll be interested to read his paper.
(ARM isn't what we're looking for, it's just a better Intel. Same architecture.)
One of my deep-seated beliefs is that backward compatibility can hurt more than benefit, and this is sort of a corollary.
The funny thing is that there's not a whole lot of difference between the needs of servers and personal systems. Both value uptime and latency; both benefit from hotplug flexibility (personal systems because we're always plugging things into them, servers because you can't take the system down when adding/modifying parameters); both care about security, sandboxing, device support, and what all else. The biggest likely difference is whether, and how advanced, a direct graphical output device is attached; beyond that, they're similar.
As for scrapping everything and starting over: it's almost always a mistake. Refactoring and incremental improvements discard much less knowledge and provide a continuous migration path (Plan 9's biggest failing, absent licensing, since fixed but far too late). Virtualization may well offer a buffer against this -- we can run old environments in their own imaginary boxen.
What exactly do you have in mind? As long as the new thing is Turing-complete, the path of least resistance says that we'll just drag all the old baggage over with us.
The stack we have is incredibly malleable; there is no such thing as a sufficiently clean break. The ability to fix it now is as present as it ever will be. Pretending otherwise is navel-gazing that relies on the assumption that the process of architecting new paradigms for a fresh pasture, and building out a new stack on top of that, could somehow beat to market the much simpler process of porting what we've already got and bolting on access to the new niceties.
Buffer of the flow attack eh?
Edit: Should've opened the darn thing first. It's a transcript, so probably just the speech conversion.
By the time you've finished designing your revolutionary new chip, Moore's law has caught up, and you might as well have just used standard hardware!
If that isn't working well enough, why not? Too much legacy code?
I don't really want another single-purpose appliance, so it's kind of a bummer to see the TSA "security" arguments (the article, not you) working to break general-purpose computing... (I think the best we can do for now is a trust system and, for me at least, I still have to see the code to have faith in that.)
As for the "principle of least privilege", well, that was the point of microkernels. We're all using Linux and Windows and Mac OS X (none of which are microkernels), so even if microkernels are "better" they may not be very "practical" for the moment (the old Tanenbaum–Torvalds debate).
I don't believe that your argument here is correct. Code and data sharing the same memory space is not a necessary condition of being Turing complete. Just because a Turing machine works that way doesn't require any Turing complete computer to work that way.
On the other hand, the distinction between 'code' and 'data' when running a simulator is a little bit arbitrary.
> As for the "principle of least privilege", well, that was the point of microkernels.
Not nearly the whole point of microkernels, no.
See: Church–Turing thesis. If you can simulate a Turing machine (which can clearly be said to hold code and data in one memory space) then the hosting machine can also be said to share code and data, in the precise sense given by the mapping of the simulated Turing machine onto the host machine (its implementation). A computation in the simulated Turing machine is just a computation on the host, with some overhead. Conversely, a simulated CPU can't compute faster than its host's CPU; if a Turing machine was simulated, the host was Turing complete.
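The parent's point is easy to make concrete. Here's a toy sketch (the machine and its encoding are invented for illustration, not from the thread) of a Turing-machine interpreter in Python where the "code" (an immutable transition table) and the "data" (the tape) live in strictly separate structures; the interpreter never executes the tape, yet nothing about the separation prevents universality in principle.

```python
# The "code": a transition table keyed by (state, symbol) -> (write, move, next_state).
# The interpreter reads it but never mutates it, so code and data occupy
# entirely disjoint structures -- a Harvard-style split, still Turing-powerful.
FLIP = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def run(table, tape, state="scan", blank="_", max_steps=10_000):
    """Run a Turing machine whose program (table) is separate from its tape."""
    cells = dict(enumerate(tape))  # the "data": a mutable tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        write, move, state = table[(state, sym)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

print(run(FLIP, "1011"))  # -> 0100
```

The simulated machine here just inverts a bit string, but swapping in a different table gives you any TM you care to encode, which is the whole of the parent's argument.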
"Principle of least privilege" implies a sandbox and IPC for every process, all so things don't run within the kernel, or any other process when possible. That sounds like a microkernel to me. You might say another "point" of microkernels (I don't have an exhaustive list) was to make design and debugging easy too, but those are just the same ideals inflicted on the programmer: keep it simple and homogeneous; the less you do, the less you can do wrong.
(I didn't put the reference to least privilege in there...)
As you said, where to draw the line of separation is arbitrary. One could certainly make code and data (and processes generally) "more separate," but it's all at the expense of programmability (the general in general purpose) and speed (IMHO, why microkernels aren't everywhere), which brings us to the current compromises: monolithic kernels, OSs that warn you before installing, NX flags, patch Tuesday, and antivirus programs.
We build up an environment of temporarily-appropriate limitations from a blank (general) slate. If I ever feel too restricted, I put on my black fedora and perform a "jailbreak" (I reboot). I'm still better off with a malleable computer than with many individual limited tools. (Not that you suggested otherwise, just my 2 cents.)
Data - noun - Data you haven't labelled as code—yet.
The idea that you're going to be able to wave a magic Harvard architecture wand and fix bad inputs causing things to do different things than intended is a misunderstanding of the problem.
Any resources loaded later must be data-only, and code memory must be read-only during the entire execution.
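One way to see what "read-only during the entire execution" means in practice: operating systems already let a process map pages it cannot modify, and writes to them are refused. A minimal sketch (Unix-specific; a real W^X setup would use mprotect with PROT_READ|PROT_EXEC on actual code pages, which this deliberately does not attempt):

```python
import mmap
import tempfile

# Map a page of "code" read-only, then try to patch it. CPython raises
# TypeError for any write to a mapping opened with ACCESS_READ.
with tempfile.TemporaryFile() as f:
    f.write(b"\x90" * mmap.PAGESIZE)  # pretend these bytes are code
    f.flush()
    ro = mmap.mmap(f.fileno(), mmap.PAGESIZE, access=mmap.ACCESS_READ)
    try:
        ro[0] = 0xCC  # attempt to inject a breakpoint into the "code"
        tampered = True
    except TypeError:  # refused: the mapping is read-only
        tampered = False
    ro.close()

print("write blocked:", not tampered)
```

This only demonstrates the enforcement mechanism; the parent's stronger requirement is a policy on top of it: pages are writable or executable, never both, for the whole run.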
The Harvard architecture could be used as the basis for a more rigorous trust model (iff the owner of the system controlled the root of trust.) Democracy is out of fashion though, so we would undoubtedly get something similar to what we have today with Redhat (for all practical purposes) "having to" pay for the right to boot Linux on a system "certified for Windows 8"...
Overflows are still exploitable with NX. The attacker instead jumps to a series of fragments of library code. Since libraries will always be executable, there's no problem (aside from the difficulty of finding the right chain of "gadgets").
ASLR goes some way toward preventing return-oriented programming (ROP) attacks, but it isn't bulletproof.
See: http://en.wikipedia.org/wiki/Return-oriented_programming
What I think Watson is saying is that the features of a processor designed to protect itself from memory are really just stopgaps on the way to the next paradigm that supplants von Neumann.
I don't think that can be argued.
Do you have any examples of security features which are theoretically uncircumventable?
QKD implementations on the other hand ...
They might be, or they might not be. But no "theory" proves that this is so.
So I don't see how someone can argue that "in theory everything is circumventable".
1) We think (but have not proved) that factoring large numbers is hard. We use this for cryptography. In theory the crypto could be brute-forced, or someone might find some new method for factoring. In practice brute-forcing would take longer than the Universe will exist, and there is unlikely to be a breakthrough in factoring large numbers.
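To give a feel for why that asymmetry exists, here's a toy sketch (the numbers are tiny stand-ins, nothing like real RSA moduli): naive trial division must probe up to sqrt(n), so the work is about 2^(bits/2), and doubling the bit length squares the work.

```python
import math

def trial_factor(n):
    """Return the smallest prime factor of n by brute-force trial division."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

# A small semiprime: trivially multiplied, already ~a million divisions to crack.
n = 999983 * 1000003
p = trial_factor(n)
print(p, n // p)  # recovers both factors
```

Multiplying the two primes took nanoseconds; undoing it took about a million division attempts, and that gap widens exponentially with size, which is the working assumption behind the crypto.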
2) We think that a single overwrite of a hard disc platter is enough to destroy the information. No software exists that claims to be able to recover information that has been overwritten once. No companies exist that claim to be able to recover data that has been overwritten once. No university research exists showing recovery of data that has had a single overwrite. No criminals have been prosecuted or convicted with evidence recovered from a disc that's had a single overwrite. Everything we know suggests that a single overwrite is fine. But, because a well-funded government might be able to recover that data, we suggest that people do 3 (or 8, or 30-something if you're being silly) overwrites, or, if the data is really important, that people destroy the platters. In theory the data might be recovered, and so people have decided that in practice they will destroy the drive or overwrite more than once.
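For reference, a single-pass overwrite is a few lines (this is a sketch, not a sanitization tool; real utilities like shred do multiple passes for exactly the belt-and-braces reasons above):

```python
import os

def overwrite_once(path):
    """Overwrite a file's contents in place with zeros, single pass.

    Caveat: this only clobbers the blocks currently backing the file.
    Journaling filesystems, copy-on-write, and SSD wear-leveling can all
    leave stale copies elsewhere, which is one more reason cautious
    guidance ends with "destroy the platters".
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())  # push the zeros past the OS cache

# usage sketch:
# overwrite_once("secrets.txt")
```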
When talking about security it's a good idea to assume that someone can break whatever you're doing, and then ask if you need to do more, or need to do things differently.
Besides, it relies on securely distributing the pad itself before information exchange can take place, which in turn is prone to the usual array of physical insecurity, design errors (e.g. using publicly available randomness), or, if distributed by a digital channel, to failures of the encryption used.
Interestingly, most OSes are still very good at protecting users from each other. And on Linux (but not on OS X nor on Windows), thanks to how X works, it is trivial to allow an app running as one user to access the display (and only the display) of another user.
One browser in one user account for my personal email + personal online banking (although that one would be more secure if done from a Live CD), one browser for surfing all the Web, one browser for my professional emails, etc. Most user accounts (besides my developer account which, by default, has no Internet access [but I can whitelist sites per-user using iptables userid rules of course]: no auto-updating of any of the software I'm using) are throwaway and can be reset to default using a script.
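The per-user whitelisting mentioned above can be done with iptables' owner match. A sketch (runs as root; the account name is invented and the address is from the documentation range, substitute your own; newer systems may prefer nftables):

```shell
# Default-deny outbound traffic for the 'dev' account, whitelisting one host.
# The owner match applies only to locally generated packets, i.e. OUTPUT.
iptables -A OUTPUT -m owner --uid-owner dev -d 192.0.2.10 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner dev -j REJECT
```

Everything not explicitly accepted for that uid is rejected, which is what makes "no auto-updating" enforceable per account rather than machine-wide.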
As to making and receiving phone calls: a good old Nokia phone onto which you cannot even install J2ME apps is perfect ; )