

Computers: It's time to start over. - buffer
http://spectrum.ieee.org/podcast/computing/software/computers-its-time-to-start-over

======
gizmo686
Aren't smartphones already an example of starting over? Both iOS and Android
run each program in its own sandbox, with access restricted to only the
necessary system resources.

~~~
RivieraKid
Yes, although only on a specific level of abstraction.

------
codinghorror
Wait, isn't "don't let code and data sit in the same memory" the whole point
of the no-execute bit, which AFAIK is hardware-enforced on any remotely modern
AMD or Intel CPU? Granted, it takes OS support too...

<http://en.wikipedia.org/wiki/NX_bit>

If that isn't working well enough, why not? Too much legacy code?

~~~
rogerbinns
Technically NX is the same memory, just different permissions applied to
portions. It is possible to have an architecture where code and data are
completely separate - <http://en.wikipedia.org/wiki/Harvard_architecture>
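
To make that concrete, here's a minimal sketch of my own (assuming x86-64
Linux and the POSIX mmap/mprotect calls; none of this is from the article):
the bytes in the page never move, the kernel just changes the permission bits,
NX included, that label them.

    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* One ordinary anonymous page, mapped as read/write data.
           No PROT_EXEC requested, so the kernel sets NX on it. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        strcpy(p, "just bytes");

        /* Relabel the very same page: executable, no longer writable.
           The contents are untouched; only the permission bits change. */
        if (mprotect(p, 4096, PROT_READ | PROT_EXEC) != 0)
            return 1;

        /* p[0] = 'X';  would now fault: the page is "code", not "data" */
        munmap(p, 4096);
        return 0;
    }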

~~~
djcapelis
Code - noun - Data you've labelled as code.

Data - noun - Data you haven't labelled as code—yet.

A Python program is data that's just a text file. So. You know. There's that.
Other things that are nominally "data" but really aren't: PDFs, JavaScript and
CSS.

The idea that you're going to be able to wave a magic Harvard-architecture
wand and fix bad inputs that make programs do things they were never intended
to do is a misunderstanding of the problem.
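
To illustrate the point with a hedged sketch of my own (assuming x86-64 Linux,
not anything the commenter wrote): six bytes start out as plain data in a
writable page and become callable code the moment the page is relabelled
executable. Nothing about the bytes themselves changes.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* "Data": the raw x86-64 encoding of `mov eax, 42; ret`. */
        unsigned char bytes[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        /* Put it in an ordinary writable page, like any other data. */
        unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;
        memcpy(page, bytes, sizeof bytes);

        /* Now label it as code and call it: same bytes, new label. */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
            return 1;
        int (*fn)(void) = (int (*)(void))page;
        printf("%d\n", fn());   /* prints 42 */

        munmap(page, 4096);
        return 0;
    }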

~~~
adlpz
Not really. The point is that what is code and what isn't should be decided at
load time, and never changed during execution.

Any resources loaded later must be data-only, and code memory must be read-
only during the entire execution.

~~~
Thrall
You sacrifice an enormous amount of flexibility and extensibility by enforcing
such distinctions. Much of the power and elegance of languages like lisp comes
from blurring the boundary between code and data. To the C etc. mentality,
it's unthinkable, but in lisp you can maintain (modify/extend/fix) a running
application, without having to unload and reload everything.

------
lowglow
I've been thinking more and more about the problems holding back progress in
the computing experience. I think it's the dogma baked into the operating
system itself. I'd love to share my ideas and help build a new OS with people.

~~~
jspthrowaway2
I don't think the next big thing will be a new operating system -- that's
thinking too shallow. We've tried dozens over the years, and right now we're
dividing between consumer experience and server work (and poorly at that; no
Linux distribution these days bothers to pretend they're separate any more).
Plan 9 also swam as far as it could off to the deep end and didn't catch on,
for a few reasons; it's a genuinely good model but as we know from this
startup game, the good ideas don't always take hold.

I suspect even if you designed something better than Plan 9, which would be a
feat, the smart minds and money are already thinking past Intelville. Getting
past The Architecture (what do we call it? IBM?) that's been a staple of
computing for decades is the next big thing. That's what the author is hinting
at, I think, and I'll be interested to read his paper.

(ARM isn't what we're looking for, it's just a better Intel. Same
architecture.)

One of my deep-seated beliefs is that backward compatibility can hurt more
than benefit, and this is sort of a corollary.

~~~
dredmorbius
Oddly: Linux started as a desktop personal computer operating system (with one
user: Linus), and that's always been _his_ focus (though others have of course
had other interests).

The funny thing is that there's not a whole lot of difference between the
needs of servers and personal systems. Both value uptime and latency; both
benefit from hotplug flexibility (personal systems because we're always
plugging things into them, servers because you can't take the system down when
adding/modifying parameters); and for both, security matters, along with
sandboxing, device support, and what all else. The biggest likely difference
is whether, and how advanced, a direct graphical output device is attached;
beyond that, they're similar.

As for scrapping everything and starting over: it's almost always a mistake.
Refactoring and incremental improvements discard much less knowledge and
provide a continuous migration path (Plan 9's biggest failing, absent
licensing, since fixed but far too late). Virtualization may well offer a
buffer against this -- we can run old environments in their own imaginary
boxen.

------
nate_martin
>"So, you know, a common past exploit mechanism, something called a buffer of
the flow attack."

Buffer of the flow attack eh?

~~~
nthitz
It's weird cause they use the term "buffer overflow" elsewhere in the article
many times.

~~~
evoxed
Maybe they were dictating, or inputting handwriting. They're close in sound
and spelling.

Edit: Should've opened the darn thing first. It's a transcript, so probably
just the speech conversion.

------
sichuan2000
The GUI of computers could be largely rethought, especially after the
introduction of mobile devices, which raised consumer expectations of user
interfaces. I look forward to more subtle touch gestures.

~~~
Aardwolf
Disagree. They are rethinking DESKTOP UIs these days, somehow using "mobile"
as the argument, and everything after KDE 3.5 and Gnome 2 has only become
worse instead of better. Please give me back the proper desktop UIs.

~~~
B-Con
The two interfaces really have different styles. The dual 21-inch setup a
couple of feet from me does not need to behave at all like the 5-inch display
in the palm of my hand. The very suggestion that they would baffles me.

------
sherjilozair
If we're ever going to redesign computer architecture, I think we'll give
designing for artificial intelligence more priority than designing for
security.

~~~
TheBoff
I'm fairly sure that each processor designed solely for AI has been a failure.
For example: <http://en.wikipedia.org/wiki/Fifth_generation_computer>

By the time you've finished designing your revolutionary new chip, Moore's law
has caught up, and you might as well have just used standard hardware!

------
jcoder
Funny -- their ideas for making computers more secure don't touch on passwords
(<http://ieeelog.com/>)

~~~
petermlm
That reminds me of IBM saying, I think, that in five years passwords would no
longer be used. I'm going to try and find a citation for that.

Edit: <http://latimesblogs.latimes.com/technology/2011/12/ibm-predicts-a-future-with-no-passwords-mind-reading-smartphones.html>

------
mikecane
So he wants security in hardware? A Security Processing Unit?

------
martinced
From TFA: “The role of operating system security has shifted from protecting
multiple users from each other toward protecting a single…user from
untrustworthy applications.…"

Interestingly, most OSes are still _very_ good at protecting users from each
other. And on Linux (but not on OS X or Windows), thanks to how X works, it is
_trivial_ to allow an app running as one user to access the display (and only
the display) of another user.

So my way of protecting myself, the user, from untrustworthy applications
(mainly the web browser and its daily major Java / Flash / CSS / JavaScript /
etc. security issues) is to run applications in separate user accounts.

One browser in one user account for my personal email + personal online
banking (although that one would be more secure if done from a Live CD), one
browser for general Web surfing, one browser for my professional emails, etc.
Most user accounts (besides my developer account, which by default has no
Internet access [though I can whitelist sites per user with iptables userid
rules, of course]: no auto-updating of _any_ of the software I'm using) are
throwaway and can be reset to default using a script.
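
For anyone curious what the building blocks look like, a rough sketch (the
account names "webbrowse" and "dev" and the whitelisted address are made up;
real rules would obviously need tuning):

    # Let the throwaway "webbrowse" account draw on my X display
    # (and only the display), then launch its browser:
    xhost +SI:localuser:webbrowse
    sudo -u webbrowse -H firefox &

    # Default-deny outbound traffic for the "dev" account, with a
    # per-UID whitelist (192.0.2.10 is just a placeholder):
    iptables -A OUTPUT -m owner --uid-owner dev -d 192.0.2.10 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner dev -j REJECT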

As for making and receiving phone calls: a good old Nokia phone on which you
cannot even install J2ME apps is perfect ; )

