
The Death of the Von Neumann Architecture - kayamon
http://www.codersnotes.com/notes/the-death-of-the-von-neumann-architecture
======
ansible
The "Von Neumann" and "Harvard" architectures mean something very specific in
CS. The author is applying them to a higher level where they don't belong,
which I find a little silly.

The reasons Apple, Microsoft, and other vendors restrict execution of native
code vary, but they are mostly related to security and maintaining control of
the ecosystem.

It is fine to complain about that desire for control. But it is not necessary
to view this as a computer architecture issue at another level.

~~~
kayamon
I think you have very much misread it.

I'm not complaining about the desire for control at all. I'm complaining about
the inability of a program to treat its code and data as the same medium.

That is very much the key tenet of a Von Neumann architecture.

~~~
dheera
Although this is freedom-limiting, I strongly doubt that this thinking on
iWatches will "infect" desktops in the future. The iWatch is designed to be an
appliance, much like a microwave, rather than a computing platform for
on-board idea creation.

Also, digital watches of all sorts have pretty much always been Harvard
architecture for the past N years. From a use case perspective, the iWatch
should be treated as a glorified digital watch, not a minified iPhone. Seen
from this angle, I disagree that the iWatch is a "step backward". But I do
agree with you that in general, Apple is annoying for deliberately limiting
freedom.

~~~
baghira
Not to complain about terminology, but _appliances_ used to be precisely the
sort of stuff you could repair. I've fixed my microwave; I did not carry it to
a genius bar. This is why I fear this pernicious business model will spread:
it promises lock-in for everybody, from John Deere to coffee machine makers.

As far as the iWatch being just a watch, the same could have been said of the
iPhone. It was a phone, only cooler. Almost a decade later, iOS devices are
the platforms where many users do a significant fraction of their computing.
And while iOS devices may not require bitcode submission, they limit the
user's freedom in a way that is fundamental. Imagine a world where every x86
computer sold post 1995 (outside of the server space) could only run Windows.

Besides, even if you own a "creation machine", you'll often be catering to the
market of restricted devices (case in point: Gecko on iOS).

EDIT: I'll grant you may be right in the specific sector of non-computing
electronics (watches, calculators, etc).

------
ChuckMcM
I think the author is confused. Protecting a code segment to be 'read only'
does not change a machine from "Von Neumann" to "Modified Harvard"
architecture. That distinction is reserved for computers that separate the
'store' (aka volatile memory) into two disjoint regions. And pure Harvard
architecture machines (like the early PIC series from Microchip) suffer from
an inability to vary the partition between data and code, which is the real
problem[1].

Now there is a pretty sound reason for making your code segment read only, and
the author touches on it: if you are running third-party code, it limits the
attack surface for security exploits somewhat.

But then he goes off the rails with this claim: _"What really struck me was
that there was no flexibility here; no way for a program to make changes to
itself or build new subprograms, even if Microsoft approved it."_

The next topic on the required reading list should be "Turing complete" :-) It
is entirely possible to write a piece of code which is both a core interpreter
and the byte codes on which that interpreter operates, and to include in those
byte codes a means to read, modify, and re-execute any previous byte codes.
The sum of actions will of course be constrained by what can be expressed in a
byte code, but given that, you are free to do whatever you want.

And of course any interpreter which was interspersing its data and executable
byte codes in the same chunk of memory, well that would be a Von Neumann
architecture :-)

[1] Harvard machines ultimately fell out of favor because you could end up
with left over code store and not enough data store for an application.
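
To make that concrete, here is a minimal sketch of such an interpreter (not
ChuckMcM's code; the opcodes and memory layout are invented for illustration).
Its byte codes live in the same array they can read and write, so the hosted
program can rewrite itself even while the native code segment stays read-only:

    #include <stdio.h>

    enum { OP_HALT, OP_PUSH, OP_PRINT, OP_STORE };

    int main(void) {
        /* Code and data share one array: a Von Neumann layout inside the
           interpreter, regardless of how the host OS protects native pages. */
        int mem[] = {
            OP_PUSH, 42,   /* push 42                                      */
            OP_STORE, 5,   /* pop it and write it into mem[5]...           */
            OP_PUSH, 0,    /* ...which is the operand of this very PUSH    */
            OP_PRINT,      /* prints 42, not 0: the program patched itself */
            OP_HALT
        };
        int stack[16], sp = 0, pc = 0;

        for (;;) {
            switch (mem[pc++]) {
            case OP_HALT:  return 0;
            case OP_PUSH:  stack[sp++] = mem[pc++];      break;
            case OP_PRINT: printf("%d\n", stack[--sp]);  break;
            case OP_STORE: mem[mem[pc++]] = stack[--sp]; break;
            }
        }
    }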

~~~
kayamon
> That distinction is reserved for computers that separate the 'store' (aka
> volatile memory) into two disjoint regions

On a machine where programs are supplied as bytecode, there are indeed two
disjoint regions.

The bytecode sits in its own region. You can't touch it from the data region.

~~~
microtherion
I believe you are misunderstanding the use of bytecode in the Apple Watch:
What is _deployed_ to the device is still native code, the bytecode is merely
what's _submitted_ to the app store.

~~~
kayamon
But if the program has no knowledge of, visibility into, or control over the
native code, that code effectively does not exist.

------
zerohp
Von Neumann Architecture has been dead and buried for 20 years. We only
imagine our x86 is Von Neumann because there is coherence between the
instruction and data caches.

Every modern machine is a Modified Harvard Architecture.

[http://en.wikipedia.org/wiki/Modified_Harvard_architecture#M...](http://en.wikipedia.org/wiki/Modified_Harvard_architecture#Modified_Harvard_architecture)
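
A small illustration of that split-cache point: on x86 the hardware keeps the
instruction and data caches coherent, but on many ARM and MIPS parts a JIT has
to synchronize the caches for a freshly written code range before jumping into
it. GCC and Clang expose this as the __builtin___clear_cache builtin (a no-op
on targets where it isn't needed); a minimal sketch:

    /* After emitting machine code into [begin, end), make it visible to the
       instruction fetch path before executing it. */
    void finish_emitting(char *begin, char *end) {
        __builtin___clear_cache(begin, end);  /* no-op on x86, cache ops on ARM */
    }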

~~~
ctz
> Every modern machine is a Modified Harvard Architecture.

Well, this is just not true if you consider all modern computing systems
rather than just desktop computing.

Every modern microcontroller is either Harvard or pure Von Neumann. And there
are billions upon billions of these shipped every year: ARM microcontrollers
ship at a rate of 35 Hz! Generally the lower-power parts tend to be Von
Neumann.

~~~
arcticbull
The AVR family of microcontrollers are Harvard machines.
[http://www.atmel.com/technologies/cpu_core/avr.aspx](http://www.atmel.com/technologies/cpu_core/avr.aspx)

~~~
fancyketchup
As are many of the PIC series of microcontrollers.

------
quotemstr
If you like this article, you'll also like:

"The Coming Civil War over General Purpose Computing":
[http://boingboing.net/2012/08/23/civilwar.html](http://boingboing.net/2012/08/23/civilwar.html)

"The Right to Read": [https://www.gnu.org/philosophy/right-to-
read.en.html](https://www.gnu.org/philosophy/right-to-read.en.html)

------
vezzy-fnord
Not to detract from the author, however:

 _Or perhaps you’re running on an OS that doesn’t have any support for hot-
patching a running executable with new code. And you think “I know how to
write a programming language runtime that can do that.” Perhaps you’d seen how
Erlang can do it, and wanted to try it yourself._

Dynamic code upgrade is really rare in general. The most that's usually done
in mainstream practice is to pass socket fds to a newly exec()ed child
instance, or some other form of superserver/pre-opening, a la inetd, UCSPI or
whatnot.
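
For what it's worth, the inetd-style pattern being described looks roughly like
this (a sketch with error handling trimmed; the service path is a placeholder):
the superserver owns the listening socket, and each accepted connection is
handed to a freshly exec()ed child as fds 0 and 1.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(7000);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0) continue;
            if (fork() == 0) {
                /* Child: the connection becomes stdin/stdout, then exec a
                   separate program -- no dynamic code, just fd inheritance. */
                dup2(cfd, 0);
                dup2(cfd, 1);
                close(lfd);
                close(cfd);
                execl("/usr/local/bin/service", "service", (char *)0);
                _exit(127);
            }
            close(cfd);
            while (waitpid(-1, NULL, WNOHANG) > 0) {}  /* reap children */
        }
    }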

 _What if you were on a system that didn’t support DLLs or shared libraries,
and you thought it’d be kinda useful to invent something like that?_

Depends on what your purpose for shared libraries is. If you just want a
plugin system, then the traditional way w/o shared libs is to use some form of
RPC to coordinate forked OS processes.

Finally, the majority of our programming languages are firmly in a von Neumann
model, so it isn't quite dead yet. Not sure if read-only code segments are
enough to deliver its eulogy, either.

------
walterbell
_> Apple’s own Safari compile scripts to assembly because they’ve helpfully
added a secret backdoor just for Safari, but not for any other programs .. On
one benchmark, LuaJIT’s ARM JIT is 48X faster than its own interpreter.
Android can run the JIT version, but iOS can only run the interpreter
version._

How many years will it take for an antitrust regulator to level this playing
field?

~~~
innguest
Someone just needs to make a better computer that people can switch to from
Apple. No need to bring violence into the equation.

We know the formula now, which Microsoft ignores since their market is
companies. The formula for selling computers to people seems to be "be in
control of both hardware and software to provide the best native experience".
It's not a secret; anyone who wants to can start a company and compete with
Apple. All regulators can do is make Apple's job harder, which will result in
bad products and cost taxpayers money: a lose-lose situation.

~~~
walterbell
The formula also includes factory financing which slows competitor access to
manufacturing capacity. Some products benefit from network effects
(manufacturing, developer adoption) and become difficult to displace by any
competitor, however well funded. Careful intervention can help to restore
balance.

Unlike the oil companies of yesteryear, or even a browser that is supposedly
"baked into" an operating system, the remedy here is not logistically complex:
the backdoor is already known to exist -- it can be extended to some
competitive products.

~~~
innguest
> The formula also includes factory financing which slows competitor access to
> manufacturing capacity.

I'm talking about the formula that seduced the customer. "Factory financing"
did not seduce the customer; one company in charge of the whole computer (as
opposed to one company making the hardware and another making the software) is
what seduced the consumer. The fact that the company makes computers for end
users, not company employees, is what seduced the consumer.

> Careful intervention can help to restore balance.

Like non-violent spanking, careful intervention is an oxymoron, unless you can
show an example where no one was harmed by government anti-trust laws being
applied.

------
eyesee
It seems disingenuous to write an article extolling the benefits of JIT and
self-modifying code without mentioning the security concerns of allowing
anyone to change executable code.

~~~
quotemstr
You don't need to be able to change code to execute arbitrary code --- see
return-oriented programming. Besides, NX gives you almost all the same
benefits.

Restrictions on code execution strike me more as business controls (and yes,
assaults on freedom) than real security measures.

~~~
latiera
Look at my other response in this thread: ROP isn't trivial these days due to
ASLR implementations. Almost always one needs information-leak bugs.

The presence of a JIT makes things trivially abusable for the attacker and is
a big security risk.

------
oldmanjay
The technical points mixed with the politics are a little boring. I'm really
over people trying to make a moral issue out of iOS. You totally have the
freedom to ignore platforms that don't work the way you want.

I also agree you have the freedom to complain about things over and over
endlessly but _yawn_

~~~
sp332
iOS has 42% marketshare. You can't ignore it. When a company is moving in the
direction of removing freedoms from 42% of the market, that's a political
problem that should be addressed and not ignored.

~~~
zeeed
Sure he can. And besides, Apple won't change course because of us devs
worrying; they never have in the past. Neither did Microsoft, at least not of
their own free will.

The only power we have is buying power. Ranting and politically complaining as
the article does won't change anything.

~~~
4ad
> Ranting and politically complaining as the article does won't change
> anything.

Yes it does: it educates readers, and it can convince others not to buy iOS
devices.

Other articles have certainly convinced me of that, for which I am very
thankful to the authors.

These articles are not for Apple, they are for the people. People like us in
Internet forums, who then discuss them, learn something (or not), and then
make a decision (or not).

------
ahomescu1
There's also a counter-point: Harvard architectures are more secure. If the
application has the ability to execute data as native code, attackers will
find a way to exploit that.

~~~
vardump
Until someone uses ROP gadgets to turn stack return addresses into execution
of whatever the attacker wants. It doesn't matter if there are no pages that
are both writable and executable. Besides, any secure JIT will only keep a
page either writable _or_ executable, but not both at the same time.

[http://en.wikipedia.org/wiki/Return-oriented_programming](http://en.wikipedia.org/wiki/Return-oriented_programming)
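
For reference, the W^X discipline mentioned above looks roughly like this on a
POSIX system (a sketch; the x86-64 machine code is hard-coded for brevity, and
on a platform that refuses PROT_EXEC mappings outright, which is the article's
complaint, the mprotect call simply fails):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* mov eax, 42 ; ret  (x86-64) */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        size_t len = 4096;

        /* Write phase: the page is readable and writable, never executable. */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        memcpy(p, code, sizeof code);

        /* Execute phase: flip to read+execute; it is no longer writable. */
        if (mprotect(p, len, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))p;
        printf("%d\n", fn());  /* prints 42 */
        return 0;
    }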

~~~
quotemstr
latiera, you seem to be banned for some reason. Both of your comments are
dead. I'll quote your post in full.

> It's not that simple. ROP relies on known or predictable addresses and
> pretty much all modern OSes have some form of address space layout
> randomization (which keeps getting better and more sophisticated).

> With good ASLR, ROP is not possible without relying on information leak bugs
> which are finite. So the cost for the attacker increases and it gets harder
> and more time intensive for reliable exploits to be written.

> Allowing JIT for everything is a TREMENDOUS security violation, since it's
> trivially abusable and page permissions are irrelevant. There are just too
> many ways for clever attackers to abuse it.

Of course ASLR makes ROP harder. It makes all exploits harder. I still don't
agree with your last paragraph. As someone else mentioned, good JIT
implementations never leave code both writable and executable. It's simply not
the case that "page permissions are irrelevant". Page permissions are central
because they defeat the specific attacks you have in mind. A JIT is no more
vulnerable to them than the dynamic loader is.

I still don't buy that banning PROT_EXEC buys you any protection from
attackers exploiting applications.

On the other hand, banning PROT_EXEC provides plenty of protection against
uppity application developers trying to "abuse" the platform you "own" by
attempting to program in unapproved ways.

~~~
vardump
Exactly.

To add, nothing prevents a JIT from allocating its pages at random addresses
to get protection similar to ASLR.
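
A tiny sketch of that idea (illustrative only: rand() is a weak random source,
and mmap treats the address purely as a hint, so the kernel may place the
mapping elsewhere):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    /* Ask for JIT pages at a randomized hint address, on top of whatever
       randomization the OS already applies. */
    void *jit_alloc(size_t len) {
        uintptr_t hint = (uintptr_t)(rand() & 0x7fff) << 24;  /* low 512 GB */
        return mmap((void *)hint, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }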

It can actually be harder to attack dynamically generated code, because the
instruction offsets and the instructions used will vary from one invocation of
the application to another. You could even introduce intentional variations to
ensure no relative instruction offsets are deterministic. So dynamically
generated code can have not only a random base address (ASLR) but also random
relative offsets. Statically compiled code, by contrast, must keep those
offsets fixed and known to the attacker.

ASLR just means you don't necessarily know where things are. But once the
attacker knows where, or has some clever trick so it doesn't matter, it's game
over. Unfortunately, the creativity of attackers in getting around ASLR seems
to be endless.

------
amelius
> Languages like Java, JavaScript, or the .NET framework could never have
> prospered on platforms that didn’t allow them to create new native code as
> they ran.

We can take this a step further and say that if IBM locked down its PC back in
the 80s, we wouldn't have Linux!

------
cremno
>Apple and Microsoft are showing every indication that within the near future
this direction may well apply to desktops too. The restrictive ‘store’ APIs
pushed by these platform holders seem to relish in preventing the execution of
unapproved code.

Not really. Windows 10 apparently allows universal Windows apps to call
VirtualAlloc() and VirtualProtect().

[http://blogs.msdn.com/b/chuckw/archive/2012/09/17/dual-use-c...](http://blogs.msdn.com/b/chuckw/archive/2012/09/17/dual-use-coding-techniques-for-games-part-2.aspx)

~~~
vardump
> Windows 10 apparently allows universal Windows apps to call VirtualAlloc()
> and VirtualProtect().

Yes, but they're useless for this purpose. VirtualAlloc is mapped to
VirtualAllocFromApp, which means you can't allocate executable pages.

You can't make a page executable, nor can you make executable pages writable.

So you can't run dynamically generated code at all in a Windows 10 universal
app.
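
For context, the standard desktop Win32 pattern for running generated code
(the thing being described as unavailable to universal apps) is roughly the
following sketch; per the comment above, in an app container the
PAGE_EXECUTE_* request is what gets refused:

    #include <windows.h>
    #include <string.h>

    int main(void) {
        /* mov eax, 42 ; ret  (x86/x64) */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        DWORD old;

        /* Commit a read/write page and emit the code into it. */
        void *p = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                               PAGE_READWRITE);
        if (p == NULL) return 1;
        memcpy(p, code, sizeof code);

        /* Flip it to read+execute -- reportedly the step an app container
           denies. */
        if (!VirtualProtect(p, 4096, PAGE_EXECUTE_READ, &old)) return 1;

        int (*fn)(void) = (int (*)(void))p;
        return fn();  /* exit code 42 on desktop Windows */
    }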

~~~
kayamon
It's hard to tell if that's the case or not - the documentation seems to
contradict itself on that point.

------
quotemstr
Come to Android, where mprotect still works.

------
dsjoerg
> "A step backwards to the Harvard architecture benefits no-one."

Actually it benefits Apple, which is why they did it. Or is the author arguing
that it doesn't benefit Apple? Either way, I didn't see anything substantiating
that in the article.

------
chipsy
Sounds like "Java Phones." It was being done way before Apple got involved.

------
jonaf
I don't think using proprietary platforms such as Xbox 360 or iOS as examples
constitutes a trend in the field of Computer Science. It's more likely that
Apple will reject your iOS app for some reason than it is that you won't be
able to "innovate." If you want to innovate on iOS and create something truly
spectacular, you should be seeking employment or simply be satisfied hacking
the provided device yourself.

I would consider this article's content far more substantial if there were an
example of this architecture being used in, say, Linux 4.

~~~
eveningcoffee
Do you really believe what you wrote here??? Have people completely lost
their dignity?

It used to be that the owner of the operating system did not have any say
about what other developers could write for that platform.

~~~
walterbell
_> It used to be that the owner of the operating system did not have any say
about what other developers could write for that platform_

It used to be that "online lobbyist" was not a job description,
[http://www.washingtonpost.com/blogs/monkey-
cage/wp/2014/06/0...](http://www.washingtonpost.com/blogs/monkey-
cage/wp/2014/06/02/if-you-can-fake-spontaneity-you-have-it-made-five-key-
questions-about-the-grassroots-industry/)

