
The Page-Fault Weird Machine: Lessons in Instruction-less Computation - ColinWright
https://www.usenix.org/conference/woot13/page-fault-weird-machine-lessons-instruction-less-computation
======
jacquesm
Computing has a nice analogy in lasers. When lasers (and masers) were first
built it was extremely hard to make one. Lots of engineering and design went
into getting something to properly lase. Then, over time it became easier once
the principles were understood. Later still it was realized that given the
right conditions you could get a very large number of substances to lase, some
of them even accidentally.

In the long run it may well turn out that it is harder to build technology of
any sufficient complexity that can _not_ be used for general-purpose
computation than technology that can. Finding another computer inside an
existing computer, placed there accidentally, is perhaps less of a surprise if
you look at it that way.

Other candidates: sufficiently large chunks of a network, hydraulics systems,
power infrastructure, biological systems.

~~~
opinali
TIL the verb "to lase".

------
peterwaller
This is pretty mind blowing. Computation without ever incrementing the program
counter.

Link to the code by the same guys:
[https://github.com/jbangert/trapcc](https://github.com/jbangert/trapcc)

------
malkia
In a similar vein: "Is the Network Turing-Complete?", EPFL Technical Report
187131:
[http://infoscience.epfl.ch/record/187131/files/report_1.pdf](http://infoscience.epfl.ch/record/187131/files/report_1.pdf)

------
PhasmaFelis
Could someone explain this in a way that a middling-experienced high-level
coder can understand? It's clearly awesome, but I'd like to understand it a
bit better than I do.

~~~
saalweachter
My understanding from reading the abstract is:

    
      1.  The authors demonstrate that page faults on the IA32
          architecture are Turing complete.  Or more precisely,
          the automatic memory caching, address translation,
          interrupt handling, etc., which the CPU does behind the
          curtains while executing programs.
      2.  This means that if you carefully arrange the contents
          of memory, you can write "programs" that don't have any
          instructions.
      3.  This has implications for various sorts of (static?)
          analysis which people do on programs.  For instance,
          you might analyze a program as a series of instructions
          and think it does one thing, when the above side
          channel is actually doing another thing.
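
A way to build intuition for point 1: the fault-handling machinery supplies a
single dumb primitive, and one-instruction machines are a classic demonstration
that one primitive can suffice for Turing completeness. Here's a toy sketch of
`subleq` (subtract and branch if non-positive) in Python — an analogy only,
not the paper's actual x86 mechanism:

```python
# Illustrative analogy: a "subleq" one-instruction machine. Each step does
#   mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through.
# Despite having only this one operation, subleq is Turing complete --
# similar in spirit to computing purely via the CPU's fault handling.

def subleq(mem, pc=0):
    """Run a subleq program stored in mem as (a, b, c) triples.

    Halts when the program jumps to a negative address."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3

# Example program: add mem[10] into mem[11] using a scratch zero cell mem[9].
# Instructions occupy cells 0..8, data occupies cells 9..11.
mem = [
    10, 9, 3,    # Z -= A        (Z becomes -A; fall through to pc 3)
    9, 11, 6,    # B -= Z        (B becomes B + A)
    9, 9, -1,    # Z -= Z        (clear Z; result 0 <= 0, so jump to -1: halt)
    0,           # cell 9:  Z (scratch, initially zero)
    3,           # cell 10: A
    4,           # cell 11: B
]
subleq(mem)
print(mem[11])  # 3 + 4 = 7
```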

------
walid
I think I found something to spend some time on!

