
Robust Design Through Diversity (1999) [pdf] - stass
http://worrydream.com/refs/Sussman%20-%20Robust%20Design%20through%20Diversity.pdf
======
raphaelss
It reminded me of two unrelated things: Erlang and FFTW.

Erlang due to its error handling philosophy and FFTW due to the following:

 _Question 4.1. How does FFTW work?

The innovation (if it can be so called) in FFTW consists in having a variety
of composable solvers, representing different FFT algorithms and
implementation strategies, whose combination into a particular plan for a
given size can be determined at runtime according to the characteristics of
your machine/compiler. This peculiar software architecture allows FFTW to
adapt itself to almost any machine.

For more details (albeit somewhat outdated), see the paper "FFTW: An Adaptive
Software Architecture for the FFT", by M. Frigo and S. G. Johnson, Proc.
ICASSP 3, 1381 (1998), also available at the FFTW web page._ [1]

[1]
[http://www.fftw.org/faq/section4.html#howworks](http://www.fftw.org/faq/section4.html#howworks)
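The planning idea the FAQ describes can be sketched like this (a toy
illustration, not FFTW's actual API or algorithms; the solver and function
names are invented): several interchangeable solvers for the same problem are
timed on a representative input at runtime, and the fastest becomes the plan
for this machine.

```python
import timeit

# Two interchangeable "solvers" for the same problem (prefix sums here,
# standing in for different FFT strategies). Both return identical results;
# they differ only in performance characteristics.

def prefix_sums_naive(xs):
    # O(n^2): recompute each prefix from scratch.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def prefix_sums_running(xs):
    # O(n): keep a running total.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def make_plan(solvers, sample):
    # FFTW-style "planning": benchmark each candidate on this machine
    # and keep the fastest one as the plan.
    timings = {s: timeit.timeit(lambda: s(sample), number=5) for s in solvers}
    return min(timings, key=timings.get)

plan = make_plan([prefix_sums_naive, prefix_sums_running], list(range(2000)))
result = plan([1, 2, 3, 4])
```

Real FFTW composes many such solvers recursively into a plan for a given
transform size; the sketch only shows the measure-then-choose step.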

Edited for one more comment:

The last paragraph also reminds me of Alan Kay, with its focus on the
interaction between systems rather than their specific logic.

~~~
fa
FYI: FFTS [1] in many (most?) cases outperforms FFTW, _without_ runtime
calibration. It gets there by using a smarter algorithm. It doesn't have the
bells and whistles of more complete libraries—e.g., it is restricted to
power-of-two transforms for now—but hopefully this will change.

[1] [http://anthonix.com/ffts/](http://anthonix.com/ffts/)

~~~
oh_sigh
Does it outperform FFTW on all architectures/"runtimes"?

------
nickpsecurity
Good paper. This type of research lives on under the banner of safety/security
through automatic diversification or diversity. Another phrase used in many
papers is "moving target." I called it Security through Diversity in my own
work. A quick Google shows most of us were similarly [not] creative. ;)

I used diversity extensively in my previous security work, combined with
obfuscation and strong interface protection. It's the best protection against
nation-state attacks on software. The simplest example was running requests a
group at a time on three separate systems with different ISAs and toolchains
while checking results. Another form I derived was having mutually suspicious
parties do different implementations of the same behavior. The approach I
used most was constantly changing implementation details of key services,
from the app to the code serving the interface. The interface attempts to
hide all implementation details and sanitize the input. The result is indeed
a moving target that attackers rarely hit or expand on.
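The "three separate systems, checking results" scheme above can be sketched
as majority voting over diverse implementations (a toy illustration; the
implementation and function names are invented, and real deployments would
differ in toolchain, ISA, and codebase rather than being three Python
functions):

```python
from collections import Counter

def impl_a(x):
    # One independently built implementation.
    return x * x

def impl_b(x):
    # A second, differently written implementation of the same behavior.
    return x ** 2

def impl_c(x):
    # A deliberately faulty variant, to show the vote masking a single fault.
    return x * x if x != 3 else 0

def voted(x, impls=(impl_a, impl_b, impl_c)):
    # Run the request on all implementations and take the majority answer.
    results = Counter(f(x) for f in impls)
    answer, votes = results.most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority; results diverged: %r" % results)
    return answer
```

With three voters, any single compromised or faulty implementation is
outvoted; divergence without a majority is surfaced as an error rather than
a silently wrong answer.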

Modern IT is expected to be much more efficient in terms of labor and
productivity. Unreasonably so I'll add. The best way to do diversification now
is doing (a) portable software + interfaces, (b) containerizing
implementations for easy swapping, and (c) using automated methods being
developed in academia to diversify code without human intervention. Two of
these can piggy-back on existing work in OSS and cloud tech. Below is a paper
studying some approaches to the third:

[http://www.ieee-security.org/TC/SP2014/papers/SoK_c_AutomatedSoftwareDiversity.pdf](http://www.ieee-security.org/TC/SP2014/papers/SoK_c_AutomatedSoftwareDiversity.pdf)

------
ianbicking
In a discussion I had recently we started talking about metaprogramming, which
in practice is only the tiniest and most minimal execution of the implied
idea: writing programs that operate on programs. Modern convention is to treat
programs as black boxes that only the developers _of that specific program_
can hope to reason about. And so metaprogramming is when a programmer writes
or very carefully uses a component that operates on the program, but the
result is just the program – you can't run the program without the applied
metaprogramming, nor can you apply metaprogramming to a program and expect it
to work. Debugging is the exception.

Underneath the suggestion in the paper I see something similar. "Programming"
might be an overly specific term. But a call to apply reasoning to systems and
computation. Not applying reasoning to _specific_ systems, as a programmer
might analyze or test a specific program (what we're already doing), but to
systems as a whole. I can see this in other computing developments. Operating
systems are the result of reasoning about hardware. The result looks very
different from coding directly to the hardware, especially in our mature
state. But programs still are binaries with machine code. Before the OS of
course was the program itself. And so what would it look like if we applied
this kind of reasoning to programs?

But oh, we already apply so much reasoning to programming! But in a certain
way. We apply reasoning to compilers, which programmers use as they will. We
build libraries and hope they are used well. And then we pull these together
to some actual purpose in the program. And we assemble programs into systems.
Each step empowers the next layer, but we don't apply (much) software to
explore, verify, or extend these layers. Our tests are typically written as
binary pass/fail systems, with no generalized attention to WHY something fails
– and we can't apply generalized attention because the program is too opaque.

~~~
nickpsecurity
I agree and have proposed the same thing. I was trying to build on my old
toolkit which aimed at RAD w/ C/C++ compatibility, BASIC safety, and LISP
flexibility. I essentially implemented a BASIC-like language, compatible with
C/C++ functions, in LISP. It let me have LISP macros and interactive
development, with the result auto-generating C/C++ code that reused their
compilers. It worked great while I had it. I know the BASIC angle is funny to
many. However, it's how I started and iterated fast with _readable,
no-blue-screen_ code. I'd have used a Wirth language if I had known of them
then.

So, next question was "How to make an environment to allow easier development
of high-assurance, portable systems?" My platform was a 4GL without a 4GL's
problems: I want a 4GL for highly secure or reliable software. I noticed that
portability by itself usually requires a bunch of variations on the same
basic thing. LISP macros had handled that for me before. So, my concept was
to use something like Racket Scheme for the meta level while building a
mockup of the constructs of an imperative language (Wirth's, Go, or C).
Transformations for specific systems are encoded as meta transforms that
operate on the base program.

Initially, the thing just generates code it feeds through existing compilers.
From there, you model the basic constructs (e.g., control flow, stacks) in
both a generic form and one optimized for certain situations (or ISAs). Language
abstractions are modeled as compile-time transforms on these. Constraints,
pre/post conditions, invariants, and so on are added as needed for next phase:
type/interface checking + automatic production of verification conditions,
tests, static/dynamic checks, covert channel analysis, and so on based on
annotated code. Each refinement uses data from the step before it for a
correspondence check (e.g., trace-based). It ends at typed assembly (or maybe
LLVM) modeled in the same system, subject to the same correspondence checks
and extra analysis at that level.
Any run reuses what's already done and not changed while supporting multicore
to reduce run time. Critical checks run in background as developer modifies
the software.
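The "program as data, meta transforms, emit code for existing compilers" idea
can be sketched in miniature (a hypothetical illustration, nothing like the
actual toolkit described above; the node shapes, transform, and names are all
invented): a tiny base program is held as a tree, a meta-level transform
rewrites it, and a final step emits C source to feed through a real compiler.

```python
# A program is a list of statements; expressions are nested tuples.
program = [
    ("assign", "x", ("const", 2)),
    ("assign", "y", ("mul", ("var", "x"), ("const", 8))),
    ("ret", ("var", "y")),
]

def strength_reduce(expr):
    # Example meta transform: rewrite multiplication by a power of two
    # as a left shift.
    if isinstance(expr, tuple) and expr[0] == "mul":
        a, b = strength_reduce(expr[1]), strength_reduce(expr[2])
        if b[0] == "const" and b[1] > 0 and b[1] & (b[1] - 1) == 0:
            return ("shl", a, ("const", b[1].bit_length() - 1))
        return ("mul", a, b)
    return expr

def emit_expr(expr):
    # Lower an expression tree to C source text.
    op = expr[0]
    if op == "const":
        return str(expr[1])
    if op == "var":
        return expr[1]
    sym = {"mul": "*", "shl": "<<"}[op]
    return "(%s %s %s)" % (emit_expr(expr[1]), sym, emit_expr(expr[2]))

def emit_c(prog):
    # Generate a complete C translation unit for an existing compiler.
    lines = ["int main(void) {"]
    for stmt in prog:
        if stmt[0] == "assign":
            lines.append("    int %s = %s;" % (stmt[1], emit_expr(stmt[2])))
        elif stmt[0] == "ret":
            lines.append("    return %s;" % emit_expr(stmt[1]))
    lines.append("}")
    return "\n".join(lines)

transformed = [(s[0], s[1], strength_reduce(s[2])) if s[0] == "assign" else
               (s[0], strength_reduce(s[1])) for s in program]
```

Because the program stays data until the last step, further transforms
(checks, annotations, alternate lowerings per target) compose the same way,
which is the point of the staged design described above.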

The software is written and specified at a high level, with metaprogramming
transforming, analyzing, and optimizing it down to the lowest levels. The
highest assurance levels, using tools such as Coq or HOL, would be able to
leverage the HLL code for easier spec-to-module/code mapping while trusting
certified transforms to do the rest.
Your thoughts aside from how much work this might be? ;)

------
dang
> The ARPAnet is one of the few engineered systems in use that is robust in
> the way we desire.

What are others?

------
panic
Is designing several independent systems (at least three, for voting to work)
more or less expensive than formally verifying a single system?

~~~
azeirah
There's an important difference between a formally verified system and
redundant systems, however. At least, I think so!

A formally verified system is verified "software". Different parts of a
redundant system may rely on different parts of the hardware. If some hardware
fails, a redundant system can still succeed, while a formally verified system
is likely to fail.

~~~
panic
Given some model of the way circuits work, it's possible to formally verify
hardware as well. Maybe your model doesn't quite match reality and your system
will still fail. But redundant systems can also experience simultaneous
failures of their components. It would be interesting to know which approach
is more effective.

------
dkarapetyan
Sounds like he is describing Erlang and Smalltalk.

