
IRONSIDES DNS Server in Ada/SPARK
https://ironsides.martincarlisle.com/
======
mcejp
If you prefer to read code on GitHub:
[https://github.com/mcejp/IRONSIDES](https://github.com/mcejp/IRONSIDES)

------
capableweb
> We describe the development of IRONSIDES, an implementation of DNS that is
> provably invulnerable to remote code execution exploits and single-packet
> denial of service attacks

Sounds like famous last words. How can something be provably invulnerable to
RCE? Wouldn't you only be able to prove that it's invulnerable to known RCEs?

From
[https://ironsides.martincarlisle.com/globecom_2012.pdf](https://ironsides.martincarlisle.com/globecom_2012.pdf)

~~~
albntomat0
The key is that it’s proven within the formal method system used. For example,
you can prove that you don’t have an out of bounds write, but not that a
hardware bug doesn’t exist.

There was a proof that SSL or TLS (can’t remember which) was secure as a
protocol, but it was later exploited, because the exploit relied on something
the spec didn’t cover.

~~~
MaxBarraclough
> For example, you can prove that you don’t have an out of bounds write, but
> not that a hardware bug doesn’t exist.

Also, a compiler bug could introduce such a flaw.

To my knowledge, the major Ada compilers have a very good reputation for
correctness, but none are formally verified, and there's no such compiler on
the horizon.

> There was a proof that SSL or TLS (can’t remember which) was secure as a
> protocol, but it was later exploited, because the exploit relied on
> something the spec didn’t cover.

Typically we'd expect the only vulnerabilities in such an implementation would
be side-channel attacks, such as timing attacks. Do you have any more details
on this? I couldn't find anything with a quick google.

~~~
albntomat0
Top of the second page:
[https://link.springer.com/article/10.1007/s00165-012-0269-9](https://link.springer.com/article/10.1007/s00165-012-0269-9)

My understanding is that both Needham–Schroeder and SSL had formal
verification prior to the breaks, but the breaks relied on properties outside
of what was proven (which was thought to be complete at the time).

~~~
MaxBarraclough
Great, thanks. I found a freely available pre-print at
[http://alfredo.pironti.eu/research/sites/default/files/cf11.pdf](http://alfredo.pironti.eu/research/sites/default/files/cf11.pdf)

The relevant content is mostly on page 2. They seem to be saying that when it
comes to crypto, the requirements work and the implementation work are both
challenging and error-prone, even for expert practitioners. I don't think
they're exactly arguing against formal methods, though:

> It is significant, for example, that the above mentioned flaw affecting the
> Needham-Schroeder public-key protocol could be found by applying formal
> methods

Later, under _Motivation_, regarding what are essentially integration bugs:

> in OpenSSL an error condition returned by a cryptographic function was
> incorrectly interpreted by the function caller, making the application
> accept corrupted data; such a fault cannot be found if a formal model that
> has no relation with the implementation code is analyzed

Not exactly a demonstration of a weakness of formal methods. Instead, an
instance of a bug (a normal one, not a side-channel vulnerability) occurring
in a system which was only partially formally verified.

> a large gap still exists between these models and a real-world protocol
> implementation and its execution.

A problem with real-world implementation, not with formal methods. I wonder
what they would have made of 'SPARKSkein', released around the time that paper
was published. [0] I suppose it wouldn't have been anything all that new -
integration bugs could still occur.

[0]
[https://www.adacore.com/papers/sparkskein](https://www.adacore.com/papers/sparkskein)

~~~
albntomat0
My takeaway is that it's difficult to turn formal methods into any sort of
actual security guarantee, because of the difficulty of connecting them to a
real-world system in a meaningful way. While formal methods could produce
something meaningful given the proper "inputs", getting that done is hard.

~~~
MaxBarraclough
I'm more optimistic. There's real value in building ultra-low-defect crypto
code, and as far as I can tell, formal methods have a good track record there.
We already knew that integration into non-verified code isn't guaranteed to be
free of defects - that's out of scope by definition. That's somewhere safe
languages can help though. What the paper describes as _corrupted data_ might
well be the kind of bug that often happens in C, but rarely happens in a safe
language.

The other question is vulnerability to side-channel attacks. Today's formal
methods aren't much use against them.

