
Verifying ARM processors against the official ARM specification - jsnell
https://alastairreid.github.io/alastairreid.github.io/using-armarm/
======
ChuckMcM
This is important work. And the author is not wrong about the sad state of
hardware description in Verilog. One of the reasons I chose to use VHDL
over Verilog was that it was somewhat more robust in terms of the complete
implementations you could get (especially cheaply).

I find it interesting that ARM is doing this now versus all along. Was V8 the
point where their licensees revolted and said "Hey, your spec is full of bugs,
we're not going to pay a license fee for this crap." ?

~~~
bsder
> One of the reasons I chose to use VHDL over verilog was that it was somewhat
> more robust in terms of complete implementations you could get (especially
> cheaply)

A) Icarus Verilog is open source and free, and the author works with a couple
of people on the Verilog committee last I checked ...

B) What VHDL implementations are free and open source?

~~~
nickpsecurity
An old one here:

[http://ghdl.free.fr/](http://ghdl.free.fr/)

Just found this verification toolkit Googling for the one above:

[http://osvvm.org/about-os-vvm](http://osvvm.org/about-os-vvm)

~~~
jevinskie
nvc (LLVM based) is also impressively complete for the number of people who
have worked on it.

[https://github.com/nickg/nvc](https://github.com/nickg/nvc)

------
femto
It's interesting (and no coincidence?) that the one caveat with the OK Labs
(now General Dynamics) verified microkernel was "assuming the hardware
satisfies its specification". This work neatly plugs that gap, so the
verification goes further down (with the new caveat that the EDA tools
correctly implement the Verilog source?)

Does ARM have a supervisor mode, along the lines of the Intel Management
Engine, which would leave a chink in the provably correct operation?

~~~
jdub
Nitpick: It was NICTA's (now Data61 at CSIRO) seL4 that was verified, not OK
Labs' OKL4. The seL4 copyright is owned by General Dynamics.

~~~
femto
Thanks for the correction. I'm curious: what was the exact relationship
between NICTA (now Data61) and OK Labs? I thought the verified kernel work was
spun off into OK Labs, but that's not the case?

Answering my own question, based on a bit of web surfing: OK Labs was spun out
in 2007. The formal proof was completed by NICTA in 2009. This verification
work continues today, within Data61.

[http://ssrg.nicta.com.au/projects/TS/](http://ssrg.nicta.com.au/projects/TS/)

~~~
happypanther
Let's see how much I get right here.

NICTA was the Australian government side; OK Labs was a private company that
spun out of it.

Some early versions of OKL4 were open source and based on code from
NICTA (& others), but there was an internal rewrite that was OK Labs
proprietary only.

seL4 is yet another entirely separate codebase, written by NICTA, but where
the copyright was given to OK Labs by NICTA.

~~~
jdub
The copyright of seL4 was not given (or licensed) to OK Labs AFAIK. It's
currently owned by General Dynamics, and OK Labs no longer exists.

------
kev009
It's amazing how pretty much all HW development treats formal verification as
business as usual.

~~~
pjc50
It's because you can't update hardware, and the NRE is _huge_. If every deploy
cost you $250k you'd want formal verification too.

Mostly the existing toolchain does _formal equivalence_ checking, which isn't
quite the same thing; it verifies that the behaviour of the output circuit is
the same as that of the input HDL (the RTL source). The input HDL is subject
to testing in a very recognisable manner - unit tests and integration tests
that verify intended functionality, usually with coverage reporting so you can
be sure you've not forgotten areas of functionality.
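To make the distinction concrete, here is a toy sketch of equivalence checking in the spirit described above: exhaustively comparing a behavioural "RTL" function against a "synthesized" gate-level version over every input combination. (The function names and the brute-force enumeration are illustrative only; real equivalence checkers use SAT solvers or BDDs rather than enumeration.)

```python
from itertools import product

def rtl_majority(a, b, c):
    # Behavioural spec: output is 1 when at least two inputs are 1.
    return int(a + b + c >= 2)

def gate_majority(a, b, c):
    # "Synthesized" netlist: the same function built from AND/OR gates.
    return (a & b) | (b & c) | (a & c)

def equivalent(f, g, n_inputs=3):
    # Check f and g agree on every possible input vector.
    return all(f(*bits) == g(*bits)
               for bits in product((0, 1), repeat=n_inputs))

print(equivalent(rtl_majority, gate_majority))  # → True
```

This says nothing about whether the RTL itself is *correct* - only that the two descriptions behave identically, which is exactly why the RTL still needs its own unit and integration tests.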

I feel that hardware formal verification is easier than software verification
because the hardware can't be dynamically created, and there aren't weird non-
local effects so it's easier to partition the problem.

(Incidentally, this is the big failure flag for Ethereum: people building
smart contracts _without_ formal verification, in an environment where
redeployment is extremely difficult and expensive.)

------
raldi
Why does "Cumulative Defects %" go up over time instead of down?

~~~
astrange
Because it's cumulative!

~~~
therein
Wouldn't "Cumulative Defects %" going up imply delta defects is greater than
delta non-defects? Which shouldn't be the case.

~~~
nardi
It's not a very interesting graph. Just a measure of the % of defects they
found through time with respect to the total number of defects they found. It
shows that the process was mostly linear (indicating that there are probably
still more defects to find).

------
gravypod
So I don't understand. Why make the "formal" specification for the processor
rather than one in Verilog (or some other immediately testable system) to
start with?

If you can't check it directly in a provable formal system, then it doesn't
really help with validation, and you have to jump through a lot of hoops to
get there (like OP did).

~~~
adreid
It takes a lot of effort to write a specification. Just think how much
software you have ever seen complete specifications for - and how close that
number is to zero.

So if you are going to write a specification, you really want to get maximum
bang per buck out of the effort. Which means that it has to be good for many
purposes:

\- documentation
\- verifying processors
\- dynamic verification (i.e., testing)
\- formal verification
\- testing test suites
\- verifying application software - dynamic and formal verification
\- verifying OSes, RTOSes, microkernels and other bare-metal software
\- verifying compilers
\- generating machine readable descriptions

Verilog is great for one of these - but less good for all the others. Starting
with a relatively neutral language gives you a lot more flexibility. It makes
any individual task a bit harder than if that was your only goal but you save
effort overall.

There is also the virtuous cycle that bug fixes and improvements that help any
one goal are folded into the master copy - so all other sub-projects benefit.
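The "one neutral description, many consumers" point can be sketched with a toy executable spec for a 4-bit ADD instruction (entirely hypothetical names and semantics, not ARM's ASL): the same function serves as a reference model for testing implementations and as the authoritative description for documentation, so the two can't drift apart.

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1

def spec_add(a, b):
    """Reference semantics for a toy 4-bit ADD: result plus carry/zero flags."""
    full = a + b
    result = full & MASK
    return result, {"C": int(full > MASK), "Z": int(result == 0)}

# Consumer 1: dynamic verification - check an "implementation" against the spec.
def impl_add(a, b):
    return (a + b) & MASK

for a in range(1 << WIDTH):
    for b in range(1 << WIDTH):
        assert impl_add(a, b) == spec_add(a, b)[0]

# Consumer 2: documentation / test generation - the same function is the
# single machine-readable statement of what ADD does.
print(spec_add(0b1111, 0b0001))  # → (0, {'C': 1, 'Z': 1})
```

A Verilog model would serve the "verifying processors" consumer well, but is much more awkward as an input to theorem provers, test-suite checkers, or documentation generators - which is the trade-off described above.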
