
Open Hardware on Its Own Doesn’t Solve the Trust Problem - sabas_ge
https://www.bunniestudios.com/blog/?p=5706
======
klhugo
Excellent work. I completely agree that today the easiest option for developing
secure hardware is using FPGAs.

Two things to look forward to:

1. Use of open source FPGA synthesis and implementation tools

2. Use of open source FPGA chips :)

I've already seen some traction on open source FPGA tools, but open source
FPGA chips exist only in my head (as far as I know).

I'm a chip designer myself, and for years I have been thinking about
kickstarting something to pay for a tapeout of an open source FPGA. If anyone
is interested, let me know; I live in Ontario, Canada.

~~~
jacquesm
How will you know the mask of your FPGA isn't changed?

I think the idea behind all of this is sound, but at some point we will have
to accept that there will always be some remnant of insecurity, unless you are
willing to create your own fab or build your CPU out of discrete transistors,
which (1) can be exhaustively tested and (2) are too simple to contain
anything nefarious, given that there is no way to know where in a circuit a
given transistor will end up.

~~~
klhugo
A couple of things:

1. FPGAs are easier to verify because they are regular structures.

2. How do you insert a backdoor into an FPGA in the supply chain if you don't know the exact logic that is going to be uploaded?

~~~
jacquesm
> How do you insert a backdoor into an FPGA in the supply chain if you don't
> know the exact logic that is going to be uploaded?

Popularity of certain open core designs might be one way to gain advance
knowledge of how an FPGA might be used.

That suggests an interesting option: to scramble the input to an FPGA in such
a way that the device will still work but that it is even more unpredictable
how its internal connections will be used (otherwise you could take a number
of open core designs and arrange for your attack to work with those
configurations, which might be detectable in hardware or in the toolchain).

Better yet, scramble the bitstream on every boot (but what would do the
scrambling?).
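A toy sketch of that scrambling idea (the flat list-of-LUTs model and all
names are assumptions for illustration, not how real bitstreams work): a
per-boot seed permutes which physical site each piece of logic lands on, so a
fixed-location implant cannot know in advance which logic it will be tapping.

```python
import random

def scramble_placement(lut_configs, seed):
    """Assign each logical LUT to a pseudo-randomly chosen physical site.

    Returns (placement, routed): placement[i] is the physical site given
    to logical LUT i, and routed is the physical array of configurations.
    A fixed-site implant would have to guess which logic lands on it.
    """
    rng = random.Random(seed)
    sites = list(range(len(lut_configs)))
    rng.shuffle(sites)
    placement = {logical: physical for logical, physical in enumerate(sites)}
    routed = [None] * len(lut_configs)
    for logical, physical in placement.items():
        routed[physical] = lut_configs[logical]
    return placement, routed

# A different seed on every boot yields a different placement while the
# same set of LUT configurations is still implemented.
configs = ["AND", "OR", "XOR", "MUX"]
placement_a, routed_a = scramble_placement(configs, seed=1)
placement_b, routed_b = scramble_placement(configs, seed=2)
```

Real placement randomization has to respect timing and routing constraints,
which is exactly the tooling work the article describes; a bare shuffle only
illustrates the threat-model intuition.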

~~~
fanf2
TFA says “We rely on logic placement randomization to mitigate the threat of
fixed silicon backdoors” and goes on to talk about the tools they are working
on, but there is still some way to go.

~~~
jacquesm
Ah, good catch. I must have missed it while skimming, but it also seems kind
of obvious in hindsight.

------
ecesena
> open hardware is precisely as trustworthy as closed hardware. Which is to
> say, I have no inherent reason to trust either at all

I think the sentence should be rewritten as "closed hardware is precisely as
untrustworthy as open hardware", meaning that open hardware reveals its limits
and assumptions while closed hardware pretends there are none and that
everything is secure, when it's precisely not.

Trust, IMO, is not in the product; it's in the company or team building the
product.

> I’m a strong proponent of open hardware, because sharing knowledge is
> sharing power.

This is what, I think, makes a company or team more trustworthy: not just
making a product (even one that is really great and has the ambition to
protect millions) but also sharing knowledge with the ambition that more
people can learn.

~~~
0xcde4c3db
One might quibble over implications, but I think yours is basically the right
take. As Bruce Schneier said: "security is a process, not a product".

------
catern
One idea on how to verify equivalence between a design and a physical chip:

If we have the design, could we generate instruction sequences (or, in
general, input sequences) and deterministically predict the time required and
power consumed to execute those instruction sequences? Then we could fuzz the
chip with a bunch of generated code and measure that the consumed time and
power matches what we expect. Any backdoor would throw off the measurements.
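A rough sketch of the proposed check, under the (strong) assumption that we
have a golden per-instruction energy model derived from the design; the
instruction names, the linear model, and the tolerance are made up for
illustration:

```python
# Hypothetical side-channel fuzzing check: predict energy from the design
# model, compare against the measurement from the physical chip, and flag
# deviations beyond a tolerance.

ENERGY_NJ = {"add": 1.0, "mul": 3.0, "load": 5.0}  # toy per-op energy model

def predicted_energy(trace):
    """Sum the modeled energy over a generated instruction trace."""
    return sum(ENERGY_NJ[op] for op in trace)

def check_chip(trace, measured_nj, tolerance=0.05):
    """True if the measured energy matches the model within the tolerance."""
    expected = predicted_energy(trace)
    return abs(measured_nj - expected) <= tolerance * expected

trace = ["add", "mul", "load", "add"]   # model predicts 10.0 nJ
ok = check_chip(trace, measured_nj=10.2)    # within 5% of the model
bad = check_chip(trace, measured_nj=12.0)   # extra draw, e.g. an implant
```

As bunnie's reply below this comment explains, the hard part is the
tolerance: real chips vary enough from die to die that a margin wide enough
to avoid false positives is also wide enough to hide a small implant.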

Can anyone who knows hardware better comment on whether there are other kinds
of attacks that this wouldn't cover?

This idea is inspired by a paper I read once which used a somewhat similar
approach for verifying that a hardware system hadn't been infected by
persistent firmware malware. The authors had the system compute a function
which had a known memory-optimal implementation which required the use of all
the persistent memory available in the system (including firmware, etc.).
Unfortunately I can't find the paper now.

~~~
bunnie
While power signatures are repeatable on a single-chip basis, from chip-to-
chip the variation in individual transistor performance will require a
tolerance to be applied on the threshold criteria for determining if a given
power signature is correct or not across a population of chips. Furthermore,
power consumption is also dependent upon temperature and voltage so that will
need to be carefully controlled for the measurement. The wider the tolerance
on the power measurement, the larger the implant that can be buried.

Unfortunately transistor performance can vary quite widely for a single
design; recall that a single CPU mask design is often sold in several speed
grades. These aren't different designs, they are all the exact same design but
then tested to pass at a given frequency and power profile.

My intuition is that a simple implant would probably escape power
characterization, and larger implants can be left off until a trigger
condition is met. That condition could be made so that fuzzing is highly
unlikely to hit it, e.g. a particular 64-bit number pattern has to appear in a
given register to power on the implant.
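The trigger idea can be made concrete with a toy model (the magic constant
and names are hypothetical): a blind fuzzer has a 2^-64 chance per trial of
producing the wake-up pattern, so the implant stays dark and power-neutral
through any feasible test campaign.

```python
import random

MAGIC = 0xDEADBEEFCAFEF00D  # hypothetical 64-bit wake-up pattern

def implant_active(register_value):
    """The dormant implant powers on only when the exact pattern appears."""
    return register_value == MAGIC

# Blind fuzzing: each random 64-bit value has a 2**-64 chance of hitting
# the trigger, so even a million trials essentially never wake the implant.
rng = random.Random(0)
hits = sum(implant_active(rng.getrandbits(64)) for _ in range(1_000_000))
```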

The technique may be suitable to detect gross anomalous execution in a single
well-characterized chip but I feel any robust criteria against false positives
would also leave a sufficiently large margin for small implants to slip
through undetected.

~~~
catern
Thanks for the in-depth reply! It makes sense that the primary flaw is
nondeterminism: the physical chips produced from a given design vary too
widely in their individual characteristics.

It's interesting: there is a loose analogy to reproducible builds, which chase
down and squash every source of nondeterminism in software builds. I'm
guessing it's not possible to do that for chip fabs, for deep physics reasons
plus reasons of efficiency, but maybe some other trick (recording the "random
seed"?) could be used... And the thought that verified chips require a
verified chip-fab process is appealing! But I know nothing about chip
fabrication processes, so I'll leave it at that.

------
ecesena
Related project: [https://betrusted.io](https://betrusted.io)

(announced towards the 2nd half of the post)

------
jacquesm
That may be so, but if I have to choose between open source software and
closed software, I know which one I trust more, and I would assume the same
goes for open hardware, in spite of the difficulty of transferring trust at
the hardware level given the parties over which you have no control. Of
course, if your threat model includes parties changing the masks of your
chips, then all bets are off; but in general more openness is better, and my
implicit trust in people working on open source software and hardware is at a
different level than in the alternative.

If only because the former seem to be driven by altruistic motives, while the
alternatives have already been shown many times to be willing to sell their
(and your) soul to the devil for a price.

Open source hardware would need a verification and inspection method that
reliably determines whether the manufacturer delivered what they said they
would, if you want that level of trust. And even then those tools could be
compromised, and so on.

To put it more simply: between, say, Intel or AMD and 'Bunnie's chip factory',
I know which one I would trust more, because with Intel and AMD I _know for
sure_ that there will be a bunch of misery included, while with Bunnie I would
at least know that he didn't include it himself and would do his best to
prevent others from doing so. Trust, eventually, is always going to be in
people.

I'd love to see his 'non destructive method for the verification of chips'
become a reality. It would at least be an interesting exercise to compare what
we have with what we should have.

And if funding is an issue, this is exactly the sort of thing where I would be
very happy to throw money at a kickstarter.

~~~
ganzuul
> Trust, eventually, is always going to be in people.

Funny thing about trust; you can trust your enemies to behave poorly too.
Trust seems to be about predictability first and foremost, and great
mathematical work is being done to factor humans out of it. Autonomous
vehicles are subject to a formalized version of trust already and engineers
are working to get dependent type theory to do verification for them.

------
paul7986
I just got rid of my Google Home devices; I gave them away as Christmas gifts
and noticed that half of those in the gift exchange didn't want any spying
devices either.

Overall I really enjoyed using them, especially the digital picture frame, but
as long as they are spying devices meant to show you creepy ads (created by
Google and Amazon), I'm not interested. I wish Apple would offer comparable
tech at a normal price point. If not Apple, then another company that is all
about privacy, where the device only connects to the Internet to download and
store daily weather and traffic info. It would be a home-network-only device
that listens when you summon it and alerts you (via email or text) that a
hacker is trying to hack it if its Internet channel for downloading info is
compromised.

~~~
incompatible
Obviously, any device that communicates with an untrusted 3rd party whenever
it's turned on is not even trying to be trustworthy. Watching for any
unexpected network traffic would be a very basic step in assessing the
trustworthiness of a device.

I suppose, lacking the kind of efforts that Bunnie is making, the best you can
do is start with relatively simple, mass-produced hardware, purchase it in a
way that it can't be tampered with specifically for you, install an
open-source OS that doesn't communicate with third parties, and check the
software checksums.
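The checksum step can be sketched in a few lines (helper names are
hypothetical; this assumes the vendor publishes a SHA-256 digest that you
obtain over an independent, trusted channel):

```python
import hashlib
import hmac

def sha256_of(path, chunk=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_image(path, published_digest):
    """Compare against the published checksum (constant-time comparison)."""
    return hmac.compare_digest(sha256_of(path), published_digest)
```

Of course this only pushes the trust question one layer down: the checksum is
worthless if it was fetched over the same channel as the image, or if the
machine doing the verifying is itself compromised.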

Some applications, like vote counters, may not even need an Internet
connection. Creating a secure system to count votes isn't something I'd like
to be asked to do.

------
vardump
I think there's always another layer of trust issues no matter how trustworthy
your system is.

It might be that you shouldn't even trust the physical environment; even the
power supply can be used to do evil things. Or radiation, or even ambient
temperature.

Don't forget that code itself can affect the environment in unobvious ways.

EDIT: My point is that we need to be aware that security will never be a
solved problem. Nor can we consider security a software-only issue; we have to
take a holistic viewpoint that considers the _whole_ system.

There's of course the point where risk mitigation is not worth the cost.
That's another matter.

~~~
klhugo
You are right, but within the scope of chip design we can solve all of that.
What we cannot solve is supply-chain trust (how do you know your temp sensor
has not been tampered with?).

~~~
jacquesm
You should be able to exhaustively test a temperature sensor; it is typically
a two-lead passive device.

~~~
klhugo
It was just a simple example. There are countermeasures WAY more complex than
a simple two-lead sensor. I was part of a CC-certified chip design, and I can
tell you we had to implement tests for every single countermeasure in the
chip. But you still have no means to check whether your sensor/countermeasure
has been tampered with somewhere in the supply chain.

~~~
jacquesm
> There are countermeasures WAY more complex than a simple two-lead sensor.

Fair enough.

Has there, to your knowledge, ever been documented evidence that a mask was in
fact tampered with in a way that caused a system to be compromised?

~~~
klhugo
During a CC certification, the design house, mask shop, and fab are certified
to reduce the chances of the chip being tampered with. The certification
ensures that all those places have decent security practices and protocols. It
helps, but it is quite far from completely mitigating the risk.

I don't have any reference to a mask being modified to give you, but it is so
easy to do that we don't actually need evidence to be worried.

If you think about it, just changing the implantation parameters of the
transistors that form a ring oscillator used for random number generation can
bias it (and this does not even require modifying the mask).
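A toy model of that bias (the duty-cycle parameter standing in for an
implantation tweak is purely illustrative): a simple monobit statistic on the
output shows how a small skew pulls the fraction of ones away from 1/2, which
is exactly what long-term monitoring would have to catch.

```python
import random

def ring_oscillator_bits(n, p_one=0.5, seed=0):
    """Model an RNG sampling a ring oscillator. p_one stands in for a
    doping/implantation tweak that skews the duty cycle (0.5 = unbiased)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def monobit_bias(bits):
    """Fraction of ones minus 0.5; near zero for a healthy source."""
    return sum(bits) / len(bits) - 0.5

fair = monobit_bias(ring_oscillator_bits(100_000, p_one=0.5))
skewed = monobit_bias(ring_oscillator_bits(100_000, p_one=0.52))
```

A 2% skew is easy to see with 100k samples but devastating for key
generation, and a one-shot factory test with far fewer samples could miss it,
which is jacquesm's point below about continuous monitoring.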

~~~
jacquesm
Ah yes, that makes good sense. I once built a hardware RNG and it was
surprisingly hard to keep it stable over the longer term to satisfy the
certification criteria. In the end I managed but it was a lot of analog voodoo
and I can see how easy it would be to tamper with that in a way that would not
be detectable unless you monitored the device continuously.

RNGs are a weak spot. Thank you for that example.

------
kgwxd
Ideally, any piece of open hardware would be made by multiple competing
companies, and all versions would be perfectly interchangeable. If one were
discovered to have been tampered with, you could easily switch; the offending
company dies, and maybe some people are actually held accountable, so others
would be discouraged from trying it again.

While the industry is still innovating, such an ideal world may not arise; but
when things settle down and just about anyone can make a competitive version
of any type of chip, it could, though only if we demand open hardware now so
that that competition can start forming.

~~~
nrp
The modularity of the supply chain is part of the challenge, as Bunnie
illustrated. You could have interchangeability at the chip level or OEM level,
but still have the same bad actor tampering at the foundry, chip packaging
house, part distributor, SMT factory, FATP factory, 3PL, retailer, or any of
the shipping companies between those.

------
eeZah7Ux
> open hardware is precisely as trustworthy as closed hardware

Nice work, but the conclusion is absolutely unwarranted.

Security is all about mitigating risk. With closed hardware, as with closed
software, it becomes much easier to implement backdoors _but also_ to hide who
did it and when.

Unsurprisingly, a lot of closed source comes with spyware functions; look at
the phone "app market".

By all means an FPGA is better than trusting a SoC, but this does not mean
that all hardware is the same.

Intel's Management Engine is a good counterexample.

------
SemiTom
Single-source ISAs of the past relied on general industry verification
technologies and methodologies, but users and adopters of open-source
ISA-based processors will need to review the verification flows of the
processor and SoC:
[https://semiengineering.com/will-open-source-processors-cause-a-verification-shift/](https://semiengineering.com/will-open-source-processors-cause-a-verification-shift/)

------
ngneer
Great post. A well distilled argument as to the perceived virtues of open
hardware and the actual root of the trust problem. The post strikes a great
balance between showing the problem of complexity and yet highlighting a
feasible step in the right direction. Kudos for not ignoring supply chain
issues.

------
Iv
Wow, that talk on silicon implants was super interesting.

I commend him for continuing this quest for trustable hardware. I more or less
gave up on it. I fear it won't really be possible until the tech has moved so
far that we can produce silicon and PCBs at home on open hardware machines...

------
incompleteness
I like this subset of the Hardest Problem: trusting other human beings to get
enough things right.

------
carapace
I trust my sliderule. (This probably sounds like a snarky joke, so let me
qualify it by saying I'm pretty serious.)

I'm an "Apocalyptic": I literally believe that these are the "End Times" and
that our global civilization is about to tank (in ~10-50 years). So I'm not so
much worried about spooks in my chips as I am about being able to compute
effectively at all, at all.

In that context, I think pretty seriously about what computer hardware will be
available in a post-apocalyptic scenario. In that case chips will likely be
worthless due to unavailability of datasheets.

The things that will work are sliderules and nomographs[0], henges and other
geophysical "calendars", clockwork, fluidics, and relays. You _might_ be able
to make discrete transistors.

See the _Clock of the Long Now_ mechanism:
[https://en.wikipedia.org/wiki/Clock_of_the_Long_Now#Design](https://en.wikipedia.org/wiki/Clock_of_the_Long_Now#Design)
(Although a large pitch-drop[1] "water" clock[2] would be more reliable and
much simpler.)

\- - - -

Note that, if civilization _doesn't_ collapse, the Trust Problem will become
much worse as IoT advances, and, in the limit, _nanotech_... and you can't
trust anything. Within the limits of physics, reality will become permeated
with "ghosts", we will haunt the world with our own daemons. The Daemon-
Haunted World. ("'The Demon-Haunted World: Science as a Candle in the Dark' is
a 1995 book by astrophysicist Carl Sagan, in which the author aims to explain
the scientific method to laypeople, and to encourage people to learn critical
and skeptical thinking." [https://en.wikipedia.org/wiki/The_Demon-
Haunted_World](https://en.wikipedia.org/wiki/The_Demon-Haunted_World) )

\- - - -

Maybe we _will_ develop massive interlocking computer networks that respect
(your idea of) your human rights, but that's certainly not a solid projection
from current trends, eh?

[0] Image search for nomograph:
[https://duckduckgo.com/?q=nomograph&t=ffcm&atb=v60-1&iax=ima...](https://duckduckgo.com/?q=nomograph&t=ffcm&atb=v60-1&iax=images&ia=images)

[1]
[https://en.wikipedia.org/wiki/Pitch_drop_experiment](https://en.wikipedia.org/wiki/Pitch_drop_experiment)

[2]
[https://en.wikipedia.org/wiki/Water_clock](https://en.wikipedia.org/wiki/Water_clock)

~~~
nl
_I'm an "Apocalyptic": I literally believe that these are the "End Times" and
that our global civilization is about to tank (in ~10-50 years)._

Why do you think that?

Despite the pessimism in the US and some other parts of the developed world,
global civilisation has never been healthier.

It's true that nuclear war is a possible civilisation destroying event (as it
has been for ~70 years) but beyond that there are no threats that haven't been
there for millennia (eg, large asteroid strikes).

Things like global warming will cause huge destruction in some parts of the
world, disrupt and perhaps kill millions of people (eg, Bangladesh isn't in a
great way), and probably lead to wars that kill many more.

But even this is far from a global civilisation killer.

~~~
skyfaller
I think that total ecosystem collapse and the extinction of wildlife is a
possible scenario this century (perhaps even likely, if we continue with
business as usual rather than addressing the climate crisis).

You can see how it would happen with e.g. insects dying off:
[https://news.ycombinator.com/item?id=18541536](https://news.ycombinator.com/item?id=18541536)

Followed by birds that eat insects dying off:
[https://news.ycombinator.com/item?id=21018916](https://news.ycombinator.com/item?id=21018916)

Followed by the death of everything that depends on birds and insects, which
is probably a large percentage of the remaining wildlife. Similar extinction
scenarios are plausible with ocean life.

I guarantee that this would result in the collapse of civilization if it were
to occur.

This is one reason it's absolutely vital to halt greenhouse gas emissions.
Loss of habitat and pesticides are also huge problems for insects and other
wildlife, so dealing with global heating is necessary but perhaps not
sufficient to save them. Still, we have a clear deadline for when we must
reach carbon neutrality, so that seems like the top priority.

~~~
nl
I think that the decrease in biodiversity is dangerous, but not yet
threatening an extinction event - or at least I've never seen a credible
scientist who works in that field make that claim.

(I'd also note that both of these things seem to be caused primarily by
pesticide usage and other agricultural practices, not climate change)

~~~
carapace
FWIW, In re: "decrease in biodiversity"

I once saw the Monarch migration. I can try to describe it but I can't tell
you what it was like. Maybe if I was a poet. Can you feel it if I say "1/m^3"?
I was _within_ them. The whole world was butterflies.

It doesn't happen anymore. We killed them. We took their food. They weren't
hurting anyone. We didn't mean to kill them. We did it by accident. We weren't
paying attention.

A world has _already_ been destroyed.

