
Trust Models - feross
https://vitalik.ca/general/2020/08/20/trust.html
======
cosmojg
Whenever I read Vitalik's work, I find myself convinced that all of society
and its various problems can be boiled down to incentives and their alignment
or misalignment.

~~~
INGELRII
“Never, ever, think about something else when you should be thinking about the
power of incentives.” — Charlie Munger,
https://fs.blog/2017/10/bias-incentives-reinforcement/

 _Mechanism design_ is a subfield of game theory and economics that tackles
this directly.

Instead of taking a given game and figuring out what the outcome is when
agents play it, the problem is to design a game that creates desired outcomes
when selfish agents play it.

It's possible to design mechanisms where the utilitarian social-choice
function is the best outcome even when players are selfish. The most famous
result in the field is the Vickrey–Clarke–Groves mechanism, which achieves a
socially optimal solution in auctions. Quadratic voting and quadratic funding
are other interesting mechanisms that produce good outcomes; Vitalik Buterin
is involved with those too.
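For intuition, here's a toy sealed-bid second-price (Vickrey) auction — the single-item special case of VCG, where the winner pays the second-highest bid, making truthful bidding a dominant strategy. My own sketch; the names and numbers are made up:

```python
# Toy sealed-bid second-price (Vickrey) auction -- the single-item
# special case of VCG. The winner pays the second-highest bid, which
# makes bidding one's true value a dominant strategy.
# (Bidders and amounts are illustrative, not from the article.)

def vickrey_auction(bids):
    """bids: dict of bidder -> bid amount. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

print(vickrey_auction({"alice": 10, "bob": 8, "carol": 5}))
# -> ('alice', 8): alice wins but pays bob's bid, not her own
```

Because the price is set by the runner-up's bid, shading your own bid can only cost you the item, never lower your payment — that's the incentive alignment in miniature.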

~~~
cutemonster
> design a game that creates desired outcomes when selfish agents play it.

Curious whether these ideas have been applied to constructing healthy,
effective companies with happy employees?

Or maybe one could say everyone who comes up with some KPI is doing this?

------
motohagiography
If you invert the colours in the chart, you also get the consequences of their
failure modes. In designing tokenization schemes some years ago, we used a
trust model like this, and the basic problem reduces to the adage, "a security
system is only as strong as its recovery process."

In the case of the field of trust models Vitalik has illustrated, trust comes
down to the questions of: do you have a way to tumble your root of trust, do
you leave it static (like an HSM with destroyed keys), or do you federate. The
answer is of course, "it depends..." In the case of the balance of consensus,
proofs, and anonymity (e.g. ZK) the application defines the needs.

The issue I think is that these problems are all negatively defined and
presume a threat model before a use case. They are artifacts of that threat
model, in that they wouldn't exist if they weren't a reaction to it. These
things (blockchains and their applications) are essentially criticisms that
allow people to organize and defect to a certain extent, but they are lacking
a quality of essentialness I can't seem to find a name for. It's like they
aren't discovered things, but just artifacts of a constraint. Such fun to read
his stuff.

~~~
Taek
>If you invert the colours in the chart, you also get the consequences of
their failure modes.

Forgive me if I am misunderstanding what you are saying, but it sounds to me
like you are suggesting that the consequences of a 1-of-N failure are
inherently worse than the consequences of an N-of-N failure, which is not a
fundamental truth in any way. It is entirely possible for a 1-of-N system to
have better recovery modes than an N-of-N system; in fact, it's often much
easier to tumble/enhance/improve the root of trust in a 1-of-N system than in
an N/2-of-N system (for example, rolling trusted setup ceremonies vs.
multi-party computation).

~~~
motohagiography
The point I was making is that in the 1/N system, the failure of the one
person or element brings the whole thing down; it's a single catastrophic
failure mode, whereas with N/N at the other extreme, the failure is contained
to that group. The partial ones mean that the number of people/parts that have
to fail to bring the whole system down is greater. (The example case is
storing a single shared secret, which, if compromised, means re-enrolling all
N people in that secret again.)

You can replace a root of trust, but then you have to re-enroll all parties
into it. When you have partial or federated trust, the failure is contained to
the N/x group.

I'm thinking there may be a universal trade off between number of trusted
parts and the consequence/cost of failure.

The concept being that a trust model is essentially a non-reversible function
you iterate from its initial unique "trusted," conditions (like derived keys,
or a certificate chain), and if those unique initial conditions are
replicated, you have to re-compute the entire function from a new unique
initial condition, or risk "fake" branches.

A root of trust is the root node of a tree (or a DAG these days), and a
compromise of any of the roots will effectively isolate its downstream
branches. Compromising a single 1/N root means you compromise the whole tree,
where a federated multi-rooted tree means the damage can be contained - in the
model I'm thinking of.

TL;DR: recovery in the 1-of-N case requires re-instantiating or enrolling all
of N, which basically means bootstrapping the whole scheme. Fine for a closed
system, hard for an open one.
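To make the tree picture concrete, here's a toy sketch (my own illustration): model trust as a parent → children mapping and see what a compromise of a given node takes down:

```python
# Toy model (my own illustration): a trust hierarchy as a
# parent -> children mapping. Compromising a node invalidates
# everything downstream of it: compromising the single root takes
# down the whole tree, while a compromise lower in a federated
# tree is contained to that branch.

def downstream(tree, node):
    """All nodes whose chain of trust passes through `node`."""
    out, stack = set(), [node]
    while stack:
        for child in tree.get(stack.pop(), []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

trust = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
print(downstream(trust, "root"))  # the entire tree is affected
print(downstream(trust, "a"))     # damage contained to a's branch
```

The graph structure alone determines the blast radius — which is the "properties of the graph, not what the nodes are made of" point.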

~~~
Taek
I think you misunderstand what a 1 of N system means here. It means out of N
participants, any of them are sufficient to keep the network safe.

In any system, if N out of N participants are down, you are going to have a
hard time resetting. Typically it is easiest to reset from this failure in a 1
of N system because you only need one person to recover, and it doesn't matter
which person.

I'm oversimplifying a bit, but in most cases a 1-of-N system is strictly more
robust than an N/2-of-N system (or any system that requires more than one
honest participant).
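As a rough sanity check (my own toy model, not from the thread): assume each of N parties is independently compromised with probability p, and say a system that needs k honest parties fails when fewer than k stay honest:

```python
# Back-of-the-envelope sketch (my own toy model): each of n parties
# is independently compromised with probability p. A system needing
# k honest parties fails when fewer than k stay honest, i.e. when
# more than n - k parties are compromised.
from math import comb

def failure_prob(n, k_honest_needed, p):
    return sum(
        comb(n, c) * p**c * (1 - p)**(n - c)
        for c in range(n - k_honest_needed + 1, n + 1)
    )

p, n = 0.1, 10
print(failure_prob(n, 1, p))       # 1-of-N: all 10 must fall (~1e-10)
print(failure_prob(n, n // 2, p))  # N/2-of-N: 6+ compromised (~1e-4)
```

With p = 0.1 and N = 10, the 1-of-N system fails with probability ~1e-10 versus ~1e-4 for N/2-of-N. The big caveat is correlated compromise, which this independence assumption ignores — and that's where the disagreement above really lives.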

~~~
motohagiography
I think I do get your point, however my emphasis is that when you replace
"person" with "key" or "secret," it explains the consequences of the key
distribution problem.

The system that requires only one honest participant seems stable, except
that that one person has a small but non-zero likelihood of compromise. For a
system that requires N/2 honest people, each of whom has a probability of
compromise, it depends on whether those probabilities are independent or
dependent. In the dependent case, you are correct: the more people you need
to trust, the higher the likelihood of one bringing the whole thing down. But
in the independent (e.g. federated) case, the consequences are contained.

I'm saying that the diagram as interpreted from the perspective of the
consequences of failure shows that if you turn the 1/N section red instead of
green, it indicates the level of catastrophe. We could be running up against
the limits of the chart's heuristic analogy, but viewing trust as flowing
downstream over time from the root node of a tree, and asking how many root
nodes of trust you need for a robust system, shows that the fewer the roots of
trust, the greater the impact of a compromise.

I'd considered whether replacing those node keys with multipart keys would
change things, but the whole concept of a compromise is an exogenous
phenomenon, that is, what this trust tree/graph is made of doesn't change the
effect of some super force finding a way to compromise a node. If a trust
model is just a graph, then it will be the properties of that graph that
determine the qualities of the model - and not what the nodes and
relationships themselves are made of.

That final point would be my big-leap conjecture.

------
statquontrarian
It seems odd to not discuss the more fundamental issues of trusting the
infrastructure. All this virtual reality is on top of a physical reality
controlled by governments and powerful interests. What happens when said
powers declare cryptocurrencies illegal (app store removals, RST packets,
etc.), or try to take them over with brute force?

~~~
Taek
This tends to be a focus of the Bitcoin community a lot more than other
cryptocurrency communities. Bitcoin has technologies such as ASN-based Sybil-
attack protections [1], satellite broadcasts that cover most of the land area
of the earth [2], and setups that allow Bitcoin to be broadcast over ham
radio [3].

Of course that's not to say the problem is ignored by other communities. Many
people are well aware of the full set of dependencies of these crypto projects
and the ways that external forces might be disruptive. And many people are
working on increasingly sophisticated ways to eliminate these dependencies or
ensure viable alternatives if worst comes to worst.

[1]: https://github.com/bitcoin/bitcoin/issues/16599

[2]: https://blockstream.com/satellite/#satellite_network-coverage

[3]: https://www.wired.com/story/cypherpunks-bitcoin-ham-radio/

~~~
brian_cloutier
These are all cool technologies which raise the bar an attacker must meet in
order to disrupt Bitcoin, but it's worth noting that at very least America and
China could absolutely overwhelm those defenses.

The security of Nakamoto consensus (Bitcoin and co) relies on a level of
broadcast which is pretty incompatible with a world where the network is
hostile and looking to disrupt your traffic.

~~~
centimeter
> relies on a level of broadcast which is pretty incompatible with a world
> where the network is hostile and looking to disrupt your traffic.

It would be prohibitively expensive and difficult to disrupt point-to-point
radio (e.g. ham) _and_ satellite communications enough to cripple Bitcoin.

~~~
brian_cloutier
The satellite doesn't help very much. It doesn't help at all if you want to
send a transaction or Mike your own blocks. It's also very easy to know
whether it's broadcasting. So if the mechanism by which Bitcoin is banned is
legal, it's easy to find the owner and prosecute them for breaking the law.

I'm not incredibly familiar with ham, but if you ever broadcast then you're
inviting the authorities to find you and shut you down. If you strictly
communicate point-to-point, it doesn't seem likely that you'd be able to tell
the entire world about new blocks within a reasonable amount of time. If you
keep the 10-minute block interval, then delays longer than, say, 1 minute
would be extremely problematic. I'm not sure how a ham radio network could
reliably tell the world about new blocks within 60 seconds without alerting
the authorities.

------
brian_foxx
His views are just simplistic, too simplistic. Additionally, he delves into
topics that have already been looked at in a more abstract way.

