
> Great use case for blockchain technology

>> CT logs are already chained

Trillian is a centralized Merkle tree: as far as I understand, it doesn't support native replication, and there is still a password that can delete or recreate the chain. We can watch for inappropriate or errant modifications (due to e.g. solar flares) by manually replicating and verifying every entry in the chain, or by trusting that everything before whatever we consider to be a known hash (which could be colliding) is unmodified, even though we never verified those earlier entries.

According to the Trillian README, Trillian depends upon MySQL/MariaDB, so internal/private replication is only as good as the SQL replication model (which lacks a distributed consensus algorithm such as Paxos).

A Merkle tree alone is not a blockchain. It provides more assurance of data integrity than a regular tree, but verifying that the whole chain of hashes actually is good, and distributed replication without having to configure e.g. SSL certs, are primary features of blockchains.
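To make the integrity point concrete, here is a minimal sketch of a binary Merkle root in Python (a toy scheme, not RFC 6962's exact construction): changing any entry changes the root, so a verifier who recomputes the root over all entries can detect tampering. What the tree itself can't tell you is *who* was allowed to rewrite it, which is the replication/consensus question.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy binary Merkle root: 0x00 prefixes leaves, 0x01 prefixes
    interior nodes (domain separation against second-preimage tricks)."""
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

entries = [b"cert-1", b"cert-2", b"cert-3"]
root = merkle_root(entries)
tampered = merkle_root([b"cert-1", b"cert-X", b"cert-3"])
assert root != tampered  # any modified entry changes the root
```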

There are multiple certificate issuers, multiple logs, and multiple log verifiers. With no single point of failure, that doesn't sound centralized to me?

Which components of the system are we discussing?

PKI is necessarily centralized: certs depend upon CA certs, which can depend upon further CA certs. If any CA is compromised (e.g. by key theft or brute force, however infeasible brute force is given current ASIC resources' preference for legit income), that CA can sign any cert or CRL. A CT log and a CT log verifier can help us discover that a redundant, and so possibly unauthorized, cert has been issued for a given domain listed in an x.509 cert's CN/SAN.
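The detection side can be sketched as a toy monitor: given parsed log entries, flag certificates for a watched domain issued by a CA you don't expect. The (domain, issuer) entry shape here is hypothetical; a real monitor would fetch and parse X.509 entries from a log's get-entries endpoint.

```python
# Toy CT-monitor sketch. Entry format is hypothetical: (domain, issuer) pairs
# standing in for parsed X.509 log entries.

def unexpected_issuances(entries, watched_domain, expected_issuers):
    """Return entries for watched_domain whose issuer isn't on the allowlist."""
    return [(domain, issuer) for domain, issuer in entries
            if domain == watched_domain and issuer not in expected_issuers]

log_entries = [
    ("example.com", "Let's Encrypt"),
    ("example.com", "Shady CA"),      # possibly unauthorized issuance
    ("other.org",   "Let's Encrypt"),
]

alerts = unexpected_issuances(log_entries, "example.com", {"Let's Encrypt"})
# alerts == [("example.com", "Shady CA")]
```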

The CT log itself, though - Trillian, for Google and now Let's Encrypt, too - runs on MySQL, which has one root password.

The system of multiple independent, redundant CT logs is built upon databases that depend upon presumably manually configured replication keys.

Does my browser call a remote log-verifier API over HTTPS (hopefully pinned, with a better fingerprint than MD5)?

There are multiple issuers, so from an availability point of view, if one is down, you could choose another. They submit to at least two logs, so if one log is unavailable you could read the other one. This is a form of decentralization.

Now, from a security point of view, it only takes breaking into one issuer to issue bad certificates. But maybe classifying everything as either centralized or decentralized is too simple?

Centralized and decentralized are overloaded terms. We could argue that every system that depends upon DNS is centralized (and thus has a single point of failure).

We could describe replication models as centralized or decentralized. Master/master SQL replication is still not decentralized (regardless of whether there are multiple A records or multiple static IPs configured in the client).

With PKI, we choose the convenience of trusting a CA bundle over having to manually check every cert fingerprint.
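The manual alternative amounts to comparing a hash of the certificate's DER bytes against a value obtained out of band. A minimal sketch of that fingerprint computation (the colon-separated hex form browsers typically display):

```python
import hashlib

def fingerprint_sha256(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate,
    rendered as colon-separated uppercase hex pairs."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Usage (assumed file name; e.g. a cert exported from the browser):
# der = open("server.der", "rb").read()
# print(fingerprint_sha256(der))   # compare against a pinned value
```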

Whether a particular chain is centralized or decentralized is often bandied about. When there are a few mining pools that effectively choose which changes are accepted, that's not decentralized either.

That there are multiple redundant independent CT logs is a good thing.

How do I, as a concerned user, securely download (and securely mirror?) one or all of the CT logs and verify that every record hash actually depends upon the previous ones? If the browser relies upon a centralized API for checking hash fingerprints, how is that decentralized?
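For the verification half of that question: once you have downloaded all the entries (e.g. via a log's get-entries endpoint), you can recompute the RFC 6962 Merkle Tree Hash yourself and compare it to the root hash in the log's signed tree head. A sketch of the RFC 6962 §2.1 recursion (fetching and signature checking are left out):

```python
import hashlib

def mth(entries: list[bytes]) -> bytes:
    """RFC 6962 Merkle Tree Hash for a non-empty entry list:
    leaves are SHA-256(0x00 || entry), interior nodes are
    SHA-256(0x01 || left || right), splitting at the largest
    power of two smaller than n."""
    n = len(entries)
    if n == 1:
        return hashlib.sha256(b"\x00" + entries[0]).digest()
    k = 1
    while k * 2 < n:
        k *= 2
    left, right = mth(entries[:k]), mth(entries[k:])
    return hashlib.sha256(b"\x01" + left + right).digest()

# After downloading all entries, compare mth(entries) against the
# root_hash field of the log's signed tree head (get-sth).
```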

Looks like there is a bit here about how to get started: https://security.stackexchange.com/questions/167366/how-can-...

Most people aren't going to do it, but I think that's not really the point, any more than every user needs to review Linux kernel patches. But I wonder whether there are enough "eyes" on this, and how would we check?
