Tell HN: It looks like even air gapped Bitcoin hardware wallets can phone home
69 points by JonathanBeuys on July 21, 2022 | hide | past | favorite | 52 comments
We had a great discussion here on HN a few days ago about the question of whether it is possible to use Bitcoin in a trustless way, so that you control your Bitcoin yourself and don't have to trust any privileged party not to take it from you:

https://news.ycombinator.com/item?id=32115693

Interestingly, there was a lot of speculation and misinformation. So even on Hacker News, this topic is still only vaguely understood.

But also some very good information came to light.

The biggest bomb dropped in the thread received little attention: the fact that signing a transaction is not deterministic. This means when a hardware wallet is asked to sign a transaction, it can internally do that multiple times and then choose from multiple valid signatures. This means that it can encode data into the signature. For example, it could choose between two signatures with certain properties (say, one results in an even checksum of the bits of the signature and one results in an odd checksum), thereby signaling one bit to the creator of the wallet.
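That grinding attack can be sketched in a few lines. The following is a toy illustration over secp256k1 in pure Python (not constant-time, never for real funds): the "malicious wallet" simply re-signs with fresh random nonces until the low bit of the signature's r value matches the bit it wants to leak. The resulting signature is still perfectly valid and looks ordinary to everyone except the attacker.

```python
import hashlib
import secrets

# secp256k1 domain parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Affine point addition; None represents the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None                       # a == -b
    if a == b:                            # point doubling
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def scalar_mult(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = point_add(acc, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return acc

def sign(priv, msg, k):
    """Textbook ECDSA signature with caller-supplied nonce k."""
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    r = scalar_mult(k, G)[0] % N
    s = (z + r * priv) * pow(k, -1, N) % N
    return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    w = pow(s, -1, N)
    pt = point_add(scalar_mult(z * w % N, G), scalar_mult(r * w % N, pub))
    return pt is not None and pt[0] % N == r

def malicious_sign(priv, msg, leak_bit):
    """Grind nonces until r's low bit encodes leak_bit.
    On average this needs only two signing attempts per leaked bit."""
    while True:
        k = secrets.randbelow(N - 1) + 1
        r, s = sign(priv, msg, k)
        if r & 1 == leak_bit:
            return (r, s)
```

An attacker watching the chain recovers the leaked bits by reading `r & 1` off each signature they recognize as coming from their hardware.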

Every time it signals a bit of your seed phrase home, the security of your coins is cut in half.

Here is an article about the fact that elliptic curve signatures are not deterministic:

https://medium.com/@simonwarta/signature-determinism-for-blockchain-developers-dbd84865a93e

The way I understand it, the wallet can choose from a large number of possible signatures and thereby signal many bits to its creator. In every transaction.

I think a discussion about this should be started. The way I understand it, it makes it completely impossible to use Bitcoin in a trustless way. Even with an air gapped hardware wallet, you are always at the mercy of the wallet manufacturer and the delivery chain that gets the wallet to you. If it gets swapped out on the way to you, you are at the mercy of whoever swapped it out.




Very interesting! The “attacker” wouldn’t know which signatures came from their hardware, but I suppose they could easily scan all transactions to find them.

The only fix I can think of would be to evaluate the hardware signatures using statistical tests to try to pick up any bias. This would be a burden on the user, but at least feasible.


The malicious wallet could encrypt the data it's reporting before splitting it into bits to report it. Then there won't be any pattern to show up on statistical tests.


Yes, the attacker would watch every transaction on the blockchain for their bits. Not hard to do, since there are just a few transactions per second.

Interesting idea with the bias checking. Not sure if it is possible. If it is, it would probably need very clever software: one that bombards the hardware wallet with a big number of seeds and transactions and checks for indications of the seeds having an impact on the signatures.


That’s a neat thought experiment. You’d need the wallet holder to sign a lot of transactions for it to work but maybe that’d be enough of a reduction of crypto integrity for an attack to be successful - especially if the end game is a Coinbase cold wallet or something.


How many transactions are needed depends on how many bits can be sent home per transaction.

A Bitcoin seed phrase is 128 bits. 32 bits can be easily brute forced, which leaves us with 96 bits. If you can send out 10 per transaction, that is only 10 transactions.
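The arithmetic in that estimate, and the grinding cost it implies for the wallet, can be sketched as follows. Note the 10-bits-per-transaction figure is the assumption from the comment above, not a measured property of any real device:

```python
import math

SEED_BITS = 128
BRUTE_FORCEABLE = 32            # attacker can brute-force this remainder offline
bits_to_leak = SEED_BITS - BRUTE_FORCEABLE     # 96 bits must be exfiltrated

bits_per_tx = 10                # assumed covert-channel capacity per signature
txs_needed = math.ceil(bits_to_leak / bits_per_tx)    # 10 transactions

# To force b chosen bits into one signature by re-signing with fresh
# nonces, the wallet needs about 2**b signing attempts on average.
expected_attempts_per_tx = 2 ** bits_per_tx           # ~1024 re-signs

print(txs_needed, expected_attempts_per_tx)
```

The exponential grinding cost is why a stealthy wallet would likely leak only a few bits per transaction, trading exfiltration speed for signing latency.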


Although I assume you'd be a bit confused why your cold wallet is taking its time to generate a hash or whatever with the 10 bits it needs to modify? I don't know what that time would look like but you'd start to question massively arbitrary delays like 10 seconds one time and 30 minutes the next.


You could easily fix the delay so it's always X even if you take less than X to find the right signature and bail if you can't do it in time.


I guess you could hide the bad UX behind a facade of ‘strong security takes time’


If you are willing to go to any lengths to remove the need to trust a manufacturer, you can’t even start with a dice roll as the dice could be weighted against you. But if for the sake of example you trust the dice and its uniform randomness to construct the private key, you should be able to construct a new signature for each new transaction using the dice as a source of uniform randomness, and write your own code on an always-offline computer. You can then generate the signature data needed for the transaction, write it on paper, and carry it to your online computer, where you plug it in and send it to the network.

At no point in this process, short of physical access to the offline computer, is the ECDSA nonce ‘k’ known publicly, so I am not sure what you mean by it cutting your security in half with each transaction. If there are 256 bits in a nonce you would need to generate a lot of signatures for this to be a concern, and if you want to mitigate against this you could cycle through new private keys after every Nth signature.

Much more likely attack has to do with how you generated the random value k.
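The dice step above can be sketched as follows, assuming you trust the dice to be fair. One simple recipe is 99 rolls of a 6-sided die, which yields about 255.9 bits of entropy, interpreted as a base-6 number and reduced into the secp256k1 key range (a careful implementation would reject-and-re-roll instead of reducing, to avoid even the negligible modulo bias):

```python
# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def dice_to_privkey(rolls):
    """Convert fair d6 rolls (values 1-6) into a private key in [1, N-1].
    99 rolls give 99 * log2(6) ~= 255.9 bits of entropy."""
    if len(rolls) < 99 or any(r not in range(1, 7) for r in rolls):
        raise ValueError("need at least 99 rolls of a 6-sided die")
    value = 0
    for r in rolls:
        value = value * 6 + (r - 1)   # interpret rolls as a base-6 number
    return value % (N - 1) + 1        # fold into the valid key range
```

The same physical procedure can supply the per-signature nonce k, which is exactly the point the parent comment makes: if you roll k yourself, the device never gets to choose it.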


This would not be up to specification, although non-compliant bitcoin clients are functionally indistinguishable to the network (unless they reuse `k`, in which case your funds will be quickly drained):

https://github.com/bitcoinbook/bitcoinbook/blob/develop/ch06...

> To avoid this vulnerability, the industry best practice is to not generate k with a random-number generator seeded with entropy, but instead to use a deterministic-random process seeded with the transaction data itself. This ensures that each transaction produces a different k. The industry-standard algorithm for deterministic initialization of k is defined in RFC 6979, published by the Internet Engineering Task Force.

> If you are implementing an algorithm to sign transactions in bitcoin, you must use RFC 6979 or a similarly deterministic-random algorithm to ensure you generate a different k for each transaction.

(https://datatracker.ietf.org/doc/html/rfc6979)
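The HMAC-DRBG construction RFC 6979 specifies is compact enough to transcribe directly. The sketch below follows Section 3.2 of the RFC using SHA-256 over the secp256k1 group order; it is an illustrative transcription for reading along with the RFC, not audited code:

```python
import hashlib
import hmac

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
QLEN = N.bit_length()           # 256 bits
RLEN = (QLEN + 7) // 8          # 32 bytes

def bits2int(b):
    """Leftmost QLEN bits of b, as an integer (RFC 6979 section 2.3.2)."""
    i = int.from_bytes(b, "big")
    excess = len(b) * 8 - QLEN
    return i >> excess if excess > 0 else i

def int2octets(x):
    return x.to_bytes(RLEN, "big")

def bits2octets(b):
    return int2octets(bits2int(b) % N)

def rfc6979_k(privkey, msg):
    """Deterministic ECDSA nonce per RFC 6979 (SHA-256, secp256k1 order)."""
    h1 = hashlib.sha256(msg).digest()
    V = b"\x01" * 32
    K = b"\x00" * 32
    seed = int2octets(privkey) + bits2octets(h1)
    K = hmac.new(K, V + b"\x00" + seed, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b"\x01" + seed, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    while True:
        T = b""
        while len(T) < RLEN:
            V = hmac.new(K, V, hashlib.sha256).digest()
            T += V
        k = bits2int(T)
        if 1 <= k < N:
            return k
        # candidate out of range: update state and retry
        K = hmac.new(K, V + b"\x00", hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
```

Because k depends only on the private key and the message, the same key signing the same transaction always yields the same signature, which is what makes the cross-device comparison discussed later in the thread possible.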


Thanks for the info! So it seems if k is chosen correctly there is no way it can leak data even after an improbably large number of transactions are posted.

But there must still be some source of randomness for k besides just the message data? Otherwise signing the same message twice (like re-connecting to a web3 app via signed message, no transaction involved) would reveal your private key.


You need to include both the message and the private key in the hash. Since signing the same message twice with the same private key produces the same signature, it doesn't leak any additional information.


Got it, makes sense. Thanks.


Writing your own code is not what my post is about.

It is about the fact that even air gapped hardware wallets can phone home.


You've missed the point entirely. Your post is claiming one cannot use BTC in a "trustless" way. The GP is saying "You have to either choose to trust a third party at some point, or go to these extreme lengths that still don't ensure your security unless your opsec is 100% perfect, every time."

OTOH, your post also doesn't prove that the mass-manufactured hardware in question is actually malicious. At best, you've shown a way that it could be. Show me an actually malicious hardware wallet that behaves as you've described, and you'll have made your point. Until then, all you have is speculation and improbabilities.

In other words, charitably interpreted, you've shown that the hardware equivalent of the C compiler in "Reflections on Trusting Trust" could exist, just as the paper itself showed that such a C compiler could exist[0]. That is all. There is no evidence either one exists at all in the wild.

---

[0]: Which, I'll admit, is a pretty cool thought exercise, but has precisely zero real world impact.


> Show me an actually malicious hardware wallet that behaves as you've described, and you'll have made your point.

I'm not the OP and although I agree with you, you may be interested in the corollary for a "stronger" attack than OP defined:

https://bitcointalk.org/index.php?topic=581411.0 and https://github.com/tintinweb/ecdsa-private-key-recovery

Constructing such an airgapped hardware wallet is as trivial as a Raspberry Pi running a patched bitcoin client. In my opinion this is a more realistic construction than that of "Reflections on Trusting Trust".


He just showed that Bitcoin is not mathematically secure.

In fact, someone should prove that "mathematically secure" is a meaningless concept if your devices are not physically secure.


That's also not true. What was shown is that BTC is not secure if you don't follow the correct transaction protocol. That's obvious. So it is also obvious that mathematical security and physical security are different. Here, the attack also involves not following the correct protocol.

Why do you think that was even worth posting? It's not a profound concept to say "if you don't follow the secure transaction protocol, your transaction will not be secure."


What do you mean “phone home”?

Posting a signature on a public ledger does not give information about nonce k which I think is what you are referring to. Each time a new transaction is signed, the k value will be a new random big integer.

If your wallet is able to leak bits in this way it would imply the value k is not chosen uniformly randomly. This is my understanding of ECDSA at least.


"phone home" is not really the right way to describe it. The attack proposed is that a hardware wallet (being a black box) can give the hardware wallet developers information about the private key.

Note that I have very little understanding of blockchain crypto, so I am unable to confirm/deny the information OP gave. However, the way I understand the attack is:

Hardware wallet generates a private key. It keeps this key in internal storage. When a transaction is made, the wallet makes a signature. According to OP, there is a variable here (I'm guessing either multiple private keys, or the ability to choose a signature algorithm, or even embedding a timestamp in the transaction) which the hardware wallet can use to "leak" information.

Let's say the wallet decides to embed a timestamp. Whenever a bit in the private key is 0, the timestamp is even, and when a bit is 1, the timestamp is odd.

After 256 transactions (one bit per transaction, for a 256-bit key), presumably the whole private key is now stored in the blockchain as even/odd timestamps.

This is of course a very slow way of leaking the private key, but does illustrate the problem of having unverified devices be responsible for crypto results.


But this comes down to “trusting a compromised device is bad.” The device could steal your funds from the moment you send your first transaction (for example, it could generate only private keys from a set already known to the attacker).

Assuming the mode of key signing is not compromised and is producing robust uniform randomness (whether it’s a hardware wallet, airgapped device or your own hand-rolled code) it shouldn’t leak anything per transaction that would lead to your private key being more easily discoverable.


The point of such an airgapped device is that you can validate its outputs to make sure that it is not using your private key to do shady stuff.

OP's point is that since signatures are not deterministic, you have no way to inspect device output and make sure that nothing subversive is going on.

Obviously you should not trust compromised devices, but you cannot know if such a device is compromised.


A “wallet” is just a series of math operations over standard cryptographic primitives. An airgapped device can be a calculator and pencil, or software that you have programmed and verified yourself in Python[1] or another language. At some point you need to trust that your tools and environment aren’t compromised; but this is a different argument than suggesting that the ‘k’ nonce in ECDSA is the only thing keeping Bitcoin from being able to be used trustlessly.

[1] http://karpathy.github.io/2021/06/21/blockchain/


My point was different; you can compute a hash (or any other deterministic computation) trustlessly by having multiple independent parties compute it separately and then checking if the result is the same.

You cannot necessarily do the same for nondeterministic computations in general. In this case you can easily verify that the signature is valid, but unless you control the random parameters you cannot rule out that a few bits of secret data have been exfiltrated by one or more parties in the computation.

In the simplest cases you could detect this with statistical methods, but not against slightly more sophisticated attacks.


k is chosen by the wallet. You don't know if it was chosen randomly. A malicious wallet would choose k so that it results in a signal. For example, it could choose k so that the sum of the bits in the signature is odd or even. That would signal one bit.


If your example is OK with a dice roll to generate a random mnemonic, i.e. it is uniformly random enough for your scenario, then you can do the same to generate random parameter k so that the wallet is not doing it for you.

You can also code your own wallet like I mentioned before if you do not trust a hardware wallet manufacturer, but somewhere along the line you will probably need to trust something (like trusting the room you are doing this in is not bugged).


Don't most wallets use deterministic RFC-6979 signatures? (Unfortunately verifying if that algorithm was used requires the private key)

https://datatracker.ietf.org/doc/html/rfc6979


Great point!

That means when you sign a transaction with multiple RFC-6979 compliant wallets, they should return the exact same signature.

So it is possible to tackle the "phone home via the signature" problem.

Wow, just wow! Two threads, 144 comments and we have a complete turn of the situation again!


So for the record:

To sign a transaction without trusting the hardware wallet to not leak your seed phrase or other private data, you have to sign the transaction on multiple RFC-6979 compliant hardware wallets and make sure they return the exact same signature.
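That cross-checking procedure reduces to a simple equality test. A sketch, assuming each device can be driven by a function that takes the identical unsigned transaction and returns its (r, s) signature (the device-callable API here is hypothetical, invented for illustration):

```python
def cross_check(unsigned_tx, devices):
    """Sign the same transaction on several supposedly RFC 6979 compliant
    wallets and verify they all produced a bit-for-bit identical signature.
    `devices` is a list of callables: unsigned_tx -> (r, s)."""
    signatures = [sign_fn(unsigned_tx) for sign_fn in devices]
    if len(set(signatures)) != 1:
        raise RuntimeError("signature mismatch: at least one wallet "
                           "is non-compliant or malicious")
    return signatures[0]
```

The guarantee holds only if the devices come from independent vendors: a single compromised firmware shipped to all of them would agree with itself.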


The hardware wallet I use (Ledger) connects to a PC via USB to perform transactions, wherein it interacts with PC software. This is presumably partly due to the ability to copy and paste keys (as well as other reasons, like guiding you into specific exchanges for buying and selling).

Without a careful audit of wallet firmware and/or PC software, doesn't this alone force you to trust the wallet maker?


That is not an air gapped wallet then and therefore beyond the scope of the discussion at hand.

In fact, I don't even know if there are air gapped wallets on the market.

There are some that claim to be air gapped but then use QR codes to transfer data. Which makes them not air gapped at all. A QR code displayed on one device and scanned on another is no less a communication channel than a cable. If there is malicious helper software on the computer, the wallet could transfer your seed phrase over the image just fine. But that is a different attack vector than the one discussed here.

The discussion here is about whether an actually air gapped wallet could phone home or not. Independent of the question if one exists.


I thought air gapped meant no automated communication channel. After all a human interacting with an AG machine could memorize some of its data then reenter that data elsewhere.

Isn't a QR code passing through an analog medium, and only when read manually? It's also one way: from the wallet to the external reader. Unless I'm misunderstanding something.


Coupled with a “fountain code” and it’ll be even easier to retrieve the seed phrase.

Fountain code: you take your fixed input block and encode it with the fountain code and it generates an endless stream of output blocks. You only need to pick any N blocks from the stream to decode it.

Using a fountain code means you don’t have to retrieve the output blocks in perfect sequence.
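A toy version of that idea: each output block is the XOR of a pseudo-random subset of the input blocks, tagged with the seed that picked the subset, and the receiver decodes any sufficiently large sample by Gaussian elimination over GF(2). This is a minimal sketch of the principle only; real fountain codes such as LT or Raptor codes use a carefully tuned degree distribution to make decoding cheap.

```python
import random

def encode_block(blocks, seed):
    """One fountain-coded output block: XOR of a seed-chosen subset."""
    rng = random.Random(seed)
    mask = rng.getrandbits(len(blocks)) | 1   # bit i set => block i included
    out = 0
    for i in range(len(blocks)):
        if mask >> i & 1:
            out ^= blocks[i]
    return (seed, out)

def decode(received, n_blocks):
    """Recover n_blocks inputs from any set of (seed, value) pairs whose
    subset masks span GF(2)^n; returns None if more blocks are needed."""
    pivots = {}                                # top-bit -> (mask, value)
    for seed, value in received:
        mask = random.Random(seed).getrandbits(n_blocks) | 1
        while mask:                            # reduce against known pivots
            top = mask.bit_length() - 1
            if top not in pivots:
                pivots[top] = (mask, value)
                break
            pmask, pvalue = pivots[top]
            mask ^= pmask
            value ^= pvalue
    if len(pivots) < n_blocks:
        return None
    result = [0] * n_blocks                    # back-substitute, low bits first
    for top in sorted(pivots):
        mask, value = pivots[top]
        for b in range(top):
            if mask >> b & 1:
                value ^= result[b]
        result[top] = value
    return result
```

For the QR-code exfiltration scenario, the wallet would keep emitting fresh output blocks across sessions; the attacker just needs to capture *any* N of them, in any order.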


If a hardware wallet is capable of communicating with the outside world (which is necessary for actually creating transactions), then "air gapped" seems like it'd be inaccurate, no?


This kind of thing happened 8-10 years ago, before bip 39 bip 44 seed phrases

Where signing private keys or the transaction signatures would let people derive the sender’s private key

There are always recurring opportunities to check for


> Interestingly, there was a lot of speculation and misinformation. So even on Hacker News, this topic is still only vaguely understood.

Indeed - and thanks a lot for the link, that was a super interesting read!

> The way I understand it, the wallet can chose from a large number of possible signatures and thereby signal many bits to its creator. In every transaction.

Isn't it possible to make that deterministic by adding some rank-ordering heuristic? (ex: always prefer the smallest numerical signature, or with the most consecutive numbers etc)

Then if 2 wallets from 2 different providers disagree, you would know there's a problem!

In a way, it would be doing like in reproducible software builds: controlling the randomness, except it would be done ex-post (ranking the possible choices and selecting one) instead of ex-ante (setting the clock etc).

If that's impractical, a simpler way may be to require the wallet to make say 100 possible signatures, but then randomize which one is used in another independent step.

Also, from the article:

>> “deterministic signing” means that at least one deterministic way to generate signatures exist. It does not imply that a signer can only generate one valid signature. Due to the nature of the signing algorithms, an observer cannot detect if a standard algorithm or a customization was used.

The core problem seems to be that the hardware device obfuscates the algorithm, which should be less of a problem with software you can compile.

> Even with an air gapped hardware wallet, you are always at the mercy of the wallet manufacturer and the delivery chain that gets the wallet to you.

That's because of the above: you need to control for different things (ex: good source of randomness, correct implementation of the algorithm etc)


> The core problem seems to be that the hardware device obfuscates the algorithm, which should be less of a problem with software you can compile.

Can you trust your compiler, though? What if it changes the algorithm when it compiles your source code?

(See also "Reflections on Trusting Trust" by Ken Thompson: https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html)


See "Fully Countering Trusting Trust through Diverse Double-Compiling" by David Wheeler: https://dwheeler.com/trusting-trust/


This is exactly what is done! This was known about and discussed in 2013: https://bitcointalk.org/index.php?topic=285142

At the end of the day nothing can be achieved if you don't have a trusted device to compare your untrusted device against. Bitcoin's threat model assumes that the client device is trusted.


> Interestingly, there was a lot of speculation and misinformation. So even on Hacker News, this topic is still only vaguely understood.

This is not a new revelation, and a similar vulnerability is discussed in Mastering Bitcoin: https://github.com/bitcoinbook/bitcoinbook/blob/develop/ch06...

> The way I understand it, it makes it completely impossible to use Bitcoin in a trustless way. Even with an air gapped hardware wallet, you are always at the mercy of the wallet manufacturer and the delivery chain that gets the wallet to you. If it gets swapped out on the way to you, you are at the mercy of whoever swapped it out.

Bitcoin's threat model assumes your client software follows specifications. Airgapped hardware wallets aren't designed to eliminate "Reflections on Trusting Trust". Assuming your scenario, the attack can be tested in a similar way to that outlined in "Fully Countering Trusting Trust through Diverse Double-Compiling" (https://dwheeler.com/trusting-trust/). Under a better defined threat model, a construction can be made to effectively prove any data exfiltration scenario to be false.

In conclusion, the issue you're proposing is not an issue with bitcoin, but with computation in general. The word "trustless" in the context of cryptocurrencies has slowly been taken out of the context of "trusted third parties" and evolved into an epistemological rather than pragmatic issue.


The device you own is a third party, the same way that the GCC binary you download is a third party. In both cases you can (probably) analyze the artifact (formal methods for the GCC binary, but I have no idea how to analyze maliciously designed hardware) and detect the attack. In both cases it is practically infeasible.

The only solution seems to be what you propose: you buy an old device from before Bitcoin was invented and code a client there from scratch to validate that your wallet is using the proper nonces.


Good wallets are open source and have anti-tamper and anti-interdiction features so the chance of this happening should be pretty low.


That sounds like you trust the manufacturer of the hardware wallet.

If so: The question was if Bitcoin can be used trustless.

If not: How would you check that the hardware wallet in your hand runs the open source code you trust?


How about if you use your computer to generate the key and sign transactions? Sure, the private key is stealable now, but at least you know what code you're running..


You know what code you're running until you're hit by some browser 0-day drive-by that could steal your key


This is the good ol' Trusted System issue. I've roamed in this area before, both in terms of creating a chain of trust in crypto systems and also at a more philosophical context for voting systems. I'll decompose the issue into more abstract questions:

> Can we ever trust any system?

Yes, to an extent. There is no such thing as a system that can be trusted completely, but we don't need it to be in 99% of cases. One might say "you can trust crypto primitive XYZ. If you use it, it would take 1 billion years to break". That might be true, but side-channel attacks, leaks, statistical biases and whatnot will always be an issue.

To get as close as possible to trust in a system, it needs to be formally verified with proofs. That's the best we can do program/algorithm wise, but even if we trust the program, it cannot trust the system it resides on.

> How can we achieve trust then?

You know how bitcoin is based on a distributed consensus algorithm? It protects the whole system from collapsing due to a bad actor in the system. Even if thousands of people decide to cheat, it won't have any considerable effect.

Let's say you buy a hardware wallet from a reputable vendor - if they decide to cheat, you will be at their mercy. To combat this, you need a way to verify that what it does, it does correctly, and also without side effects.

This is again something that needs to be formally verified. Any deviation from the spec will stand out like a sore thumb. To achieve this, we need to introduce a verifier.

The verifier's job is to check if the hardware wallet did its job, but without being in possession of the private key. There are lots of ways to do this, but a hot topic today is zero knowledge proofs, where the wallet would need to stand up to scrutiny.

The verifier would also need to check the results on the blockchain. Not just that the result generated is correct, but also that it is without side-effects.

> But then we have to trust the verifier!

Yep, and each time we introduce a verifier for the verifier, we will have made the system more trusted. Let's say we have N verifiers whose best interest is that your wallet did the right thing.

In a transaction, it is not only in _your_ best interest that the transaction is correct (and without side-effects), but also the other party's. We can extend this to a small group of people in _any_ transaction. If a small group of verifiers all agree with a certain level of consensus, then we can trust the system beyond a reasonable doubt.

This might sound familiar to those who work with blockchains - and you would be right. It is eerily similar to how it works today. However, the blockchain covers only the cryptographic guarantees. The system needs to be extended to cover formal verification of the system as well.

> Example

Formal verification is a mostly academic exercise for most, so I'll give a small example for those of you who are unfamiliar with it.

Let's say persons A and B make a transaction. Both have super secure hardware wallets and the crypto used is state-of-the-art. It should be secure, right?

We can review the code of the system, but it is hard to identify mistakes. Who knows, maybe there will be a new area of vulnerabilities in a few years, and we never saw it coming.

Within the area of "correctness", we first need to make a formal specification. We create some testable properties about the system that need to hold true, no matter the transaction or who is involved (these are called invariants).

So person A transfers 1 bitcoin to person B, they do so by signing a nonce with a private key. Person A checks the nonce and ensures it is indeed random (test 1). The signature is sent to person B, which then tests if the signature is no different than random data (test 2).

How tests 1 and 2 are performed is incredibly important and very difficult to do, but not impossible.

If test 1 or 2 fails (the data turns out to be non-random), then we can just reject the transaction. We don't know if it was non-random by chance or on purpose, but since it does not live up to our criteria, we will reject it.

This means person A will check what they got from person B and vice versa. However, why not have a bunch of random people participating in the blockchain do the same checks?

If 0.1% of all people in the blockchain check the transaction between persons A and B, and they all have a say in whether the transaction gets rejected, then we can trust the system beyond a reasonable doubt.

And no, this system is not perfect. We don't need it to be. We need it to be good enough that it becomes incredibly hard to cheat. Also note that I've omitted a lot of details for brevity.


If your wallet was compromised during shipment it doesn't need to exfiltrate the seed (this is called a covert channel), unless you're somehow importing fresh private keys. It can just generate all of the keys (and seeds) using randomness that's already known to the attacker.


To prevent that, the discussion started with an approach where you create your seed phrase with dice. Look at step 1:

https://news.ycombinator.com/item?id=32115693

The question was if it is at all possible to use Bitcoin in a trustless way.

Several hard and maybe impossible to overcome challenges have come up in the thread. The fact that elliptic curve signatures are not deterministic seems to be the most fundamental.


ECDSA signatures can be made deterministic by deriving the nonce deterministically, e.g., by hashing the secret key together with a challenge provided by the user.

The problem now is that you need a way for the wallet to prove to the user that this has been done, without leaking the nonce or key. There are a bunch of ways to do this, but the most basic idea is to generate a zero-knowledge proof (probably a zkSNARK) that shows correctness of the signature w.r.t. the public key. The user would not put this zkSNARK onto the network -- it might contain a covert channel of its own! -- they would just check it locally and then dispose of that part. (Of course if the wallet was stolen by a malicious user the wallet might be able to exfiltrate the secret key to this thief through the zkSNARK portion.)

I'm assuming that all other transaction information is chosen by the user, so there's no other latitude for the wallet to cheat.


RFC 6979 is followed by compliant bitcoin wallets. Statistically proving that an unknown, black-box wallet follows it requires a trusted computational device (as would verifying any proof) to perform the same calculation (after all, the private key is loaded via dice). This computation could be done by a human, by hand.

A zkSNARK wouldn't improve on random exfiltration, so both of these proofs are statistical unless inputs are tested exhaustively to invoke the pigeonhole principle for some bounded internal state.


> RFC 6979 is followed by compliant bitcoin wallets.

RFC 6979 uses HMAC, which would make the proofs pretty painful. You would probably want to swap this for some more efficient primitive. It wouldn't matter for compliance with the network.

> Statically proving an unknown and black box wallet requires a trusted computational (as would verifying any proof) device

Yes, but this trusted computing device does not need to be connected to any network. It can be an airgapped computer (or even several airgapped computers sourced from different vendors and retailers), each running different software. There is still "trust" here but the degree of trust can be arbitrarily reduced, provided that you're willing to spend money and effort.

Verifying proofs using only human computation is pretty challenging.

> A zkSNARK wouldn't improve random exfiltration, so both of these proofs are statistical unless inputs are tested exhaustivally to result in the pigeonhole principle for some bounded internal state.

I'm not sure what you're saying. As I said up above in the thread, the zkSNARK is typically randomized and thus could be itself used as a covert channel, even if the signature is now (verifiably) deterministic.

But that's ok: the model I'm working with assumes that the wallet owner is trustworthy, and that any local computers they use to verify the proof are properly (physically) airgapped. The zkSNARK would be verified by the user first -- using (multiple pieces) of airgapped, trusted hardware -- then discarded. Only the final signature would ever reach a computer connected to the Internet.

If your concern is that the wallet owner and the network are both malicious,* then you're dead. (ETA: removed a note about deterministic ZK proofs, all that would let you do is detect that you're dead after the fact and only if you sent them to the network: it doesn't matter in this extreme threat model.)

* By the time your attacker has stolen the Bitcoin wallet and can force it to make signatures, you're basically toast anyway. So this does not seem like a good threat model to waste time on.


How about something more fundamental.

Consider bitcoin wallet M, which erases the private keys after n transactions. Either you trust that your wallet does not do this, or you make copies and trust location/security/something else.

Trustless is a misnomer and was originally applied in the context of third parties, not computational devices in your own control running code you can theoretically verify.



