Ledger Live: A mobile companion app for Ledger hardware wallets (github.com/ledgerhq)
75 points by frozeus on Jan 29, 2019 | 56 comments



Good to see Bluetooth in the new hardware. Lack of real mobile support for hardware wallets has been by far the biggest pain point for me. (With regard to the best possible physical security, I would have preferred NFC over Bluetooth, but the latter is alright for everyday use.)


AFAIK, given the Ledger's on-screen confirmation process, it really doesn't matter what the medium of communication between the host machine and the device is. It could as well be sent via unencrypted http routed through China, then Russia and then an NSA server, all while your host device is heavily backdoored.

The supply chain attacks/evil maid attacks are a much bigger issue, as pointed out in other comments.


>>It could as well be sent via unencrypted http routed through China, then Russia and then an NSA server all while your host device is heavily backdoored.

As much as we'd like to believe this is the case, a MITM (network or host) allows for replacement of the destination address that shows up on your screen. A redirected/malicious destination address showing up on your host screen will still pass Ledger's on-screen confirmation, but that will not prevent you from sending your cryptoassets to the "wrong" endpoint.

I think this is much more of a reality if your host device is "heavily backdoored" than with unencrypted HTTP, but it could happen in either case. Another attack vector is BGP & DNS hijacking, which happened to MyEtherWallet in April 2018[1][2].

[1] https://qz.com/1261540/mew-ethereum-hack-the-internets-infra...

[2] https://doublepulsar.com/hijack-of-amazons-internet-domain-s...


Yes but the Ledger has its own screen that shows what you're actually signing. If you verify that, you're good.

There are a couple caveats. First, the Ledger Nano's screen is too small to display the entire address at once, so an attacker who knows where you might send money could generate an address that appears the same in the characters that are displayed. (The Ledger Blue shows the full address but is being discontinued.)

Secondly, if you're on Ethereum and using a multisig contract, the destination address is just the contract and the ETH amount is zero. The function parameters that define your actual request aren't shown; the Ledger just displays a warning that they exist.

I've suggested to Ledger that they come up with a way to import the contract's JSON ABI and display the actual parameters on the device, which is what desktop clients do. They thought it was doable but I haven't seen any sign of it happening.


Just for the record, the entire address for the transaction is displayed on the screen, it just scrolls side to side. Generating another address that's similar enough to be confused easily would be prohibitively difficult at best.


Yes but the middle scrolls by pretty quickly. I once saw an article that did the math on the difficulty of making an address that matched on the easy-to-read parts of the address at beginning and end, and it amounted to less than a day's work on a modern PC. That's likely to work against most users.


It doesn't scroll quickly, it's very easy to read and verify, even for my beat up eyes.

I'd be very interested to see that math, because it's unlikely that's accurate. You can get a few characters at the beginning of the address relatively easily -- the rest is in heat death of the universe territory.
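For what it's worth, here's a back-of-envelope sketch of the search cost, assuming the attacker only needs to match the characters a user actually compares, and treating address characters as uniform hex (the attempt-rate remarks in the comments are rough guesses, not benchmarks):

```python
# Expected number of brute-force attempts to find a lookalike address that
# matches k attacker-chosen hex characters (uniform over 16 values each).
def expected_attempts(matched_hex_chars: int) -> int:
    return 16 ** matched_hex_chars

# First 4 + last 4 characters -- roughly what a hurried user verifies:
print(f"{expected_attempts(8):.3e}")   # 4.295e+09: plausible on commodity hardware

# Half of a 40-character Ethereum address:
print(f"{expected_attempts(20):.3e}")  # 1.209e+24: out of reach
```

So both comments can be right: whether this is "less than a day's work" or "heat death of the universe territory" depends entirely on how many characters the victim actually checks.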


It is already implemented for ERC20 token transfers which are probably >90% of smart contract invocations.


Not for multisig contracts though, which people tend to use for extra security on large amounts of funds.


> but will not prevent you from sending your cryptoassets to the "wrong" endpoint.

Can you explain how that's a problem? The only thing you can do with a signed transaction is to either broadcast it or not.

MyEtherWallet suffers from the equivalent of a supply chain attack here, where the JavaScript gets replaced with malicious code.


I'm treating network & host attacks as a supply chain attack where the authentic/intended destination address is replaced by the attacker with the attacker's address. Hence my air quotes around the word "wrong".

As with MEW, as far as Ledger is concerned, a correct transaction is being signed and will in turn be broadcast. But the final outputs don't actually end up where the sender intended them to be sent because their host or network was compromised.


Yeah this is plain wrong. Sorry but you simply don't understand how transactions work in cryptocurrencies.

You cannot edit a transaction (for example by changing its outputs) after it has been signed. That's how cryptographic signatures work, in general.
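As a toy illustration of that property, using Python's stdlib HMAC as a stand-in for the ECDSA signatures real wallets use (the key and transaction strings here are made up):

```python
import hashlib
import hmac

# HMAC stands in for ECDSA: any change to the signed bytes invalidates
# the signature. Key, addresses, and amounts are hypothetical.
key = b"device-private-key"
tx = b"send 1.0 BTC to bc1q-original-address"
sig = hmac.new(key, tx, hashlib.sha256).digest()

# An attacker intercepts the signed transaction and swaps the output:
tampered = b"send 1.0 BTC to bc1q-attacker-address"

print(hmac.compare_digest(sig, hmac.new(key, tx, hashlib.sha256).digest()))        # True
print(hmac.compare_digest(sig, hmac.new(key, tampered, hashlib.sha256).digest()))  # False
```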

And the only way to cause an incorrect transaction to be signed on Ledger/Trezor is by tricking the user, which requires malicious code inside the device.


We're in "violent agreement". You seem to be missing my point.

I completely agree with you that you cannot edit a transaction after it's been signed. I was trying to point out above that the user can be tricked into signing a transaction they think is correct but that, in actuality, doesn't go to the intended destination, because we assume the attacker controls what they are seeing.

Ledger signs transactions with a destination address. If the user can't tell, or is tricked into believing, that a malicious address is the authentic one, everything will look kosher. This is similar to the level of sophistication and control required in the MEW incident. It's unlikely, but possible.


You're not listening. This is the case that otoburb is describing:

1. Install bad browser extension.

2. Visit coinbase.com.

3. Copy deposit address.

4. Paste into Ledger software.

5. Initiate transfer of funds to Coinbase.

6. Scrutinize transaction details.

7. Approve.

Your funds are now gone, and you'll never see them again.

All cryptography worked as designed in this case. "That's how cryptographic signatures work, in general."

Substitute scenarios with receive addresses triple-notarized with medallion guarantees imprinted in blood if you wish for steps 1-3. It doesn't matter. The deputy is confused.


I'll reply here to the three comments: I admit that you're right, guys. I have to concede my point about backdoors. If you modify the user's view of the Coinbase DOM then no security in the Ledger/Trezor is going to help with that.

But that's also independent of the discussion around whether NFC or Bluetooth is the better choice for host <-> device comms.

PS. I love the term "violent agreement".


Unless the destination address is changed before it gets to the ledger. Surely you’ve seen the copy/paste malware that changes addresses?

https://techcrunch.com/2018/07/03/new-malware-highjacks-your...


Technically, copy-paste mutations alone are not enough, since you confirm addresses on the screen. But, for example, DOM edits from a malicious extension are enough.
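The substitution itself is trivial; a sketch of what such malware does to any clipboard or page text it can touch (both addresses here are made-up placeholders):

```python
import re

# A clipboard hook or malicious extension rewriting the DOM only needs to
# pattern-match anything that looks like an Ethereum address and replace it.
ATTACKER_ADDR = "0x" + "ab" * 20  # hypothetical attacker address

def substitute(text: str) -> str:
    return re.sub(r"0x[0-9a-fA-F]{40}", ATTACKER_ADDR, text)

page = "Deposit to 0x" + "12" * 20
print(substitute(page))  # the user now copies -- and confirms -- the attacker's address
```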


> AFAIK given the Ledger's on-screen confirmation process it really doesn't matter what's the medium of communication

Except for that time when the attacker finds a way to bypass the on-screen confirmation process and make the changes hidden to you, of course, which has happened like only a thousand times on Windows.


To do that they would have to hack the Ledger's secure chip, which is quite a bit harder than hacking Windows.


I thought this was about ledger, the CLI accounting tool. Ledger + hardware + wallet? Wow!

https://ledger-cli.org/


Me too.


My greatest concern with the Ledger hardware wallet has been making sure the device hasn't been tampered with during shipment. Fortunately they provide a script to check hardware integrity; it's probably a good idea to run it before doing anything with the device.

https://support.ledger.com/hc/en-us/articles/115005321449


Note that this doesn't actually do anything to attest the safety of the device, as was pointed out in a recent CCC talk. It attempts to confirm that the code running on another processor is legitimate by asking it to read its entire flash back to an "HSM" chip, which is obviously simple to deceive by reading back something that is not the processor's flash. I personally think that this is deceptive and counterproductive.


Remote attestation implementations via HSMs will always remain subject to a confused-deputy problem, but they're still leaps and bounds better than pure software solutions. Any threat you can describe that involves a facade hardware UI is much easier to implement in software, meaning that attackers are more likely to invest resources in software attacks (like spraying bad Electrum servers into the pool) than hardware attacks (like modifying hardware wallets and setting up a storefront on eBay).


I'm still very much in favour of managing my own private keys in an encrypted database on my desktop/laptop. I feel like everyone is blindly trusting these devices, having been scared into it by horror stories of malware lifting keys off machines.

I will eat my paper wallets if my meagre holdings are stolen from me like that.


I sort of agree with this.

One caveat is that laptops are commonly compromised and your security would depend on nobody stealing keys/passwords needed to access your database/password manager/whatever.

Having a hardware token with a paper backup makes this harder. Having a lot of tokens creates a huge incentive for getting hacked.

If you have non-trivial amounts of tokens under your control, you need to consider all the points of failure. Laptops get compromised in all sorts of ways and can be equipped with key loggers or worse. Unless you are a security expert, defending against a determined & skilled hacker is super hard. Most of us never get our setups audited by an expert, and I'm afraid that a bog-standard OS X/Linux setup will probably only get you so far, even if you turn on disk encryption and do all the rest of the things you are supposed to do.

So the advantage of a token is that it does not depend on your laptop being uncompromised, and that it is a third-party solution that can be scrutinized and audited. That being said, I'm not a big fan of having a proprietary software/hardware package and would prefer to trade in my Ledger for a properly OSS platform. There are a few of these platforms, but it is early days, and I'd need it to support Stellar. As far as I know, Ledger is the only thing working with that; I own a few of those for this reason.

IMHO there's a big market opportunity for creating a secure, easy-to-use hardware token for Yubikey/WebAuthn sign-ins, managing blockchain wallets, and doing 2FA. Not impossible, but making open hardware/software platforms commercially viable is apparently still a big challenge. I'd buy several if the price and feature set were right, assuming enough auditing/vetting has happened by people that are smarter than me, of course.


Founder of StellarGuard here, so sort of in the same realm. Just wondering what additional features you'd want out of such a hardware token. Would it need to do the actual signing of transactions on the device for you to feel secure with it, or would generic U2F (Yubikey) + signing on the software be sufficient, assuming we could do it securely?


Yes, that is the point. Basically you have to work under the assumption that your laptop may be compromised. So anything that exposes private keys to it is going to end up leaking those keys. With the ledger you approve transactions on the token. You configure it from a paper backup or by letting it generate a private key for you and you use it to sign transactions.


Signing on the hardware is pretty much the only way to safeguard the keys. PCs and phones have an attack surface much too large to properly secure.


My Ledger Nano supports FIDO U2F and manages blockchain wallets, so I presume you're just asking for an open version of that?

Maybe the ease of use isn't quite where it could be.


Yes, I'd feel more comfortable with something that is end to end auditable without any secret/proprietary stuff.

Also the Nano does some of that but doesn't work with e.g. Firefox. I have to use Chrome to be able to do anything with it.


I use Firefox nightly, and flick the security.webauth.u2f configuration setting.


Another interesting alternative is multisignature wallets, like the new Gnosis Safe: https://safe.gnosis.io/

An on-chain contract holds your funds and requires some number of signatures to authorize transactions (for personal use usually 2, i.e. one from your desktop computer and one from your phone). That way at least you know two separate devices would have to be compromised to cause loss of funds. This also allows for interesting key recovery strategies like having a third paper wallet that is also authorized. You could use that as a backup key that would allow authorization of a new key if your phone were stolen, etc.
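The approval rule is simple in principle; a minimal sketch of the 2-of-3 setup described above (the key names are hypothetical, and a real contract like Gnosis Safe verifies signatures on-chain rather than comparing strings):

```python
# Toy model of an M-of-N multisig policy: a transaction executes only when
# at least THRESHOLD distinct authorized keys have approved it.
OWNERS = {"desktop-key", "phone-key", "paper-backup-key"}  # hypothetical keys
THRESHOLD = 2

def can_execute(approvals: set) -> bool:
    return len(approvals & OWNERS) >= THRESHOLD

print(can_execute({"desktop-key"}))                # False -- one compromised device is not enough
print(can_execute({"desktop-key", "phone-key"}))   # True
```

This is also why the paper backup works for recovery: it counts as one of the three keys, so losing the phone still leaves a valid pair.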


I've used both a "swarm of crypto clients" and Ledger/Trezor, and the thing about the latter is that not only is it more secure (argue with that if you want), it's also more convenient. To a regular user those devices are kind of the Netflix/Spotify of crypto.


Another possible take on this would be Parity Signer (https://www.parity.io/signer/), which is an opensource app turning your (full-disk encrypted and put into an airplane mode) old phone into an improvised hardware wallet.

Very narrow attack surface (no USB or Bluetooth, only QR codes are used for communication), and more safety against the supply chain attacks make it a viable alternative for some particular threat models.


These should be safer than that. The keys are generated on a secure processor and should never be extractable from the device.


A "secure", closed-source processor. Given that the Ledger bootloader had a rather nasty and blatantly obvious bug in it that allowed you to bypass all of the write protection and boot any firmware, I'd give them nearly zero chance of having got anything else right.


Let he who has never written software that had a bug throw the first stone...


Hi, I've written bootloaders before. I know that blacklisting addresses doesn't work, as many memory locations will be mapped multiple times. Strangely, most people that have worked with microcontrollers are aware of this, except for the people who wrote the closed-source bootloader at Ledger.


Well, since you're the expert, why haven't you written a better one? There's a huge market for this...


I don't think my threat model encompasses the safety of other people's money.


> Let he who has never written software that had a bug throw the first object Object...


Ugh, I haven't even moved my existing crypto from cold storage to the original Ledger Nano sitting on my desk for a year.


I'm not able to find the md5 signature for Ledger Live: would you please help me?


A mobile app managing cryptocurrency, but they don't sign all their releases, and don't even react to the situation quickly. Apparently they have no clue what they are doing in terms of security.

They also use React Native, and npm is notorious for being exploited to distribute malware. I had a brief look at the package.json. It seems to be a typical JavaScript project where developers tend to pull in one more dependency for a simple feature rather than implementing it themselves. So if one hobbyist project owner's key is compromised, or they hand over their orphaned project to somebody malicious to manage on npm, then they are screwed. Although the same could apply to other languages that have package management, npm is the worst among them. Do they ensure the dependencies are verified before building the binary? And always use the last known good versions when building a new binary? I really doubt it.


The repo has a yarn.lock file, which contains the hashes of all of the dependencies, so yarn verifies the dependencies match that at least.


wow, the md5 does NOT exist: https://github.com/LedgerHQ/ledger-live-desktop/issues/942 How is this possible?


Shouldn't everyone use SHA-2, or at least BLAKE2 (same software performance as MD5), by now?
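For reference, publishing and checking a SHA-256 release digest is only a few lines; the filename below is a placeholder, not a real Ledger artifact:

```python
import hashlib

# Stream a file through SHA-256 in chunks so large release binaries
# don't have to fit in memory.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "..."  # digest published by the vendor over a trusted channel
# assert sha256_of("ledger-live-desktop.AppImage") == expected
```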


I wonder whether these apps cache sensitive information like account numbers and balances. I think they shouldn't.


Yes they do - you don't need to plug in your device to see your current balance, even after power cycling the host device.

That being said - there's virtually no privacy in most cryptocurrencies. Your Bitcoin/Ethereum/Ripple/... balance is fully public.


The way they operate, I expect that there are HTTP logs of all activity. They do not operate in a way which is conducive to privacy.


Ledger is garbage and their software falls over if too many people are using it. Why do I know this? Their customer service literally told me that I can't access my wallet because too many people are trying to use the software. As a silly consequence I was not able to update the firmware.

I switched to a more open-source solution for the hardware, and to Mycelium for mobile.

Ledger can suck a fat one.


Is the hardware device required?


[flagged]


Thanks for the constructive comment and plug of your product.


I'm not unwriter


who cares?



