
The most important thing to understand about this attack is that it's premised on attackers physically flipping bits, usually in targeted memory locations, during particular phases of computation. Even for attackers with physical access to a device, it's not an easy attack to pull off. It's for the most part not a line of attacks relevant to client/server and cloud software (your cloud provider has less dramatic ways of destroying your security).

The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to). The crypto in a smart card needs to work even when the custodian of the card is zapping it with lasers.

The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.

Another pretty important thing to know is that while the paper is interesting, it's not unexpected that DFA (differential fault analysis) attacks would compromise EdDSA; they compromise pretty much everything, don't they?

Finally: I think you really don't care what your crypto library has to say about DFA attacks on its EdDSA implementation. All the mainstream implementations will be "vulnerable" to the attack (EdDSA is unworkably slow with the most obvious workaround of checking the signatures you yourself generate). If you need protection against fault attacks in your threat model, you are way past the point where library safeguards will help you. This is probably the point at which you can't responsibly ship products without getting an actual crypto consultancy to sign off on it.

These are relatively straightforward attacks, both conceptually (the paper has a pretty useful breakdown of "nine" attacks on EdDSA that are basically the same fundamental attack applied to different phases of the algorithm) and to simulate in software, so if you really want to get your head around it, boot up a Sage shell, get EdDSA working, and just play with fake "bit flips".
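To make that concrete, here's a toy, self-contained Python sketch of the core algebra (not real Ed25519: the tiny group, hash layout, and names are all illustrative). The point is that if a fault gets you two signatures that share the same nonce over different hashed inputs, the private key falls out:

    # Toy deterministic Schnorr-style signature over a tiny prime-order
    # group, plus a simulated fault. Illustrative only, NOT real EdDSA.
    import hashlib

    p, q, g = 2039, 1019, 2          # g has prime order q modulo p

    def H(*parts):
        data = b"|".join(parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def sign(sk, msg, fault=False):
        r = H(b"nonce", sk.to_bytes(2, "big"), msg)  # deterministic nonce
        R = pow(g, r, p)
        if fault:
            msg = bytes([msg[0] ^ 1]) + msg[1:]      # fault: flip one bit
        h = H(R.to_bytes(2, "big"), msg)             # challenge hash
        return R, (r + h * sk) % q, h

    sk = 777                                   # toy private key
    msg = b"pay mallory 100"
    R1, s1, h1 = sign(sk, msg)                 # honest signature
    R2, s2, h2 = sign(sk, msg, fault=True)     # fault after nonce derivation
    assert R1 == R2                            # same nonce r was reused
    # s1 - s2 = (h1 - h2) * sk, so divide out (h1 - h2) mod q:
    recovered = (s1 - s2) * pow(h1 - h2, -1, q) % q
    print("recovered key:", recovered, "| matches:", recovered == sk)

The "nine" attacks in the paper are essentially this same move aimed at different intermediate values in the algorithm.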




I have some customers with a stronger threat model than 'physical access owns the secrets' for reasonably mundane 'IoT' products. IP protection / late-point differentiation / any as-a-service model requirements can bump you out of that category.

Fault attacks themselves require exploiting the structure of whatever primitive you're targeting, so whilst 'everything is susceptible' isn't false, susceptibility really does vary quite a bit, and attack strategies can be quite complex. Some fault attacks on block ciphers are effectively augmented differential cryptanalysis.

In general you're not going to be able to take a library and make any claims about protection against invasive or non-invasive SCA without first knowing the hardware it will run on, so I agree that it's not usually meaningful to add countermeasures against the aforementioned types of attacker.

IMO applied crypto is at its hardest when you're close to bare metal and you have a physical attacker.


I'm not surprised that there are IOT products that need to be secure against their custodians, but the likelihood that the median product in that set is otherwise so secure that fault attacks are a serious concern seems pretty low.

It's a sort of intuitive Venn diagram situation here, where we're capturing:

1. The subset of hardware products that are "IOT" devices

2. The subset of those that are security-sensitive

3. The subset of those that must be secure against their custodians (an extremely narrowing step here)

4. The subset of those products that rely on cryptographic primitives in order to operate.

To put this in perspective: most automotive and industrial secure hardware elements don't make it all the way down to (4), and are compromised by plain ol' bog-standard software security attacks.


Yes, I don't really disagree with the general thrust of that, particularly once you start considering the technical barriers to executing an attack.

My main point was that the threat model includes the manufacturer needing to maintain ownership of secrets more often than you'd expect. I'd suggest that your step 3 isn't as narrowing as you'd think.

Whether meeting that model warrants building in resistance to classes of SCA is an almost independent question, and I would agree that it's not very likely in most cases. The consumer-facing industries in which you do see that (e.g. set-top box, printing) aren't really IoT ones, either.


You seem to have read the second paper, which was published on the same topic shortly after this work was presented at FDTC. Both papers, however, are about the same kind of fault attacks. And since you mention playing around: the first paper and the blog post here also provide some Python code for you to play with bit flips.

Regarding the IOT setup: I know firsthand that Nest products try very hard to be secure even in the case of physical possession, especially on the firmware signature verification side. But they rely on some fun post-quantum scheme, not on EdDSA.

It's true that in the end it all boils down to your threat model, but I bet smartcard producers are glad someone researched this before they got on the EdDSA train.


> but I bet smartcard producers are glad someone researched this before they got on the EdDSA train.

If the only thing controlling whether they implement something is if they can find negative results on google, I don't think they'll be glad for very long.


Can you elaborate on the Nest part? How do fault attacks apply to firmware verification?


Nest products are using hash-based signatures?


Tangentially, we as consumers often lament that some keys in the devices we own remain hidden from us. In such cases we possess the devices, and we are the "adversaries" from the point of view of the companies that produced them.

The article shows that maybe it's still somewhat easier to get them than we'd often assume?


Sure, but the whole security model behind smartcards is premised on this problem, so it's not surprising. Fault attacks aren't new, and the kinds of side channels that smart cards and similar devices need to protect against are also a lot more interesting than just timing. There's a whole subfield of cryptography that focuses on this very narrow problem space.

I'm just saying: it's probably not all that easy to pull a key off a smartcard.

Remember that in most cases, the security model baked into the device is also that it need only be secure enough to resist economical attacks --- meaning, if the value of the secret you obtain from a smart card is less than the value of the equipment and expertise you'd need to carry out the attack, the smart card has probably "won" even if it can be compromised.


As you say, it is very much not (at least for now) a trivial attack even with unconstrained physical access, nor is it an unknown problem. That said, I think you might be a little too blasé about the ever-increasing importance of maintaining at least some security in the face of physical attacks.

>The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to).

Smart cards (and more specialized HSMs), along with dedicated on-CPU/SoC crypto black boxes, are also utilized by users to provide an additional layer of redundancy, or at least compromise-detectability, for keys and the authenticity of operations in the face of attackers who gain physical access. In many countries, national IDs, financial tokens and the like depend on this. Obviously modern smartphones make use of this too.

>The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.

This is definitely at least somewhat wrong. Particularly as IOT gets deployed more widely, devices will commonly sit in settings that aren't "public" per se but certainly don't enjoy high levels of physical security against anyone who might pass by either. Private individuals, educational institutions, businesses and organizations often have guests they want to grant temporary, limited resource access, but whom they do not want to let leverage that into persistent/unlimited resource access.

How much this matters depends on whether this sort of attack could ever be made easy and fast enough, and it can be at least partially hardened against for relatively low cost. But I wouldn't discount the importance of manufacturers at least keeping in mind whether that threat scenario applies to their customers or not. This shouldn't be a big deal and is very much not headline territory, but as part of the shift to ever more ubiquitous networked devices, we may have to rethink from time to time how much of a defense physical access restrictions really are.


> EdDSA is unworkably slow with the most obvious workaround of checking the signatures you yourself generate

Checking a signature costs roughly twice as much as generating one, so signing plus self-verification is about a 3X cost over a bare signature. Yes, that's significant, but it's not "unworkably slow". It's just a tradeoff of performance vs. security. And for some applications, like bitcoin wallets, it might well be a price worth paying.
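For illustration, here's a minimal sketch of that countermeasure using PyNaCl (assumes the pynacl package; checked_sign is my name, not a library API):

    # Verify-after-sign sketch (assumes `pip install pynacl`). Signing
    # plus self-verification costs roughly 3X a bare signature.
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    def checked_sign(sk: SigningKey, message: bytes) -> bytes:
        signed = sk.sign(message)    # Ed25519 sign
        try:
            # Re-verify our own output before releasing it; a signature
            # corrupted by an injected fault should fail here.
            sk.verify_key.verify(signed)
        except BadSignatureError:
            raise RuntimeError("fault suspected during signing")
        return signed.signature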


Are you going to check with a different implementation? Where does this rabbit hole end?

This is an integrated hardware/software problem, right?

I think my comment is being read to suggest that fault attacks are generally unimportant, which is not my point. I think if there's a subtextual reaction to anything in my comment, it's the notion that crypto library developers should be implementing and marketing fault attack countermeasures. If you need defenses against fault attacks, as a general class of attacks, you need something no crypto library is likely to provide you.


The security rabbit hole never ends. I did not mean to imply that an internal check is a panacea, only that it might be a cost that some not-entirely-unreasonable person might be willing to pay in certain applications. I was just pushing back against the glib dismissal of this countermeasure as "unworkably slow."


It's not a good defense, because a double fault injection attack could bypass the signature verification as well.

The only software defense that actually works reliably is preventing duplicate r-values from being generated in the first place. This can be accomplished by augmenting the fully deterministic r with some additional randomness, producing a synthetic r which is still guaranteed to be unique-per-message even if we have an RNG failure. See:

https://moderncrypto.org/mail-archive/curves/2017/000925.htm...

Note this is quite different from the k-value in ECDSA, which is not synthesized from the message contents at all.
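For the curious, here's a rough Python sketch of the synthetic-nonce idea (the hash layout and names are illustrative, not the exact construction from the linked proposal):

    # Synthetic ("hedged") nonce sketch: mix fresh randomness into the
    # otherwise-deterministic nonce hash. Illustrative layout only.
    import hashlib
    import os

    L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

    def synthetic_nonce(nonce_key: bytes, message: bytes) -> int:
        z = os.urandom(32)  # fresh per-signature randomness
        digest = hashlib.sha512(nonce_key + z + message).digest()
        # If the RNG fails and z repeats, this degrades to deterministic
        # behavior (r still unique per message); if the RNG works, two
        # signings of the same message no longer share r, so the
        # two-signature fault comparison yields nothing.
        return int.from_bytes(digest, "little") % L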


libsodium can be compiled with ED25519_NONDETERMINISTIC defined in order to compute r as recommended in the generalized EdDSA proposal.

libhydrogen has always worked that way.


Yep, that looks like a good approach.


> If you need protection against fault attacks in your threat model, you are way past the point where library safeguards will help you. This is probably the point at which you can't responsibly ship products without getting an actual crypto consultancy to sign off on it.

I wish there was a way to make comment text bold, because this definitely seems like it needs to be emphasized.



