The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to). The crypto in a smart card needs to work even when the custodian of the card is zapping it with lasers.
The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.
Another pretty important thing to know is that while the paper is interesting, it's not unexpected that DFA attacks would compromise EdDSA; they compromise pretty much everything, don't they?
Finally: I think you really don't care what your crypto library has to say about DFA attacks on its EdDSA implementation. All the mainstream implementations will be "vulnerable" to the attack (EdDSA is unworkably slow with the most obvious workaround of checking the signatures you yourself generate). If you need protection against fault attacks in your threat model, you are way past the point where library safeguards will help you. This is probably the point at which you can't responsibly ship a product without getting an actual crypto consultancy to sign off on it.
These are relatively straightforward attacks, both conceptually (the paper has a pretty useful breakdown of "nine" attacks on EdDSA that are basically the same fundamental attack applied to different phases of the algorithm) and to simulate in software, so if you really want to get your head around it, boot up a Sage shell, get EdDSA working, and just play with fake "bit flips".
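If you want a feel for it without even setting up the curve arithmetic, here's a minimal sketch in plain Python that simulates the core of the attack, with modular arithmetic standing in for the curve. All the names are illustrative, and a real attack works on curve points, but the key-recovery algebra is the same:

    import hashlib

    L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order
    B = 9                                                # stand-in "base point"

    def H(*parts):
        digest = hashlib.sha512(b"".join(parts)).digest()
        return int.from_bytes(digest, "little") % L

    seed = b"long-term secret seed"
    a = H(seed, b"key")      # private scalar
    A = (a * B) % L          # stand-in "public key"

    def sign(message, fault=False):
        r = H(seed, message)                 # deterministic nonce
        R = (r * B) % L
        if fault:
            R ^= 1 << 13                     # simulated fault: flip a bit after r is fixed
        h = H(R.to_bytes(32, "little"), A.to_bytes(32, "little"), message)
        return h, (r + h * a) % L            # S = r + h*a (mod L)

    msg = b"sign me twice"
    h1, S1 = sign(msg)
    h2, S2 = sign(msg, fault=True)           # same r, different h
    # S1 - S2 = (h1 - h2) * a (mod L), so the private scalar falls out:
    recovered = (S1 - S2) * pow(h1 - h2, -1, L) % L
    assert recovered == a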
Fault attacks themselves require exploiting the structure of whatever primitive you're targeting, so in that sense whilst 'everything is susceptible' isn't false, it really does vary quite a bit, and attack strategies can be really quite complex. Some fault attacks on block ciphers are effectively augmented differential cryptanalysis.
In general you can't take a library and make any claims about protection against invasive or non-invasive SCA without first knowing the hardware it will run on, so I agree that it's not usually meaningful to add countermeasures against those classes of attacker.
IMO applied crypto is at its hardest when you're close to bare metal and you have a physical attacker.
It's a sort of intuitive Venn diagram situation here, where we're capturing:
1. The subset of hardware products that are "IOT" devices
2. The subset of those that are security-sensitive
3. The subset of those that must be secure against their custodians (an extremely narrowing step here)
4. The subset of those products that rely on cryptographic primitives in order to operate.
To put this in perspective: most automotive and industrial secure hardware elements don't make it all the way down to (4), and are compromised by plain ol' bog standard software security attacks.
My main point was that the threat model includes the manufacturer needing to maintain ownership of secrets more often than you'd expect. I'd suggest that your step 3 isn't as narrowing as you think.
Whether meeting that model warrants building in resistance to classes of SCA is an almost independent question, and I would agree that it's not very likely in most cases. The consumer-facing industries in which you do see that (e.g. set-top box, printing) aren't really IoT ones, either.
Regarding the IOT setup, I know first-hand that Nest products are trying very hard to be secure even in the case of physical possession, especially on the firmware signature verification part. But they rely on some fun post-quantum scheme, not on EdDSA.
It's true that in the end it all boils down to your threat model, but I bet smartcard producers are glad someone researched this before they got on the EdDSA train.
If the only thing controlling whether they implement something is whether they can find negative results on Google, I don't think they'll be glad for very long.
The article shows that maybe it's still somewhat easier to get them than we'd often assume?
I'm just saying: it's probably not all that easy to pull a key off a smartcard.
Remember that in most cases, the security model baked into the device is also that it need only be secure enough to resist economical attacks --- meaning, if the value of the secret you obtain from a smart card is less than the value of the equipment and expertise you'd need to carry out the attack, the smart card has probably "won" even if it can be compromised.
>The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to).
Smart cards (and more specialized HSMs), along with dedicated on-CPU/SoC crypto black boxes, are also used to provide an additional layer of redundancy, or at least compromise-detectability, for keys and for the authenticity of operations in the face of attackers who gain physical access. In many countries, national IDs, financial tokens and the like depend on this. Obviously modern smartphones make use of this too.
>The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.
This is definitely at least somewhat wrong. Particularly as IOT gets deployed more widely, devices will commonly sit in settings that aren't "public" per se but don't enjoy high levels of physical security against anyone who might pass by, either. Private individuals, educational institutions, businesses, and organizations often have guests to whom they want to grant temporary, limited resource access, but whom they don't want to let leverage that into persistent or unlimited access.
How much this matters depends on whether this sort of attack could ever be made easy and fast enough, and it can be at least partially hardened against at relatively low cost. But I wouldn't discount the importance of manufacturers at least keeping in mind whether that threat scenario applies to their customers. This shouldn't be a big deal, and it's very much not headline territory, but as networked inputs become ever more ubiquitous, we may have to rethink from time to time how much of a defense physical access restrictions really are.
Checking a signature is twice as expensive as generating one. Yes, a 3X cost is significant, but it's not "unworkably slow". It's just a tradeoff of performance vs security. And for some applications, like bitcoin wallets, it might well be a price worth paying.
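For concreteness, the countermeasure under discussion is just this; a minimal sketch using PyNaCl (assuming it's available; any Ed25519 library with separate sign and verify calls looks the same):

    import nacl.signing

    signing_key = nacl.signing.SigningKey.generate()
    signed = signing_key.sign(b"attack at dawn")
    # Re-check our own signature before releasing it: a fault that corrupted
    # the signing computation will, with high probability, fail verification.
    signing_key.verify_key.verify(signed)  # raises BadSignatureError on mismatch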
This is an integrated hardware/software problem, right?
I think my comment is being read to suggest that fault attacks are generally unimportant, which is not my point. I think if there's a subtextual reaction to anything in my comment, it's the notion that crypto library developers should be implementing and marketing fault attack countermeasures. If you need defenses against fault attacks, as a general class of attacks, you need something no crypto library is likely to provide you.
The only software defense that actually works reliably is preventing duplicate r-values from being generated in the first place. This can be accomplished by augmenting the fully deterministic r with some additional randomness, producing a synthetic r which is still guaranteed to be unique-per-message even if we have an RNG failure. See:
Note this is quite different from the k-value in ECDSA, which is not synthesized from the message contents at all.
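A minimal sketch of the idea (illustrative only, not any particular library's API; the real proposals are the ones referenced above):

    import hashlib, os

    L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

    def synthetic_r(nonce_key: bytes, message: bytes) -> int:
        noise = os.urandom(32)  # fresh randomness; even if the RNG fails and
                                # repeats, r stays unique per message because
                                # the message is hashed in below
        digest = hashlib.sha512(nonce_key + noise + message).digest()
        return int.from_bytes(digest, "little") % L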
libhydrogen always worked that way.
I wish there was a way to make comment text bold, because this definitely seems like it needs to be emphasized.
(I still think letting the user control `r` is too dangerous.)
Monocypher can "only" produce 7000 signatures per second on a single thread on my laptop (i5 Skylake Intel). With the verification (about twice the cost of signing), I would waste a whopping 0.3 ms on each signature, reducing the throughput to about 2000 signatures per second.
Unless you're running a signing intensive application on the scale of Let's Encrypt, you won't really care about the slow-down. If you don't fully trust your hardware however, faults might be a real problem.
2. If you can induce faults on Let's Encrypt's signing infrastructure, you have better attacks to deploy.
3. Let's Encrypt doesn't use elliptic curves in the first place.
If the algorithm is immune to faults, the worst thing that can happen is that you produce an invalid signature, which will then be rejected. This turns the security concern into a more mundane performance concern.
The truly paranoid would never believe any algorithm is immune to faults.
Even if full mathematical code analysis shows it to be so, can you prove that the proving analysis algorithm itself is immune to faults? It is potentially faulty turtles all the way down!
The proof checker can be run several times to account for faults. Then you don't need to run it at all (it's proven). And you can use another proof checker (or that proof checker itself) to prove that the proof checker works. Etc.
Now there's the problem of the compiler, which gives an upper bound on the confidence you can have: if there's a bug in it, you're screwed even with a correct algorithm.
> Even if full mathematical code analysis shows it to be so
Come to think of it, that may not be possible for lots and lots of algorithms, whose mathematical security is only conjectured to begin with. Thanks for pointing that out.
This is what the seL4 people are doing.
The remaining part is verifying that your machine model is correct. Not trivial for a CPU and memory controller as complex as x86. (Even though only a subset of operations is used.)
Doing this also allows you to prove resistance against fault attacks and timing attacks.
A better question would be whether, in a theoretical sense, fault-free algorithms even exist. There are similar questions, like whether one-way functions exist, where we have lots of evidence to suggest that the answer is "yes" but we have yet to prove it.
But you can prove many things. Many programs we care about can be proven correct (for whatever definition of "correct" we manage to formally specify), and those we can't prove correct can probably be transformed into provable versions, assuming they're short enough.
Edit: Removed a silly question on why the order has to be 'r' before the second hash. Spoiler: the signature has two parts, and at least one of them needs to be fixed to prevent a forger from just modifying both. That needs to be the 'r' part, so it has to come before the S.
The way EdDSA works, one needs to hash the message twice, in a way that makes incremental hashing impossible (you need to complete the first pass before you begin the second):
    r = Hash(secret_key | message)
    R = Base_point × r
    H = Hash(R | public_key | message)
    // use H to compute the signature
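To make that data dependency concrete, here's a runnable sketch (modular arithmetic stands in for the curve, and all the values are illustrative): the second hash can't start until R is known, and R depends on a hash of the entire message, so the message has to be buffered or traversed twice.

    import hashlib

    L = 2**252 + 27742317777372353535851937790883648493
    secret_key = b"\x42" * 32                  # illustrative values
    public_key = b"\x13" * 32
    message_chunks = [b"part one, ", b"part two, ", b"part three"]

    first = hashlib.sha512()
    first.update(secret_key)
    for chunk in message_chunks:               # pass 1 over the message
        first.update(chunk)
    r = int.from_bytes(first.digest(), "little") % L
    R = (r * 9) % L                            # stand-in for Base_point × r

    second = hashlib.sha512()
    second.update(R.to_bytes(32, "little") + public_key)
    for chunk in message_chunks:               # pass 2: must re-read the message
        second.update(chunk)
    H = int.from_bytes(second.digest(), "little") % L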
We could for instance change the second hash to this:
    H = Hash(public_key | message | R)
A more involved tweak would hash the message first:
    r = Hash(message | secret_key)
    R = Base_point × r
    H = Hash(message | R | public_key)
Now the question: would those modifications work? Are they still immune to collision attacks? How about length extension attacks?
Subsidiary question: if my modifications are secure, how come DJB et al. didn't think of them? DJB in particular is reputed to be quite performance-sensitive.
Look for posts around June 2015, e.g., with Subject:s like "Summary of the poll: Elliptic Curves - signature scheme: friendliness to low memory implementations" or subjects containing "IUF" (Init, Update, Final) here:
(The archives' main page is https://www.ietf.org/mail-archive/web/cfrg/current/maillist....)
I think the idea with the ordering of the inputs to the hash functions was to always put the most secret data first, to make things harder for the attacker. It was probably expected that the "message" would always be small, as it must be with, for instance, RSA.
It could be, and that's what I currently recommend. But then isn't there a problem with collision attacks? Or does one still have to perform a pre-image attack on the hashes of previous messages?
Assuming I've correctly skimmed the paper, it looks like it relies on your Ed25519 impl incorrectly reusing the same 'r' value when signing messages - and then deliberately flipping bits whilst it does so, in order to gradually recover the private key. Our usage of Ed25519 doesn't reuse 'r' values, so I think Olm is okay.
The fault attack acts after that deterministic generation, so the fact that Olm doesn't reuse r values explicitly doesn't matter. You're not _erroneously_ reusing r: the attack involves you signing the same message twice, with a fault occurring during the second signing.
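To spell out why that leaks the key, in the notation of the EdDSA pseudocode earlier in the thread (this is the standard differential-fault algebra, not quoted from the paper):

    // both signings of the same message derive the same deterministic r
    S  = r + H  × secret_scalar  (mod L),  where H  = Hash(R  | public_key | message)
    S' = r + H' × secret_scalar  (mod L),  where H' = Hash(R' | public_key | message)
    // the fault corrupts R into R', so H' ≠ H; subtracting cancels r:
    secret_scalar = (S - S') / (H - H')  (mod L)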
You care about these kinds of attacks if you build or rely on a device that uses ECDSA for security and is in the physical custody of adversaries. It's not an attack that is especially relevant for client/server software or cloud cryptography (there are less theatrical ways your cloud provider can destroy your security).
I imagine something like Rowhammer could be used…
1) Failed to test ED-209 in demo conditions (a medium-sized meeting room).
2) Failed to anticipate the failure of the sound recognition algorithm (gun on carpet instead of gun on concrete).
3) Failed to implement software failsafes for the demo: a layer on the motor system so it never touches anything (especially moving things such as humans), a weapons system that doesn't actually fire…
4) Failed to implement a proper shut down button, such as a physical plug one could pull from behind.
5) Loaded the damn thing with live ammo.