I'm torn.† On the one hand, it sends a "jury is out" message on modern password hashing in general. On the other hand, developers already handwave about "bcrypt not having gotten enough cryptographic review", as if someone was ever going to publish a cryptanalytic result showing bcrypt to be worse than SHA1.
I'd have liked the jury to have been back on this last decade, but I'll settle for it being in next year.
By the way, the construction you're looking for is scrypt.
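For anyone who hasn't used it: a minimal sketch of scrypt-based password storage using Python's `hashlib.scrypt`. The helper names are mine, and the cost parameters (n=2**14, r=8, p=1) are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    """Hypothetical helper: derive a key from a password with scrypt."""
    salt = os.urandom(16)
    # n (CPU/memory cost), r (block size), p (parallelism): illustrative values
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
    return salt, key

def verify_password(password: bytes, salt: bytes, key: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)
```

You store the salt alongside the derived key and re-derive on login; the memory-hard cost parameters are the whole point versus plain fast hashes.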
One advantage of a jury being out is that someday said jury can come back in and return a verdict.
> as if someone was ever going to publish a cryptanalytic result showing bcrypt to be worse than SHA1.
SHA-1 has a formal specification, an RFC, a reference implementation, implementation guidance, and comprehensive published test vectors. To date, bcrypt lacks some of those things.
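Those published test vectors are exactly what lets any implementation check itself. A one-line sanity check against the classic "abc" vector from the SHA-1 specification, sketched in Python:

```python
import hashlib

# "abc" is the first test vector published with the SHA-1 specification
digest = hashlib.sha1(b"abc").hexdigest()
assert digest == "a9993e364706816aba3e25717850c26c9cd0d89d"
```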
Give me a break. That's an application implementation flaw, and one no standard could have prevented. It's like saying that insufficient cryptanalysis is responsible for the OpenSSL RCEs.
Again: I'm not really torn. It's a good thing you're all doing this.
Then I'm sure you'll have no trouble* finding similar vulnerabilities introduced by implementation flaws of any NIST (or even IETF) defined algorithms.
I'll bet that "Specially crafted AES and RC4 packets" does not actually refer to the implementation of a NIST standard algorithm, just its usage in a context outside of such a standard.
Granted, these were just candidates and not actually NIST defined algorithms, but the point stands that algorithms can be fine while standard implementations have bugs.
Those were round 1 submissions, not even close to being "standard implementations". That supports my point: the standardization process works to minimize bugs in implementations of the standard.
+ /* The state and buffer size are driven by SHA256, not by SHA224. */
memcpy(context->state, sha224_initial_hash_value,
- (size_t)(SHA224_DIGEST_LENGTH));
- memset(context->buffer, 0, (size_t)(SHA224_BLOCK_LENGTH));
+ (size_t)(SHA256_DIGEST_LENGTH));
+ memset(context->buffer, 0, (size_t)(SHA256_BLOCK_LENGTH));
The NetBSD code was confused about which algorithm it was implementing. This can hardly be used to generalize about vulnerabilities in specific NIST approved algorithms.
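To make the confusion concrete: SHA-224 is defined to run on SHA-256's internals, so the shared context holds eight 32-bit state words (32 bytes) even though SHA-224's digest is only 28 bytes. A sketch of the arithmetic (constant names follow the patch; the state layout is my reading of how these implementations are typically structured, not a copy of NetBSD's source):

```python
# Lengths in bytes, per FIPS 180-2
SHA224_DIGEST_LENGTH = 28
SHA256_DIGEST_LENGTH = 32

# SHA-224's internal state is SHA-256's: eight 32-bit words
STATE_WORDS = 8
state_size = STATE_WORDS * 4

assert state_size == SHA256_DIGEST_LENGTH
# Sizing the copy by the SHA-224 digest length misses exactly 4 bytes of
# state -- the same 4 bytes the advisory talks about
assert SHA256_DIGEST_LENGTH - SHA224_DIGEST_LENGTH == 4
```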
You're just wrong about this point, Marsh. You are very smart and often right, but not invariably so.
So even if we allow this example as meeting my test of "similar vulnerabilities introduced by implementation flaws of any NIST (or even IETF) defined algorithms," I can still claim that this bug, which existed in NetBSD for only three months in the spring of 2009, is the exception that proves the rule.
EDIT: Sorry, I'm looking at the wrong patch. It appears that at the time indicated in the advisory, a bunch of other stuff was added to the source tree.
Either I'm missing something obvious or there's a bit of misdirection going on. For example, it says: "The overflow occurs at the time the hash init function is called (e.g. SHA256_Init). The init functions then pass the wrong size for the context as an argument to the memset function, which then overwrites 4 bytes of the memory buffer located after the one holding the context." and "fixed: NetBSD-4 branch: Jul 22, 2009".
† I'm not really torn.