

Password Hashing Competition - dchest
https://password-hashing.net/

======
josephscott
"The poor state of passwords protection in web services: passwords are too
often either stored in clear (these are the services that send you your
password by email after hitting "I forgot my password"), or just hashed with a
cryptographic hash function (like MD5 or SHA-1), which exposes users'
passwords to efficient brute force cracking methods."

If you can't get people to use current crypto hashing techniques like bcrypt
or scrypt, why would coming up with another one help?

Of course, it would be nice to see something with the work factor features of
bcrypt that was more specifically resistant to GPU-optimized crackers.
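For contrast, here is a minimal Python sketch (stdlib `hashlib` only; the password, salt handling, and scrypt parameters are illustrative assumptions, not a recommendation) of the gap between a bare cryptographic hash and a tunable, memory-hard KDF:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# A bare cryptographic hash: a GPU rig can try billions of guesses
# per second against leaked hashes, salted or not.
fast = hashlib.sha1(salt + password).hexdigest()

# scrypt: the cost parameters n, r, p force every guess to pay in
# both CPU time and RAM (roughly n * r * 128 bytes, ~16 MiB here),
# which is what blunts GPU/ASIC crackers.
slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

Raising `n` is the knob comparable to bcrypt's work factor; the memory requirement is the part bcrypt lacks.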

~~~
tptacek
I'm torn.† On the one hand, it sends a "jury is out" message on modern
password hashing in general. On the other hand, developers already handwave
about "bcrypt not having gotten enough cryptographic review", as if someone
was ever going to publish a cryptanalytic result showing bcrypt to be worse
than SHA1.

I'd have liked the jury to have been back on this last decade, but I'll settle
for it being in next year.

By the way, the construction you're looking for is scrypt.

† _I'm not really torn._

~~~
marshray
> _"jury is out"_

One advantage of a jury being out is that someday said jury can come back in
and return a verdict.

> _as if someone was ever going to publish a cryptanalytic result showing
> bcrypt to be worse than SHA1._

SHA-1 has a formal specification, an RFC, a reference implementation,
implementation guidance, and comprehensive test vectors published. To date,
bcrypt is lacking some of those things.

And yes, bcrypt has gotten pwned worse than SHA-1 as a result:
<http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2483>

~~~
tptacek
Give me a break. That's an application implementation flaw, and one no
standard could have prevented. It's like saying that insufficient
cryptanalysis is responsible for the OpenSSL RCEs.

Again: I'm not really torn. It's a good thing you're all doing this.

The jury is not really out on bcrypt, though.

~~~
marshray
_one no standard could have prevented_

Then I'm sure you'll have no trouble* finding similar vulnerabilities
introduced by implementation flaws of any NIST (or even IETF) defined
algorithms.

*Note: sarcasm.

~~~
jgeralnik
How about this:
[http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-...](http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-1)?

Granted, these were just candidates and not actually NIST defined algorithms,
but the point stands that algorithms can be fine while standard
implementations have bugs.

~~~
marshray
Those were round 1 submissions, not even close to being "standard
implementations". That proves my point: the standardization process works to
minimize bugs in implementations of the standard.

~~~
tptacek
NIST standardization sure didn't help SHA2:

[http://mail-index.netbsd.org/tech-security/2009/07/28/msg000...](http://mail-index.netbsd.org/tech-security/2009/07/28/msg000250.html)

You're just wrong about this point, Marsh. You are very smart and often right,
but not invariably so.

~~~
marshray
OK, so here's the patch to the bug in question:

[http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash...](http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash/sha2/sha2.c.diff?r1=1.17&r2=1.18&only_with_tag=MAIN)

    
    
    +	/* The state and buffer size are driven by SHA256, not by SHA224. */
     	memcpy(context->state, sha224_initial_hash_value,
    -	    (size_t)(SHA224_DIGEST_LENGTH));
    -	memset(context->buffer, 0, (size_t)(SHA224_BLOCK_LENGTH));
    +	    (size_t)(SHA256_DIGEST_LENGTH));
    +	memset(context->buffer, 0, (size_t)(SHA256_BLOCK_LENGTH));
    

The NetBSD code was confused about _which_ algorithm it was implementing. This
can hardly be used to generalize about vulnerabilities in specific NIST
approved algorithms.
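The confusion is understandable, though: SHA-224 is defined as SHA-256 with a different initial state and a truncated output, so its internal state and block size really are those of SHA-256. A quick Python check illustrates this:

```python
import hashlib

sha224 = hashlib.sha224()
sha256 = hashlib.sha256()

# Same compression function over the same 64-byte blocks and the
# same-size internal state; only the output is truncated, 32 -> 28.
assert sha224.block_size == sha256.block_size == 64
assert (sha224.digest_size, sha256.digest_size) == (28, 32)
```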

 _You're just wrong about this point, Marsh. You are very smart and often
right, but not invariably so._

So even if we allow this example as meeting my test of _similar
vulnerabilities introduced by implementation flaws of any NIST (or even IETF)
defined algorithms_, I can still claim that this bug, which existed in NetBSD
for only three months in the spring of 2009, is the exception that proves the
rule.

It doesn't compare at all in scope to
<http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2483> in terms of the
length of time, the number of systems affected, or the number of credentials
created with weak crypto.

EDIT: Sorry, I'm looking at the wrong patch. It appears that at the time
indicated in the advisory, a bunch of other stuff was added to the source
tree.

Either I'm missing something obvious or there's a bit of misdirection going
on. For example, it says: _"The overflow occurs at the time the hash init
function is called (e.g. SHA256_Init). The init functions then pass the wrong
size for the context as an argument to the memset function which then
overwrites 4 bytes of the memory buffer located after the one holding the
context."_ and _"fixed: NetBSD-4 branch: Jul 22, 2009"_.

But the diffs of 2009-07-22
[http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/sys/sha2.h.diff?...](http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/sys/sha2.h.diff?r1=1.1.4.1&r2=1.1.4.2&only_with_tag=netbsd-4)
[http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash...](http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash/sha2/sha2.c.diff?r1=1.2.4.4&r2=1.2.4.5&only_with_tag=netbsd-4)
don't seem to affect the memset call in SHA256_Init() at all or the size of
the structure.

------
Zenst
"• The poor state of passwords protection in web services: passwords are too
often either stored in clear (these are the services that send you your
password by email after hitting "I forgot my password"), or just hashed with a
cryptographic hash function (like MD5 or SHA-1), which exposes users'
passwords to efficient brute force cracking methods."

That is a huge issue that I wish were made a criminal offence.

But when a user forgets their password, issuing a new one comes down to
verifying that the user is who they say they are without a password, which
falls back on the previous details held for that user.

Be it a phone number to SMS a reset code, or an email address to issue a
reset code or new password, it is a hard area that will always have a
weakness.

I do wonder if the postal system could step in by offering a service where you
could walk in, show your passport or supporting ID, and have them issue you
with a reset code which you could use on the site.

It would certainly add to the services they offer, and whilst it would not be
instant or as easy for many, it would be a far more robust way to address the
issue: a service that, until something better comes along, would work in
everybody's best interests.

Maybe a good opportunity for a startup to pursue.

~~~
cheald
Hashing data with SHA1 should be a criminal offense? Seriously?

~~~
harryh
It already is. The FCC regulates this sort of thing. If you are a large
business with lots of user data and you don't take appropriate steps to
protect it they can and will fine you.

~~~
tptacek
The FCC has never fined anyone for SHA1-hashing passwords.

~~~
harryh
I'll take your word for it, but I bet that's because once you get big enough
for the FCC to care, you probably know enough to store a password properly.

They've certainly fined other companies for other transgressions of a similar
nature.

~~~
tptacek
For instance...?

~~~
harryh
Path just had to pay an $800k fine (though I think that was for COPPA
violations?).

Twitter & Google have also had very prominent investigations & settlements in
the past couple of years.

~~~
dfc
I think tptacek might be using the Socratic method, or else he is just messing
with you, but I can't watch it any longer.

The regulatory function you are talking about falls under the purview of the
_FTC_ , not the FCC.

 _"Path Social Networking App Settles FTC Charges it Deceived Consumers and
Improperly Collected Personal Information from Users' Mobile Address Books

Company also Will Pay $800,000 for Allegedly Collecting Kids' Personal
Information without their Parents’ Consent"_[1]

[1] <http://www.ftc.gov/opa/2013/02/path.shtm>

~~~
tptacek
For the record: I genuinely do not know the answer to the question I just
asked, although obviously I'm skeptical about this underlying claim.

~~~
dfc
I am equally skeptical about the SHA-1 claim. I am just commenting that, as a
general rule, consumer protection falls under the purview of the FTC. There
are certainly corner cases where industry-specific regulations introduce
additional oversight when it comes to customer information protection, e.g.
OCC/OTS/NCUA and GLBA[1], or HHS and HIPAA[2].

[1] [http://www.occ.gov/news-issuances/news-releases/2005/nr-ia-2...](http://www.occ.gov/news-issuances/news-releases/2005/nr-ia-2005-35.html)

[2]
[http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachno...](http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/index.html)

Side Bar:

If you are bored and want to get your wonk on search regulations.gov for
sha-1[a]. It looks like most of the proposed rules mentioning SHA-1 come from
HHS, FRA (Federal Railroad Administration) and the NIGC (National Indian
Gaming Commission). However there is one reference to SHA-1 in an FCC rule
about the Commercial Mobile Alert System[b]:

    
    
      CMAC-digest
         Optional element. The code representing
         the digital digest (``hash'') computed
         from the resource file. Calculated using
         the Secure Hash Algorithm (SHA-1) per
         [FIPS 180-2]. Alert Gateway uses the CAP
         digest element to populate this element.
    
    

[a]
[http://www.regulations.gov/#!searchResults;rpp=25;po=0;s=sha...](http://www.regulations.gov/#!searchResults;rpp=25;po=0;s=sha-1;fp=true;ns=true)

[b]
[http://www.regulations.gov/#!documentDetail;D=FCC-2008-0002-...](http://www.regulations.gov/#!documentDetail;D=FCC-2008-0002-0001)

------
bradleyjg
Given that bcrypt and scrypt have had very little attention from
cryptographers, it would be better to start there than to add even more
schemes, unless there is some reason to believe that the winner of this
contest will attract serious analysis. If that's true, then they should
probably allow the established players to enter.

Diversity isn't necessarily a good thing in security. Look at all the attacks
on SSL/TLS that rely on being able to negotiate a cipher.

~~~
marshray
Actually, TLS cipher suite negotiation has been a lifesaver for mitigating
recent attacks.
[http://www.phonefactor.com/resources/CipherSuiteMitigationFo...](http://www.phonefactor.com/resources/CipherSuiteMitigationForBeast.pdf)

Unless you're referring to SSLv2 or home-rolled nonstandard downgrade logic
invented by browser vendors, modern TLS is pretty good at preventing
ciphersuite downgrade attacks.

If a single standard had been specified it would have been some minor variant
of AES-CBC-HMAC<SHA>. We know of several ways to attack that combination now.

~~~
tptacek
Ah-ah! We know several ways to attack poor implementations of that combination
now. AES-CBC with HMAC is still sound.

~~~
marshray
Where "poor implementations" = "just about all of them, ever, except those
patched in the last 24 months".

~~~
tptacek
Are you arguing that AES-CBC + HMAC is NOT a sound construction, or are you
just augmenting my comment with more information?

~~~
marshray
I'm not sure.

I think I'm saying that a "formally proven sound" construction that is full of
poorly understood traps for the implementer on actual machines is not always
the best choice in practice.

~~~
tptacek
Absurdly intelligent and well-informed crypto people have screwed up CTR mode
before.

~~~
marshray
Yep. I think we can go further than that and say that it's been historically
screwed up by protocol designers and/or implementers more often than not (with
predictable IVs, timing and length oracles, MAC-then-encrypt, etc.).
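As a minimal sketch of avoiding two of those traps, the MAC-then-encrypt ordering and the tag-comparison timing oracle, here is the encrypt-then-MAC pattern in Python. The cipher step is elided: `ciphertext` stands in for actual AES-CBC output (IV included), and the key handling and 32-byte tag length are assumptions of this sketch, not a vetted design.

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)
TAG_LEN = 32  # HMAC-SHA256 output length

def protect(ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: authenticate the ciphertext itself, so the
    # receiver rejects forgeries before ever touching CBC padding,
    # closing off the padding-oracle class of attacks.
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(blob: bytes) -> bytes:
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    # compare_digest runs in constant time, avoiding the timing
    # oracle that a naive tag == expected comparison can leak.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return ciphertext
```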

------
Zenst
No panic; just read this if you use IE:
<http://en.wikipedia.org/wiki/Server_Name_Indication#Support>

Real browsers like Opera, Chrome, Firefox... all good.

My bad; I've edited my panic away.

Thanks dchest for being more awake than I.

~~~
stephengillie
The certificate looks good to my browser. _shrug_

It's a StartCom intermediate certificate, good from 2/5/13 to 2/7/14, and the
connection is encrypted with RC4_128. So says Chrome...

~~~
Zenst
Yes, you're spot on. I just looked into it more: it is TLS 1.0 on Chrome, and
I had disabled that in my IE browser, so that may be the issue and explains
why. Though as it's a different certificate, my initial thoughts are still
that there's some anti-IE script of some form: IE gets a cert for a different
site, though from the same issuing authority.

No panic.

------
jstalin
While I think it's cool to have a competition to help come up with better,
"standardized" methods of hashing, PBKDF2 is pretty good already. The problem
seems to be a lack of adoption, not a lack of good options.

~~~
dchest
PBKDF2 is not pretty good: the difference between GPU/hardware-accelerated
implementations of PBKDF2 with common hashes and software implementations for
CPUs is huge. Plus you can compute it in parallel cheaply. See scrypt paper
for details: <http://www.tarsnap.com/scrypt.html>

~~~
jstalin
I can't find in that paper how many iterations they put PBKDF2 through and
which hashing algorithm they used. Any implementation of PBKDF2 I do uses a
very high number of iterations (50,000+) and I use whirlpool, which is amongst
the slowest of algorithms.

~~~
dchest
From the table: 100 ms corresponds to PBKDF2-HMAC-SHA256 with an iteration
count of 86,000; 5 s to PBKDF2-HMAC-SHA256 with an iteration count of
4,300,000 (see page 13).

If you're using the slowest possible algorithm/biggest number of iterations,
you're still vulnerable to parallel attacks: imagine a huge number of cheap
chips bruteforcing passwords. Scrypt tries to maximize the cost of such chips
by requiring large amounts of RAM for computation. Again, this is explained in
the paper.
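A sketch of the comparison using the stdlib `hashlib` (the password and salt are placeholders, the scrypt parameters are illustrative, and 86,000 iterations is the 100 ms figure quoted above):

```python
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)

# PBKDF2: cost is pure iterated hashing, so each guess is cheap to
# replicate across thousands of parallel GPU/ASIC cores.
dk_pbkdf2 = hashlib.pbkdf2_hmac("sha256", password, salt, 86_000)

# scrypt: each guess must also reserve roughly n * r * 128 bytes of
# RAM (~16 MiB here), so massively parallel hardware pays for memory
# as well as compute.
dk_scrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                           dklen=32)
```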

~~~
jstalin
Thanks.

