
Cryptographic Right Answers - sweis
https://gist.github.com/tqbf/be58d2d39690c3b366ad
======
alextgordon
> If you can get away with it: use SHA-512/256, which truncates its output and
> sidesteps length extension attacks.

Note that "SHA-512/256" is a separate algorithm, not to be confused with
"SHA-512 or SHA-256", which are two other, less secure algorithms.

~~~
erkl
Unless you know something about SHA-512 that I don't, calling it less secure
than SHA-512/256 seems like a mistake.

~~~
nialo
SHA-512 allows a length extension attack that SHA-512/256 does not. Some
links:

[http://en.wikipedia.org/wiki/Length_extension_attack](http://en.wikipedia.org/wiki/Length_extension_attack)
[http://cryptopals.com/sets/4/challenges/29/](http://cryptopals.com/sets/4/challenges/29/)
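
For a concrete sense of what the links above describe, here is a minimal Python sketch (the key and message are made up): a tag computed as H(key || message) over a Merkle-Damgard hash like SHA-256 or SHA-512 can be extended by an attacker who knows only the tag and the key length, whereas HMAC (or a truncated hash like SHA-512/256) does not expose the resumable internal state.

```python
import hashlib
import hmac

key = b"secret-key"                  # illustrative values
msg = b"amount=100&to=alice"

# Vulnerable pattern: H(key || msg). The full SHA-256/SHA-512 digest *is*
# the hash's internal state, so an attacker can resume hashing from it
# and forge a valid tag for msg + padding + suffix.
naive_tag = hashlib.sha256(key + msg).hexdigest()

# Safe pattern: HMAC wraps the hash twice, so the output reveals no
# resumable state. SHA-512/256 achieves the same effect by truncation.
safe_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
```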

~~~
erkl
First off, thanks for the reply.

I have to say it feels a bit weird to deduct points (so to speak) from a
highly regarded cryptographic hash function because it doesn't outright
prevent one particular, broken MAC generation scheme, but I guess the argument
has some merit.

While I think it's harmless to say that SHA-512/256 is stronger than SHA-256
(as they otherwise provide the same theoretical level of security), I still
think it's wrong to claim that SHA-512/256 is also stronger than SHA-512,
which has a vastly greater theoretical security margin.

Just use a MAC algorithm that isn't terrible.

~~~
tptacek
Susceptibility to length extension would also have disqualified SHA2-512 from
SHA-3, where that property was a requirement, so it seems like the
cryptographic community has come to a conclusion about this.

The "security margin" of a full SHA2-512 digest, over its truncated
SHA2-512/256 alternative, is not meaningful in practice.

If you want to use full-width SHA2-512, go ahead. SHA2-512/256 is safer.

------
JoshTriplett
> Password handling (Was: scrypt or PBKDF2): In order of preference, use
> scrypt, bcrypt, and then if nothing else is available PBKDF2.

What's the reason to prefer scrypt over bcrypt? And, what's the reason to
prefer both over PBKDF2? (Asking because I see quite a few bits of software
that use PBKDF2.)

> Asymmetric signatures (Was: Use RSASSA-PSS with SHA256 then MGF1+SHA256
> yabble babble): Use Nacl, Ed25519, or RFC6979.

Could you make a recommendation for or against using GPG, since that's by far
the most common approach for asymmetric signatures? (Obviously such a
recommendation would need to point at specific key/algorithm choices to use or
avoid.)

> Client-server application security (Was: ship RSA keys and do custom RSA
> protocol) Use TLS.

Using which of the many TLS implementations?

~~~
cperciva
_What's the reason to prefer scrypt over bcrypt?_

scrypt is asymptotically much more expensive to crack.

 _And, what's the reason to prefer both over PBKDF2?_

scrypt is asymptotically much more expensive to crack.

bcrypt is asymptotically marginally more expensive to crack than PBKDF2, but
not enough to matter; I'm guessing tptacek's point here is that bcrypt has
more library support available (despite PBKDF2 being the _de jure_ standard).
I wouldn't say there's a strong argument in either direction.
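
To make cperciva's point concrete: scrypt's cost parameter buys both CPU time and memory, while PBKDF2's buys only iterations. A hedged sketch using Python's stdlib (available since 3.6 when built against OpenSSL; the password and parameters are illustrative, not tuned recommendations):

```python
import hashlib
import os

salt = os.urandom(16)
password = b"correct horse battery staple"

# scrypt: n controls CPU *and* memory cost (here 128 * r * n = ~16 MiB),
# which is what makes large GPU/ASIC cracking farms expensive.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# PBKDF2: iterations cost CPU only, and parallelize cheaply for attackers.
key2 = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
```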

 _Could you make a recommendation for or against using GPG, since that's by
far the most common approach for asymmetric signatures?_

Avoid if possible. The code was written by a colony of drunk monkeys, in an
era before anyone understood the basics of modern cryptography; I'm really not
sure which is worse between gnupg and OpenSSL. Of course, GPG is the standard
for encrypted email, just like SSL/TLS is the standard for web sites, so you
may have no choice...

(FYI: [https://vuxml.freebsd.org/freebsd/pkg-gnupg.html](https://vuxml.freebsd.org/freebsd/pkg-gnupg.html) )

~~~
JoshTriplett
> Avoid if possible. The code was written by a colony of drunk monkeys, in an
> era before anyone understood the basics of modern cryptography; I'm really
> not sure which is worse between gnupg and OpenSSL. Of course, GPG is the
> standard for encrypted email, just like SSL/TLS is the standard for web
> sites, so you may have no choice...

Are there any viable FOSS implementations of the OpenPGP standard other than
GPG? Detached GPG signatures seem to be the most common mechanism to validate
software distribution and similar.

~~~
wolf550e
Has anyone checked [https://github.com/google/end-to-end](https://github.com/google/end-to-end) ? I want to believe google did
their due diligence.

------
zokier
Considering the recommendations for NaCl, what is the current status of it?
There is NaCl proper, whose webpage links to a 2011 version. Then there is
TweetNaCl, which seems more recent with a 2014 release. And finally there is
libsodium, which is not from DJB. What is the recommended version to use? I'd
guess TweetNaCl because it is most recent, but idk.

On a slightly related note, I just noticed that there is also µNaCL for
embedded use that seems really cool.

~~~
tptacek
The current state of it is that Nacl (pronounced: "turnips") circa 2011 is
just fine, Tweetnacl is just fine, and if you have packaging concerns, you can
use libsodium --- but stick to the constructions that are also in
Nacl/Tweetnacl, because libsodium took things a little further than I think
they should have.

~~~
kentonv
Could you elaborate on what you think shouldn't have been included in
libsodium? I'm very interested in this, as someone who uses libsodium.

~~~
tptacek
Anything that libsodium does, or allows clients to do, that Nacl doesn't allow
you to do.

Nacl isn't an open source project or helpmate for application programmers;
it's an academic effort to design the best misuse-resistant crypto interface
for programmers. I like libsodium, but it is not that.

------
some_furry
> If your threat model is criminals, prefer DH-1024 to sketchy curve
> libraries. If your threat model is governments, prefer sketchy curve
> libraries to DH-1024. But come on, find a way to one of the previous
> recommendations.

I got a serious chuckle out of this. :)

------
Xorlev
Pardon my ignorance, but is the NaCl referred to in the gist this NaCl?
[http://nacl.cr.yp.to/](http://nacl.cr.yp.to/) Or does it refer to libsodium
here?
[https://github.com/jedisct1/libsodium](https://github.com/jedisct1/libsodium)

I realize that the library is probably available via my package manager, but
it'd be nice if the install page
([http://nacl.cr.yp.to/install.html](http://nacl.cr.yp.to/install.html))
linked to an archive over HTTPS and had some signatures to compare hosted
elsewhere.

~~~
some_furry
Yes to the first question. Libsodium is a fork of NaCl and it even says so in
the description.

------
chaitanya
AES-GCM allows the caller to supply additional authenticated data (AAD) --
data that is only authenticated but not encrypted. However NaCl's
authenticated encryption mode doesn't seem to provide anything like this:
[http://nacl.cr.yp.to/secretbox.html](http://nacl.cr.yp.to/secretbox.html)

So when I have AAD, what should I do when using NaCl? Add it as part of the
message to crypto_secretbox(), or should I authenticate this data separately?
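
One stdlib-only sketch of the "authenticate it separately" option: encrypt with secretbox as usual, then compute an HMAC under an independent key over the AAD, nonce, and ciphertext, length-prefixing the fields so the split is unambiguous. The key names are illustrative, and `ciphertext` is a placeholder standing in for real crypto_secretbox() output:

```python
import hashlib
import hmac
import os

mac_key = os.urandom(32)                       # independent of the box key
nonce = os.urandom(24)                         # secretbox nonces are 24 bytes
ciphertext = b"<output of crypto_secretbox>"   # placeholder, not real output
aad = b"record-id=42"

def encode(*parts):
    # Length-prefix each field so (aad, nonce, ciphertext) can't be
    # re-split into a different message with the same byte string.
    return b"".join(len(p).to_bytes(8, "big") + p for p in parts)

tag = hmac.new(mac_key, encode(aad, nonce, ciphertext), hashlib.sha256).digest()

# The receiver recomputes the tag over the same encoding and rejects the
# record (AAD included) on mismatch.
ok = hmac.compare_digest(
    tag, hmac.new(mac_key, encode(aad, nonce, ciphertext), hashlib.sha256).digest())
```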

~~~
agwa
You could use libsodium instead of NaCl, which has an AEAD interface:

[https://download.libsodium.org/doc/secret-key_cryptography/a...](https://download.libsodium.org/doc/secret-key_cryptography/aead.html)

~~~
arielby
Unfortunately that interface is rather dangerous because of the 64-bit nonces
- it is essentially only useful for encrypting multiple messages over a single
connection.

------
hobarrera
The lack of justifications makes this as useful as anybody else out there
claiming "use X. Don't use Y".

E.g.:

> Avoid: AES-CBC, AES-CTR by itself, block ciphers with 64-bit blocks ---
> most especially Blowfish, which is inexplicably popular, OFB mode. Don't ever
> use RC4, which is comically broken.

Why not 64-bit blocks? What's wrong with them? How do they affect us?

Mind you, I'm not saying the statement is incorrect, but with no justification
given, I'm not convinced I should avoid them.
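
For what it's worth, the standard objection to 64-bit blocks is the birthday bound: after roughly 2^(n/2) blocks under one key, ciphertext-block collisions become likely, and in CBC each collision leaks the XOR of two plaintext blocks. A quick back-of-the-envelope in Python:

```python
def birthday_data_limit_bytes(block_bits: int) -> int:
    """Rough amount of data (in bytes) after which block collisions
    become likely for an n-bit block cipher under a single key."""
    blocks = 2 ** (block_bits // 2)     # birthday bound: ~2^(n/2) blocks
    return blocks * (block_bits // 8)   # times bytes per block

# 64-bit blocks (Blowfish, 3DES): ~32 GiB -- reachable in one long session.
print(birthday_data_limit_bytes(64) // 2**30, "GiB")
# 128-bit blocks (AES): 2^68 bytes -- far out of reach.
print(birthday_data_limit_bytes(128) // 2**30, "GiB")
```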

~~~
tptacek
I mean this sincerely and not as snark: if this is a question you have to ask,
just use Nacl; don't design with ciphers yourself. Since there is a "right
answer" to this question and a "wrong one", "convincing" doesn't seem like a
good use of anyone's time.

The right way to learn about cryptography is to start by learning how to break
it. If that's something you're willing to sink time into, try this thing we
set up:

[http://cryptopals.com](http://cryptopals.com)

It's totally free and by the end of set 3, you'll have an appreciation for
block sizes.

~~~
some_furry
Hey Thomas, does anyone at Matasano still review submissions if someone wants
to submit them for a particular programming language? I'm looking to establish
myself as the luminary crypto nerd of the furry fandom :3

~~~
tptacek
I don't know why you got downvoted. Maybe it's the furry thing. Cryptopals is
still ongoing (there's a set 8 in the works, all elliptic curve attacks). As
for posting the solutions: we're doing that, too, in the abstract, but we're
all busy and every time we bring it up a bunch of people say "noooo don't post
solutions".

~~~
some_furry
Heh. Haters gonna hate.

Okay, very glad to hear that there's still work being done on that end. I'll
start sending solutions in then :3

------
cperciva
Since this is heading towards the top of HN, I figure it's worth responding to
the specifics here:

 _AES-GCM_

As tptacek says, this has pitfalls on some platforms. I also dislike exposing
AES cores to malicious data, which is my primary reason for preferring a hash-
based MAC construction.

 _Avoid: key sizes under 128 bits._

My recommendation for 256-bit symmetric keys isn't because I think AES-128 can
be broken mathematically; rather, it's because AES implementations have a
history of leaking some of their key bits via side channels. This is less of
an issue now than it was five years ago (implementors have found and closed
some side channels, and hardware AES implementations theoretically shouldn't
have any) but given the history of leaking key bits I'd prefer to have a few
to spare.

 _Avoid: userspace random number generators_

Thomas and I have argued about this at length; suffice it to say that, as
someone who has seen interesting misbehaviours from kernel RNGs, I'd prefer to
use them for seeding and then generate further bits from a userspace RNG.
(Thomas's counterargument, which has some validity, is that he has seen
interesting misbehaviours from userspace RNGs. This largely comes down to a
question of whether you think the person writing your userland crypto code is
more or less prone to making mistakes than the average kernel developer.)

 _avoid RSA_

Thomas is correct to imply that a random RSA implementation is more likely to
be broken than an average elliptic curve implementation. This is true for the
same reason as a random program written in python is more likely to have bugs
than a random program written in Brainfuck: Inexperienced developers usually
don't even try hard problems. On the other hand, _for any particular
developer_ , an RSA implementation they write is more likely to be correct
than an elliptic curve implementation they write.

I also continue to be wary of mathematical breakthroughs concerning elliptic
curves. Depending on the amount of new research we see in the next few years I
might be comfortable recommending ECC some time between 2020 and 2025.

 _use NaCl_

This is not entirely a bad idea. The question of "implement yourself or use
existing libraries" comes down to the availability of libraries and whether
the authors of the library are more or less prone to making errors than you;
"random developer vs. NaCl developers" is straightforward and doesn't have the
same answer as "random developer vs. OpenSSL developers".

 _you discover that you made a mistake and your protocol had virtually no
security. That happened to Colin_

Just to clarify this, the (very embarrassing) bug Thomas is referring to was
in the at-rest crypto, not the encrypted client-server transport layer.

 _Online backups (Was: Use Tarsnap): Remains Tarsnap. What can I say? This
recommendation stood the test of time._

I have to agree with Thomas on this one. ;-)

~~~
tptacek
* If you're concerned about attacker data hitting the AES core, Salsa20+Poly1305 doesn't have that problem, and is generally preferable to AES-GCM in every scenario anyways. There is no scenario I can think of where you can do CTR+HMAC and can't do Salsa20+Poly1305. If you have to stick with standards-grade crypto, GCM is your best bet.

* The track record of userspace RNGs vs. kernel RNGs speaks pretty loudly. In any case, we should be clear that you're advocating for "bootstrap with /dev/urandom and then expand in-process", not, like, haveged or dakarand. We're closer on this than people think.

* I'm not even talking about people writing their own RSA. Do I need to say that? If so, recommendation #1: don't write your own RSA. I'm saying that all else equal, if you're using good libraries, _still avoid RSA_ , for the reasons I listed.

* In fairness, the CTR problem you had is also a threat to GCM. This used to be why I recommended CBC a few years ago: because we kept finding gameover CTR bugs in client code, and not so often CBC bugs. My opinion on this has changed completely in the last year or so.

~~~
cperciva
_If you're concerned about attacker data hitting the AES core,
Salsa20+Poly1305 doesn't have that problem, and is generally preferable to
AES-GCM in every scenario anyways._

Right. And I'm optimistic about Salsa20 and Poly1305, but I'd like to see a
few more years of people attacking them before I would be willing to recommend
them.

 _we should be clear that you're advocating for "bootstrap with /dev/urandom
and then expand in-process"_

Right. Or to be even more precise: Use HMAC_DRBG with _entropy_input_ coming
from /dev/urandom.

Also: For $DEITY's sake, if you can't read /dev/urandom, _exit with an error
message_. Don't try to fall back to reading garbage from the stack, hashing
the time and pid, or any other not-even-remotely-secure tricks. Denial of
service is strictly superior to falsely pretending to be secure in almost all
conceivable scenarios.
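
A minimal sketch of that fail-closed behaviour (the function name and seed size are illustrative):

```python
import sys

def read_seed(nbytes: int = 32) -> bytes:
    """Read seed entropy from /dev/urandom; die loudly on any failure
    rather than falling back to time/pid/stack garbage."""
    try:
        with open("/dev/urandom", "rb") as f:
            seed = f.read(nbytes)
    except OSError as e:
        sys.exit(f"fatal: cannot read /dev/urandom: {e}")
    if len(seed) != nbytes:
        sys.exit("fatal: short read from /dev/urandom")
    return seed
```

The seed would then feed HMAC_DRBG's _entropy_input_; denial of service on a broken system is the intended behaviour.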

------
netheril96
One problem I have with most cryptographic libraries, like OpenSSL and NaCl
as recommended here, is their extensive use of globally mutable variables. I
can't understand how that seems like a good idea in 2015.

------
lmm
> Avoid: offbeat TLS libraries like PolarSSL, GnuTLS, and MatrixSSL.

I'm interested to hear the rationale behind this. Those seem like reasonable
options considering OpenSSL's (and their) security history.

~~~
tptacek
I've reviewed the code of several of these libraries (I won't say which ones
I have which levels of confidence in). Short summary: if you want to be the
site that reincarnates 1990s RSA bugs or 2000s-era curve bugs, go ahead and
use a TLS library nobody else uses.

~~~
JoshTriplett
PolarSSL and MatrixSSL definitely seem far off the beaten path, but _many_
projects use GnuTLS (both as one of the more well-known non-OpenSSL codebases
and because it has a GPL-compatible license). I'd be interested to know if
you're concerned about it in particular.

~~~
evmar
There was a GnuTLS vulnerability, introduced in 2000, that was discovered in
2014 thanks to an audit. To summarize: a refactoring with no accompanying test
coverage had the effect of inverting a check.

Bugs happen to everyone, but the process that led to this one is really
concerning. (OpenSSL certainly has bad process too but as the GP mentions,
more people are hammering on it.)

This blog post has more (including an LWN article about it):

[http://gehrcke.de/2014/03/gnutls-vulnerability-is-unit-testi...](http://gehrcke.de/2014/03/gnutls-vulnerability-is-unit-testing-a-matter-of-language-culture/)

~~~
JoshTriplett
Every security library has had vulnerabilities, and I'd be more concerned
about libraries that _don't_ (since it implies nobody is looking). Does
GnuTLS seem significantly more prone to vulnerabilities than other
implementations?

~~~
tptacek
I would flag use of GnuTLS in an audit. Sev:lo.

~~~
JoshTriplett
What would you recommend that _isn't_ derived from the OpenSSL codebase, for
C projects that can't use OpenSSL for license reasons?

Your recommendation for TLS elsewhere in the thread was:

> You should use BoringSSL, LibreSSL, Go crypto/tls, or OpenSSL, in roughly
> that order.

Three of those are based on OpenSSL, and Go crypto/tls presumably only works
with Go.

~~~
tptacek
Porting to Windows and using schannel.

Sorry.

~~~
JoshTriplett
Guess I'll be sticking to GnuTLS then, if there's no better option available
for GPLed projects to use.

~~~
walyne
Another option is wolfSSL
([https://wolfssl.com/wolfSSL/Home.html](https://wolfssl.com/wolfSSL/Home.html))
which is GPL-compatible, but also has a commercial license option. They have
an OpenSSL compatibility layer, but are not a derivative of OpenSSL.

My experience with their software has been very positive, and they have
avoided the majority of recent vulnerabilities. Plus they have great support
for anyone working on open source projects.

------
emaste
For reference, Colin's 2009 "Cryptographic Right Answers" blog post is here:
[http://www.daemonology.net/blog/2009-06-11-cryptographic-rig...](http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html)

------
nine_k
> _Avoid: constructions with huge keys, cipher "cascades"_

Can anyone please explain what's wrong with e.g. 4096-bit keys (instead of
1024-bit) and stacking 2-3 encryption passes (the same or different ciphers)?
Performance implications are obvious; what are the security implications?

~~~
cperciva
This is in the context of symmetric keys, so I'm guessing "huge keys" is a
reference to the fact that "448-bit crypto" is a giant red flag because it
screams "we're using blowfish".

~~~
tptacek
See I just write 1/5th of a recommendation and leave it open-ended so Colin or
'pbsd can make it look like I was smart to begin with. Yeah... Blowfish...
that's what I meant... :)

~~~
cperciva
Well, in the more general case "huge symmetric keys" is a flag for "doesn't
understand crypto", but 448-bit blowfish keys are the most common place I see
this happening.

------
bradleyjg
> There is a class of crypto implementation bugs that arises from how you feed
> data to your MAC, so, if you're designing a new system from scratch, Google
> "crypto canonicalization bugs".

I get a whole bunch of links about javax.xml.crypto.dsig throwing exceptions,
which wasn't terribly illuminating.

I think the reference is to the bugs discussed on page 21 here:
[http://www.contextis.com/documents/33/Exploiting_XML_Digital...](http://www.contextis.com/documents/33/Exploiting_XML_Digital_Signature_Implementations-HITBKL20131.pdf) but I'm not sure.

~~~
TheLoneWolfling
It boils down to this:

Make sure the data fed to your MAC is unambiguous. Or rather, make sure the
data is encoded in such a way that two different messages can never look
identical to the MAC.

For instance, say you sort and concatenate your options without a delimiter.
Then ["ab", "cd"] will have the same MAC as ["a", "bcd"], as in both cases the
actual data fed to the MAC will be "abcd". This is a very bad thing.
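
A minimal demonstration in Python (the key and field values are made up): plain concatenation collides, while length-prefixing each field before MACing makes the encoding unambiguous:

```python
import hashlib
import hmac

key = b"mac-key"

def canonical(fields):
    # Prefix each field with its length so distinct field lists can
    # never serialize to the same byte string.
    return b"".join(len(f).to_bytes(4, "big") + f for f in fields)

# Plain concatenation: both lists feed "abcd" to the MAC -- a collision.
assert b"".join([b"ab", b"cd"]) == b"".join([b"a", b"bcd"])

# Length-prefixed: the encodings (and therefore the MACs) differ.
tag1 = hmac.new(key, canonical([b"ab", b"cd"]), hashlib.sha256).digest()
tag2 = hmac.new(key, canonical([b"a", b"bcd"]), hashlib.sha256).digest()
assert tag1 != tag2
```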

------
aftbit
What if I need to send up encrypted logs from a number of clients? I tried to
use NaCl for this, but in its opinionated style, it holds that I have to have
a sender private key to authenticate my logs, and it won't decrypt unless I
provide the corresponding public key on the other end.

I don't want authentication here - there's no way for me to manage these keys;
I just want to prevent someone from reading my logs off the disk...

~~~
marcosdumay
Do you want symmetric encryption? NaCl does that too; it's just a section
below the asymmetric ones in its documentation.

But I'm not sure you completely thought this out. If somebody can read your
disk, and if that includes software configuration, the only way to make it
impossible for people to read your logs is by using asymmetric crypto. And
yes, that'll require using different keys on the writing and reading software.

------
hellbanner
"Asymmetric encryption (Was: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt
pop ffssssssst exponent 65537): Use Nacl.

You care about this if: you need to encrypt the same kind of message to many
different people, some of them strangers, and they need to be able to accept
the message asynchronously, like it was store-and-forward email, and then
decrypt it offline. It's a pretty narrow use case."

Is this like bitmessage?

------
arielby
Also:

For each key you use, pick 1 format of messages for it to authenticate.
Document that format. Version-control that documentation along with the code
that uses it. If the format changes in a non-backwards-compatible way, pick a
new key (so try to use a backwards-compatible format). Ensure the documented
messages make sense (try not to have a "fire this person" message without
knowing who that person is) - timestamps and/or nonces can really help here.

If you can't pick just 1 format, you can, say, have the first 16 bytes of the
message be a UUID, and document each UUID's format (with the same
documentation rules as if you were not using a UUID).

Seriously, that and "don't mix secret and unauthenticated things" together
covers 90% of all vulnerabilities.
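
A sketch of the UUID-tagged variant described above (the UUID value, key, and payload are all illustrative): the format identifier is covered by the MAC, so a tag minted for one documented format can't be replayed as another:

```python
import hashlib
import hmac
import uuid

# One documented format per UUID; this value is made up for the example.
FIRE_PERSON_V1 = uuid.UUID("8b5f1f9e-0000-4000-8000-000000000001").bytes

def sign(key: bytes, fmt_id: bytes, payload: bytes) -> bytes:
    # The 16-byte format id is the first thing the MAC sees.
    return hmac.new(key, fmt_id + payload, hashlib.sha256).digest()

key = b"per-format-or-shared-key"
tag = sign(key, FIRE_PERSON_V1, b"employee=42&ts=1421500000")
```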

------
alokedesai
Can anyone elaborate why we shouldn't use BouncyCastle?

~~~
sarahj
The message here is avoid low-level crypto - if you find yourself having to
mess around with IVs or choose modes and padding, then you are far more
likely to screw something up.

NaCl/libsodium provide higher-level interfaces where the underlying primitives
are hidden from the developer, which makes it much more difficult to implement
bad crypto (at least as far as the individual constructs go... protocol design
may still get you).

~~~
alokedesai
Ahh got it, thanks!

------
ilurk
Didn't notice at first that it was from Thomas Ptacek.

But it still feels odd that OP is sharing it, since it was a secret link.

~~~
cperciva
tptacek posted it to twitter, so I don't think it was secret.

~~~
some_furry
Probably posted as a "secret" gist so as to not clutter up his gist history?
That's the only reason I can imagine.

Or, more than likely, he had it as "secret" to get feedback from colleagues
and other crypto folks before he published it.

~~~
tptacek
Nope. This is literally just something I was going to twerp-storm, and then I
thought, "I don't want to be that guy on Twitter" (any more than I already
am), and so I found the least official place I could to put it.

~~~
cperciva
... and then it ended up at the top of HN anyway.

~~~
tptacek
Leave Britney alone. She's not well.

~~~
cperciva
I feel like I'm missing something here.

------
aftbit
Is there anything wrong with using haveged after the system has been up long
enough to generate a seed the traditional way?

I occasionally use it to make /dev/random unblock for applications that think
they need to use /dev/random to generate keys (cough gpg cough).

------
caf
_Avoid: ... SRP, J-PAKE, ..._

Are there any recommended schemes for password-authenticated key exchange?

~~~
tptacek
Don't do password-authenticated key exchange.

~~~
dfox
And what about zero-knowledge password proofs in general? (I tend to agree
that PAKE is a bad idea, but I'm not sure my reasons are the same as yours.)

In my opinion one should create an encrypted channel essentially without any
authentication and then do the authentication inside that channel, with ZKPP
being one of the interesting ways to do that ("plug the password into scrypt
and use the result as an EdDSA secret key" is a particularly straightforward
construction). This obviously assumes a threat model where exposing the
password to the server is a meaningful security concern (usually it is not).

I've seen many systems where ZKPP is the right thing to do (such systems
usually involve offline operation with multiple users using the same device),
but their authors came up with some weird-ass construction with a bunch of
symmetric primitives that is anything but secure.
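
A sketch of the "password into scrypt, result as signing seed" idea using only the stdlib for the KDF step (the salt choice and the Ed25519 expansion, which needs an external library, are assumed, not shown):

```python
import hashlib

def password_to_signing_seed(password: bytes, salt: bytes) -> bytes:
    # Stretch the password so offline guessing is expensive; the 32-byte
    # output has exactly the shape of an Ed25519 seed.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# Derivation is deterministic, so the same password + salt always yields
# the same keypair; the server only ever sees the public key.
seed = password_to_signing_seed(b"hunter2", b"user@example.com")
```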

------
ingenter
Hypothetically, if any weakness is found in Curve25519, what happens to NaCl
users?

~~~
marcosdumay
The same thing that happens to 2048-bit RSA users if yet another weakness is
found in it. Or the same thing that happens to the users of the NIST curves if
some weakness is found (or disclosed).

------
nickpsecurity
This article does plenty right but gets a few things wrong, and overlooks a
few others. I'm going to hit on a few of these in the order I see them.

"Avoid cipher cascades." I've pushed and successfully used cascades in highly
assured work for years. Cryptographers talk down about it, but "meet in the
middle" is the best attack they can cite. So, they're full of it & anyone who
cascaded might have avoided many algorithm/mode breaks. My polymorphic cipher
works as follows: three strong algorithms applied out of almost 10 potentials;
algorithms are randomly selected, with the exception that each pass uses a new
algorithm; separate keys; separate initial counter values; the process driven
by a large, shared secret. Breaking it without the secret requires breaking
all three, and no cryptographer has proven otherwise despite massive
speculation.

I'll briefly mention scrypt because it's _ironically_ great advice. I asked
cryptographers for over a decade to deliver a slow-by-design hash function
that couldn't be sped up. They, for years on end, criticized me (see
Schneier's blog comments) saying it was foolish and we just need to iterate a
fast one. I expected problems and hackers delivered them. I had to homebrew a
cryptosystem that input a regular HMAC scheme into another scheme: (a)
generated a large, random array in memory, (b) did impossible to speed up
operations on random parts of it, (c) iterated that excessively, and (d)
finished with a proper HMAC. Array size always higher than GPU or FPGA onboard
memory in case opponents used them. Eventually in a discussion, a Schneier
commenter told me about scrypt and I finally got to ditch the inefficient
homebrew. A true outlier in the crypto field.

Avoid RSA: bad advice for commercial use if the NSA is your opponent. All his
risks are true. NaCl is great and my default recommendation. Yet, he doesn't
mention that the NSA has another reason for pushing ECC: they own 26 patents
on it that they license conditionally on the implementation details, along
with the ability to restrict export. We know what the NSA's goal for crypto
is, and therefore I avoid ECC commercially like the plague. I just used RSA
implementations and constructions pre-made by experts, with review by
experts. Esp GPG, as the NSA hasn't even broken it. They use it internally,
actually.

For asymmetric signatures, see above. All points apply. I'll just add that,
for post-quantum, there's been tremendous progress in Merkle signatures, with
things such as unlimited signatures. Their security just depends on a hash
function, there are no known quantum attacks on them, and they're doing
pretty well against classical attacks, too. So, I'm following and doing
private R&D on standardizing Merkle signatures plus hardware to accelerate it
on either end.

He says use OpenSSL and avoid MatrixSSL, PolarSSL, etc. He said some vague
stuff about their quality. Problem: anyone following the git commits of the
OpenBSD team that tore through OpenSSL knows that IT WAS S***. It was about
the worst-quality code they've run into, with so much complexity and
potential to be exploited that the NSA would be proud of it. I'd be surprised
if Matrix, Polar, etc. are worse and less structured than that. If OpenSSL is
really the best, then we're in a bad situation and need to fund a clean-slate
design by experts like Galois and Altran-Praxis.

Although I'm focused on problematic points, his last piece of advice deserves
special mention: use TLS. These protocols have proven difficult to implement
properly. TLS and its ilk have had many problems, along with massive effort
to smash them. Against that backdrop, it's actually done pretty well, and
using it like he suggests is the best option for COTS security. Medium- to
high-assurance systems can always use variants custom-designed for that
level. Most don't need that, though.

~~~
tptacek
The oddball TLS libraries do not have poorer "code quality" than OpenSSL,
though they are not perfect and have received far, far less scrutiny than
OpenSSL, so if you have to bet on which is going to have memory corruption,
OpenSSL isn't a sure bet.

But my concerns aren't about code quality. They're cryptographic.

~~~
nickpsecurity
I appreciate you clarifying on that.

------
saganus
Wow...so hard to keep up with best practices.

~~~
marcosdumay
Best practices are:

\- Use OpenSSL with TLSv1.2 for TLS

\- Use Tarsnap for online backups

\- Use NaCl for anything else

\- Try not to use anything new that you invent before it's reviewed

------
snvzz
Recommends OpenSSL; ignores that LibreSSL exists at all.

------
tellor
Useful, but..

\- What about the correct password length?

------
chris_wot
It cracks me up that on Office 365, Microsoft has a Lync auto discover
protocol that uses https but the certificates have name mismatches.

Then again, it cracks me up that Microsoft have https at all, given the
protocol checks https and http when it goes to lyncdiscover.domainname

------
quotemstr
> Random IDs (Was: Use 256-bit random numbers): Remains: use 256-bit random
> numbers.

256-bit random identifiers are overkill. 122 random bits (as in a GUID) should
still be more than sufficient. Size is important for IDs because people whine
about the storage overhead. A 256-bit identifier requirement may unfortunately
convince some people that it's better to use much smaller, non-random
identifiers, and that'd be a shame.

~~~
jacobparker
The 256 bit advice is golden if only to encourage people to not use GUIDs in
these scenarios.

GUIDs are unique--not necessarily unguessable. Any implementation may be
using a CSPRNG, but in general you shouldn't rely on that (unless it's your
implementation and it's a documented behaviour).

Honestly I've found this (perhaps pedantic) mistake to be highly correlated
with other badness/sloppiness.

GUIDs are awesome, and can be used in plenty of places near crypto, like OAuth
1.0-style nonces, IDs for public keys... just don't use them for their
"randomness".
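
In Python terms, the difference is one line (`secrets` is stdlib since 3.6):

```python
import secrets
import uuid

# A 256-bit ID drawn straight from the OS CSPRNG -- unguessable by design.
token = secrets.token_hex(32)     # 64 hex chars = 256 random bits

# A v4 GUID: unique, but only 122 random bits, and unguessable only if
# this particular implementation happens to use a CSPRNG.
guid = str(uuid.uuid4())
```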

~~~
quotemstr
Of course you have to be aware of your implementation. On Windows, UuidCreate
returns unguessable GUIDs. (COM security depends on this property.) libuuid
provides similar guarantees if /dev/urandom is available.

But anyway, my point wasn't that you should necessarily use GUIDs for
unguessable IDs (although that's fine if you're using real randomness), but
that 256 bits is overkill and that 128-ish is good enough.

