
Be wary of one-time pads and other crypto unicorns - randomwalker
https://freedom-to-tinker.com/blog/jbonneau/be-wary-of-one-time-pads-and-other-crypto-unicorns/
======
tptacek
Another way to say this: there's a hierarchy of ways in which cryptography
fails.

At the very lowest level, a cipher can catastrophically fail, like FEAL-4 did.

Going up through the levels, your block cipher mode (the adapter layer that
allows a fixed-size cipher to encrypt a whole message) could be improperly
constructed. Your random number generator could be biased. Your messages could
be unauthenticated. You can accidentally create an oracle from observable
behavior. Your key exchange could be insecure. Your UX could be insecure, so
it's impossible to tell who you're talking to. Your encryption could be off by
default.

Of all the levels where modern cryptosystems are going to fail, _a
catastrophic failure of the cipher itself is the least likely_. It is
fantastically unlikely that a secure messaging app is going to fall because
AES is broken. And the best fundamental attacks on the worst ciphers still
make demands of attackers that are unrealistic for message apps.

So the whole idea of using a one time pad to secure a messaging app is
alarming. It's like an engineer told you they were going to design a bridge
out of, I don't know, lithium to save on weight.

There are lots of ways for a cryptosystem to fail. Unlike fundamental cipher
breaks, which are so extraordinarily rare as to be discountable, some of them
are _very, very_ likely. In cryptography engineering, the way these problems
are mitigated is to use standardized, well-known, well-tested constructions.
Any time you leave the golden path --- an AEAD stream cipher construction ---
you're risking lots of flaws across every layer of your cryptosystem. Doing
that to avoid an AES break is malpractice.

Also, as this post points out: this app didn't even use an OTP. It used a
stream cipher keyed by the HMAC-DRBG (iirc) embedded in JVM SecureRandom. So
there's that, too.
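The distinction matters mechanically: XOR a message against the output of any keyed deterministic generator and you get a stream cipher whose security rests entirely on the key, not a pad. A toy sketch (HMAC-SHA256 in counter mode standing in for a DRBG; this is an illustration, not the app's actual construction):

```python
import hmac
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy deterministic bit generator: HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x01" * 32
msg = b"attack at dawn"
ct = xor(msg, keystream(key, b"nonce-0", len(msg)))

# Anyone holding the key regenerates the "pad" exactly. That makes this
# a stream cipher keyed by `key`, with none of the OTP's guarantees.
assert xor(ct, keystream(key, b"nonce-0", len(ct))) == msg
```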

~~~
samatman
Serious question: does the same apply to encryption between routers? Like
large PiB/day loads. Between the old 'bandwidth of a 747 full of tapes'
exercise and the speed of XOR, I've wondered for a while now why it isn't
standard. Specialized entropy sources can make a lot of the stuff rapidly, and
it seems like a case where reducing that layer of security to a couple
consumable tapes that get swapped out like batteries when they're dead would
be practical, and would have the advantage that seizing the cipher clearly
involves having several tens of grammes of tightly-encoded data.

~~~
tptacek
It doesn't matter how many AES ciphertexts you collect. No physically feasible
number of them will allow you to break the cipher.
That's part of what makes one time pads so silly. The gap between the security
margin of AES and the "infinite" security margin of a one time pad is totally
meaningless in reality. Meanwhile, attempting to use a one time pad has huge
collateral costs throughout the rest of the system.

------
cbd1984
"Uses, or claims to use, one-time pads" is, by itself, strong enough evidence
that an encryption product is bogus that you'd be wise to not use the product
based on that evidence alone.

Similar is the claim that it uses a proprietary algorithm.

[https://www.schneier.com/crypto-gram/archives/1999/0215.html](https://www.schneier.com/crypto-gram/archives/1999/0215.html)

------
josephg
So, there's a pattern I'm noticing again and again. Whenever a startup
announces a new secure communication product, extremely knowledgeable and
respected people in our community write posts talking about the (awful)
mistakes they've made.

In the wake of the Snowden revelations, it's incredibly important that we
bootstrap a usable new generation of software with security at its core. But
I'm worried that lots of people will be afraid to make security a core feature
of their product because of the bad press they'll get if they do it a little
bit wrong. Frankly the security community does not have a history of making
systems that my mum can use. We really need an army of software engineers to
work on fixing this problem and I don't think blog posts like this are
helping. (Notable exception is Textsecure).

In this case, as I understood the press release:

- The QR code is used to initialize a direct (encrypted) bluetooth / wifi
direct connection. Taking a picture of the QR code isn't enough to get the
one-time pad.

- Cameras on modern mobile phones should be able to generate more than enough
entropy (even if this isn't happening _yet_).

But there is no discussion of these points so far. Everybody is just piling
hate on the product for its failings. We need to be better than that if we
want a secure app ecosystem.

~~~
diminoten
It ends up being a product of the fact that the smartest crypto folks aren't
software developers, and the smartest software developers (or the most
innovative -- the guys starting companies or building products) aren't crypto
folks.

So you have a bunch of what are effectively scientists who aren't all that
familiar with how products get made (first build _something_, release it,
then refine it over time through feedback) and you have a bunch of developers
who aren't familiar with how security products are managed (you _don't_
release anything until you're willing to stake people's lives on your product).

What you're left with is an ecosystem of developers releasing code that's not
cryptographically sound because they need feedback and a community to help
them refine the software, and crypto scientists who scream bloody murder every
time they find a bug because, "what if the next Snowden uses your new tool!?!"

Everyone's right in this argument, it's just hard to bridge the gap between
these two _very_ different schools of thought.

~~~
tptacek
Snowden and Greenwald _actually used_ one of the most popular of those tools,
as you know, and according to the person who did the most conclusive,
reputable, and devastating security analysis of that tool, they used it during
a time when it was catastrophically broken.

The clear message you are trying to communicate is that the security community
hyperventilates over security problems to the detriment of the community.
That's not what happens. Look at Lavabit --- another example: the problem with
serverside encrypted Internet email was absolutely written off as
hyperventilation, to the detriment of every single Lavabit user, whose
communications were almost certainly recorded and are, as a result of the way
Lavabit handled a request for Snowden's communications, retroactively
decrypted.

~~~
diminoten
Since our prior conversations, I've softened on my stance and at this point I
think it's mostly just a UX issue. I don't think you or any other expert is
_wrong_ in being as... "hyper" about the problems in current software. I just
think "your" (as the stand-in for crypto experts everywhere) solutions are,
while strong, not the best UX, and I consider UX to be a critical part of
security -- if people won't use it, then it's not good.

All I'm saying here is that there's a disconnect between security and
usability that needs to be bridged, and sometimes one side forgets about the
difficulty of correctly implementing the other.

~~~
tptacek
I'm sorry, but just to make sure I'm getting my point across, I'm going to
restate it more simply:

Security people freak out at new amateur encryption tools because when they
don't, those tools get adopted by hapless users who entrust real secrets with
them. It's not a theoretical concern; it happens over and over again in the
real world. It happened to Snowden _twice_ , once with Cryptocat and once with
Lavabit. In both instances, crypto experts begged people not to use those
tools. In both instances, misguided privacy advocates pushed back on the
crypto experts. The tools either didn't get fixed (because, in Lavabit's case,
they couldn't be) or didn't get fixed in time. Almost certainly, NSA has
decrypted intercepts as a result.

It's happening right now with other tools. It will, with 100% certainty,
happen again with Cryptocat.

During the snake oil era of medicine, people used to take radium tonics as a
cure-all. That's what these tools are: radium tonics.

~~~
diminoten
That's not really what I'm talking about, though it is a manifestation.

"Don't use the tool" is a cryptographically sound piece of advice, given what
you know about the specific implementation, but it's utterly useless for
Cryptocat, from a product standpoint. What's a developer supposed to do when a
crypto expert just says, "no"? That's terrible feedback, it needs to be
better, and it can be better (sometimes it actually is better, I think you've
mentioned in the past that you've lent a hand to Cryptocat).

As with every other "theory vs. practice" discussion, the theorists need to
tone down the rhetoric, and the practitioners need to actually listen to what
the theorists are saying.

~~~
tptacek
To be clear: I've never done anything to help Cryptocat, nor will I.

The right thing to happen with Cryptocat is for it to be scrapped. Its
security issues pervade beyond bad cryptography. Its users are literally
better off with Google Hangouts or iMessage.

And, in fact, the same was true of Lavabit: there _was no recommendation_ that
would allow a mail service that worked (for users) the same way Google Mail
does to resist serious attacks. But that was the premise of the system! It
needed to be scrapped.

~~~
diminoten
And _this_ is the nonuseful advice that the crypto experts need to stop
giving. Joe Developer sees this advice and decides _not_ to write crypto
software.

That's unambiguously bad, because the software industry _thrives_ on failing a
lot until a good solution is found.

There needs to be a way to develop crypto software that falls in a land of,
"Don't use this _yet_ , but maybe someday after we're done testing it, you can
start using it and be secure" rather than "nope, just scrap the whole thing".

~~~
crpatino
You are wrong on practically every account:

> Joe Developer sees this advice and decides not to write crypto software.

False. Moe Manager stomps his foot and makes Joe Developer wire together
"something good enough", regardless of Joe's qualifications. Since neither Joe
nor Moe can tell how good a cryptosystem is, they invariably end up with
something in the awful-to-kludge spectrum. Both Moe and Joe need to be
repeatedly reminded that they don't know enough about this nontrivial problem
to make good enough calls.

> That's unambiguously bad, because the software industry thrives on failing a
> lot until a good solution is found.

"Unambiguously bad" is a thought-stopper, a rhetorical trick. It just means
that since you don't like any of the options reality makes available to you
(like... paying big bucks to a real expert), self-deception is preferable.

Now, the fact that our software industry thrives on failure... I am ashamed to
say that this just means our industry is really good at externalizing the
consequences of our own incompetence to the customers. This is all good and
fun, until real lives are on the line. Crypto is one of those cases, so it is
one of those lines you don't cross.

> There needs to be a way to develop crypto software that falls in a land of,
> "Don't use this yet, but maybe someday after we're done testing it, you can
> start using it and be secure" rather than "nope, just scrap the whole
> thing".

Are you familiar with the term "Security by Design"? It is one of those pesky
realities I was talking about. It basically says that you cannot create a
product with no concern for security, sprinkle some crypto fairy dust on top
of it, and magically turn it into a secure product. If you try to do that,
you'll end up with a product with broken security.

Which is not all that bad in practice. There are lots of useful products that
we are all better off having in insecure form than not having at all. But if
the whole point of product X is to be "Y, but done securely" there is no
redeeming feature that can save X from the axe.

~~~
diminoten
I'm not sure how to respond to this. On one hand you talk about "rhetorical
show stoppers", and on the other hand you reply to a statement I've made with
the one word sentence, "False.", and in another sentence you say I'm "wrong on
practically every account".

I guess I could point out that there lies, in a continuum of software, a place
for experimental products which offer nothing other than the exploration of an
idea and the opportunity to refine said idea through testing and discussion. I
could also point out that your strawman of "sprinkle some crypto fairy dust"
is entirely irrelevant to this conversation, and that your lack of
understanding of how threat actors are modeled is evident in your inability to
comprehend a tool that might protect against certain threats but can
acceptably not protect against all threats.

But I think those points would fall on the "ears" of someone who just wants to
be combative. Maybe someone else will pick up where you left off and engage in
a more honest and less antagonizing manner, but this deep in the comment tree,
"here be dragons".

~~~
crpatino
I guess I am sorry that we ended up on the bitter ends of a pointed
discussion. For what it's worth, I was not trolling. I stand by everything I
said (especially the "sprinkle of crypto fairy dust", which I have seen IRL
more often than I wish). Whether I am ignorant or not, I am by definition
unqualified to tell.

Other than that, if you prefer to have non-combative discussions, please stop
bossing professionals around and telling them what to do. Especially if they
are not providing a service you paid money for.

~~~
diminoten
I am a professional and I "boss" other professionals around on this topic
(admittedly, mostly other topics, but this specific one does come up) for a
living.

But it's funny, because what you're saying here (don't "boss" people around)
is similar to what _I_ am saying, with regard to experimental crypto software.
The crypto experts need to let the developers develop, and find a constructive
way to contribute to that process, rather than chicken-little every time a bug
appears.

Imagine software that comes out, stating: "This software is not meant to do
anything other than hide your porn from your mother. Please do not use this
software to do anything other than protect yourself against a threat who has
limited understanding of computers and software."

As that software matures, the disclaimer could expand to, "This software is
not meant to do anything other than protect information against low-level
criminal activity. Please do not use this software to do anything other than
to protect your information from larceny or identity theft. This software will
not protect you against a sophisticated or well-resourced actor."

Eventually expanding to, "This software has been tested by a community of
crypto experts. While there is currently no known reason why this software
won't protect your information from all bad actors, there is no guarantee in
place that your information is absolutely safe. Always exercise extreme
caution when protecting your data from sophisticated actors."

The idea being that, once an _experimental_ piece of software becomes "good
enough", it can be used to protect against a more diverse threat model.

The idea that you _have_ to ship an NSA-proof product on day 1 is completely
unfeasible and will _never_ happen, and the crypto scientists out there need
to realize that.

~~~
crpatino
You keep talking about bugs, and therefore you keep missing the point. As a
matter of fact, I tend to agree with you that the infosec community would
benefit enormously from adopting some of the best practices that we lowly
programmers take for granted. There would arguably be fewer bugs, and fewer
high-profile bugs, that way.

What I am trying to point out is that you cannot fix a flawed Security Design
by incremental improvements. I am not talking about individual coding defects
or even desirable features that have been left temporarily unimplemented on
purpose. I am talking about Stupid!Ideas that get implemented because there
was nobody around who recognized them as such, and then the usual traits of
human nature (denial, sunk-cost bias, rationalization, etc.) prevent the
incumbents from scrapping those ideas until insurmountable evidence is
assembled.

The article in question talks about using one-time pads (OTPs) to conceal
secrets. This is an idea that sounds good; an OTP is mathematically proven to
be "unbreakable" and all that... The problem is when theory meets practice and
nobody asks the hard question: how are the sender and receiver going to share
the OTP prior to sending the real message?

This is not a trivial question, given that the OTP must have the same number
of bits as the message it is protecting. Nor can you "recycle" a single OTP
for many messages (that would be a many-time pad, which by definition violates
the very preconditions that make an OTP secure).
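The many-time-pad failure is easy to demonstrate: XORing two ciphertexts produced under the same pad cancels the pad entirely, leaving the XOR of the two plaintexts. A minimal sketch (hypothetical messages, stdlib only):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(16)
c1 = xor(b"meet at midnight", pad)   # first message under the pad
c2 = xor(b"abort the plan!!", pad)   # pad reused: a many-time pad

# The pad cancels out, leaking the XOR of the two plaintexts, which
# crib-dragging can then unravel without ever touching the pad itself.
assert xor(c1, c2) == xor(b"meet at midnight", b"abort the plan!!")
```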

So, you end up with the scheme used in spy movies (I have no idea how real
spies worked during the Cold War). Each spy carries around a code-book of OTPs
that he received by hand from a trusted contact. Being in possession of such a
book is itself proof of being a spy, so he has to take great care to hide it,
which is hard because it is a whole book (OTPs for lots and lots of messages).
Also, the spy agency has the cumbersome job of providing a different code-book
to each spy (because otherwise, when the enemy catches one spy, they could
theoretically run the OTP against all previous communications from suspected
spies and have each one of them rounded up and shot before word gets out that
the first spy is in prison).

This example also serves to illustrate another point. Maybe a Bad!Idea can be
made to work in a limited way, if need is dire and resources are plenty. But
any organization which kept secrets of real importance and tried to rely on
this kind of solution would end up eaten alive by any adversary that had heard
the words "asymmetric cryptography" spoken together.

~~~
diminoten
I'm trying to understand how you could possibly think I've been talking about
"Bad!Idea" at all.

I really can't find in anything I've written where I talked about writing a
"bad" piece of crypto software. In fact, I've been talking about a potentially
"good" piece of crypto software that isn't getting written because no software
is perfect, but that's what crypto experts are demanding out of the gate.

Can you more succinctly sum up what you're trying to say? Do you think
software shouldn't be written in the crypto space unless it's perfect from
v0.0.0? I can't imagine you think that, so I'm wondering what exactly it is
you're arguing here.

------
DanielBMarkham
I'm not a crypto expert, but I understand the terms and arguments made in this
article.

"Wary" is the key word. If I have a true physical RNG, I can certainly make 2
DVDs full of identical random numbers. I can share one with you -- physically,
not using email or barcodes. We can then communicate in the open, saying
something like "beginning at position x on the DVD, here is my message" and
nobody can _read the part of the message that travels between the crypto
functions_.
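The scheme above is just XOR against an agreed-upon slice of the shared randomness; encryption and decryption are the same operation. A rough sketch, with `secrets.token_bytes` standing in for the physical RNG and the DVD:

```python
import secrets

# `secrets.token_bytes` stands in for a true physical RNG; in the scheme
# above, both parties hold identical copies of this data, shared offline.
shared_pad = secrets.token_bytes(4096)

def otp_xor(pad: bytes, position: int, data: bytes) -> bytes:
    """Encrypt or decrypt by XOR against the pad slice at `position`."""
    segment = pad[position:position + len(data)]
    assert len(segment) == len(data), "pad exhausted"
    return bytes(d ^ p for d, p in zip(data, segment))

# "Beginning at position x on the DVD, here is my message":
x = 100
ct = otp_xor(shared_pad, x, b"meet me at the bridge")

# Decryption is the identical XOR; each segment must be used exactly
# once and then never again.
assert otp_xor(shared_pad, x, ct) == b"meet me at the bridge"
```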

They can still read my keystrokes, or gain entrance to where I work and video
record me. They can gain useful information simply based on the pattern of you
and I communicating. They even may be able to surreptitiously copy our OTP by
using compromised software. But the OTP part is secure.

The real problem is that the entire OSI stack is a leaking piece of crap. It's
been broken at every level. So even if you have "perfect" crypto? It doesn't
matter so much.

ADD: Nothing's stopping you from using an OTP on top of a normal set of crypto
algorithms. I like the OTP idea -- I'm just not sure in the current
environment you're getting anything from it.

~~~
vog
_> They can still read my keystrokes [...] They even may be able to
surreptitiously copy our OTP by using compromised software._

_> The real problem is that the entire OSI stack is a leaking piece of crap._

How is breaking into your computer an issue of the OSI stack? Are you using
"OSI stack" as synonymous to "the whole operating system with all its running
processes"?

~~~
DanielBMarkham
Yes, I was, but more specifically I was giving two examples of the types of
insecurities found in modern technology. It was a broad argument and a loose
choice of examples. Thanks for pointing that out. The point was that "don't
try this at home" is a little over-used from experts. Even novices can poke
holes in the idea here -- so perhaps my loose argument was appropriate.

------
JacobEdelman
The sad thing is that really cool, potentially world-changing crypto has been
created in the past few years, but it's not the crypto that gets media
coverage.
Concepts like Fully Homomorphic Encryption and Secure Obfuscation may
revolutionize security in the coming years, but nobody has heard of them
precisely because they were published in scholarly journals instead of popular
science magazines. Maybe cryptographers should create useless apps that claim
to solve all security problems whenever they come up with actually important
crypto :)

~~~
jfindley
Well.

Homomorphic encryption is still very much an emerging tech. For most
applications it's still too slow. There's all sorts of cool things we can
build when it gets a bit more mature, but for now it's not unreasonable that
there's no press coverage of it.

On secure obfuscation - there was actually a wired article about this
(unsurprisingly the title was rather overhyped and click-baity), so I don't
think it's fair to say it didn't get press coverage. This is closer to being
usable in the real world and thus got more press coverage. Again, though, it
seems unfair to complain about the amount of coverage it got, because regular
readers can't go out and use it yet.

The problem IMO is more that there's a conflict between the big splashy
software release that gets you lots of users and attention, and the slow
careful discussions on crypto lists that gets you more confidence your
software is actually secure before it's in the hands of end users.

Without the big release, it can be hard to build a user-base - and if no-one
uses your new more-secure messaging app it doesn't really do a lot of good.
Without the slow discussions you're potentially endangering your users. Some
sort of compromise is probably the answer, but I'm not sure exactly what that
should look like (although I have some ideas).

------
some_furry
> RC4 is almost 30 years old now, and despite it being “broken” practical
> attacks still require enormous amounts of ciphertext and would probably not
> affect security in a meaningful way if you deployed it in a messaging app.

Sure, but please use ChaCha20-Poly1305 if you can. Two libraries (nacl and
libsodium) implement it for you; libsodium is a portable implementation of
nacl with bindings for most popular programming languages.

~~~
tptacek
Sodium/Nacl is a good recommendation. If you can use them, do.

If you're not using Sodium/Nacl, don't go out of your way to use
Salsa/ChaCha/Poly1305. If your platform has a good AES-GCM, that's also fine.

The important thing is to use the best tested, simplest interface available to
you. If switching from AES to ChaCha means you have to do any of your own
protocol design, it's not a win.

~~~
some_furry
> If you're not using Sodium/Nacl, don't go out of your way to use
> Salsa/ChaCha/Poly1305.

Right, sorry if I implied that anyone should roll their own crypto libraries.

 _Use the standard implementations!_

[http://doc.libsodium.org/bindings_for_other_languages/README.html](http://doc.libsodium.org/bindings_for_other_languages/README.html)

For PHP developers: [https://github.com/defuse/php-encryption](https://github.com/defuse/php-encryption) (a wrapper for openssl)

------
utnick
I have a few comments/questions about the post:

1) Why is a mobile phone random number generator good enough to generate
encryption keys, but not good enough to generate one time pads? What extra
weaknesses do one time pads expose?

2) For their 2nd point, Zendo has said that the pad exchange happens locally
(device to device), so you would need to be in the room to eavesdrop. The
transport mechanism they chose there is a little less of a risk, I think.

3) Their 3rd point also doesn't seem like a big enough deal to warrant the
tone... if SHA-256 is broken, a man in the middle would be able to corrupt
messages, but not in a meaningful way (i.e. switching yes to no); it would
just look like garbage to the other person...

I think #1 is probably the biggest issue, I'd like to understand it a bit
more.

~~~
tptacek
1\. It's not that SecureRandom isn't "good enough" for an OTP. It's that it's
_not_ an OTP. It's a keyed deterministic bit generator. Its security depends
on attackers not having its key. The problem isn't that SecureRandom is a
devastating weakness; it's that using it for a pretend OTP creates not an OTP
but instead a stream cipher based on an HMAC keystream. The irony is that the
hash function underneath HMAC is far more likely to fail than AES is. But it's
a theoretical observation, not a cryptanalysis.

2\. It's a key exchange that can be attacked with a video recorder and, more
importantly to this analysis, that depends on AES. There is no point in using
a one time pad whose key is encrypted with AES; you might as well eliminate
the added risks of a one time pad and just use AES.

3\. Integrity failures in cryptosystems tend to produce confidentiality
failures. For instance, error oracle attacks depend on attackers injecting
bogus messages and observing behavior. You can't have confidential message
encryption without message integrity.
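Concretely, XOR-based encryption without an integrity check is malleable in precisely targeted ways, not just "garbage-producing" ways: an attacker who can guess the plaintext at a known offset can rewrite it without touching the key. A sketch with a hypothetical message:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = bytes(range(32))              # stand-in for any XOR keystream/pad
ct = xor(b"launch: no ", pad)

# The attacker knows (or guesses) the plaintext at bytes 8..10 and XORs
# in the difference between old and new text -- no key needed, and the
# result decrypts cleanly rather than looking like garbage.
delta = xor(b"no ", b"yes")
tampered = ct[:8] + xor(ct[8:11], delta) + ct[11:]
assert xor(tampered, pad) == b"launch: yes"
```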

~~~
yuhong
Still, it reminds me of Snuffle. Remember the lawsuit?

------
hurin
There are old, established methods that guarantee sufficient security for
message encryption; it's rather trivial, and I don't know why every other new
product has to reinvent the wheel. Key exchange, on the other hand, is hard
and much more likely to be the failure point.

------
pdkl95
Usually, the complaint with OTP is how to move a copy of the pad in a way that
is both secure and still useful/practical. I think there is a simple solution
to this (at least some of the time) if you limit the scope to "friends that
meet occasionally".

What I want to make[1] is a pair of devices, built on basically any embedded
micro, in a "thumbdrive-ish" (or whatever for the prototype) form factor,
with some amount of flash to store the pad, a USB interface, and a hardware
RNG[2]. Additionally, there is some sort of serial interface that will be
used to null-modem two of these devices together.
When connected, the devices generate public keys (for authentication, etc),
and generate noise. They send the noise to each other, and save the XOR of the
noise to flash, so no device has total control of the generation.
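The XOR-combining step has a property worth making explicit: assuming the two noise streams are independent, the combined pad is at least as unpredictable as the stronger source, so one compromised RNG doesn't doom the pad. A sketch (hypothetical 64-byte exchange, with stdlib `secrets` simulating the hardware RNGs):

```python
import secrets

# Device A and device B each contribute hardware-RNG noise (simulated
# here); the pad written to flash is the XOR of the two streams.
noise_a = secrets.token_bytes(64)
noise_b = secrets.token_bytes(64)
pad = bytes(a ^ b for a, b in zip(noise_a, noise_b))

# Even if one source is totally broken (say, all zeros), the honest
# side's entropy survives in the pad -- neither device alone controls
# the result, provided the streams are independent.
broken = bytes(64)
fallback_pad = bytes(a ^ b for a, b in zip(noise_a, broken))
assert fallback_pad == noise_a
```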

The idea is that the "user experience" would be this: go over to a friends
house, and plug your devices together for some period of time (hours,
probably, but it can be cut off early). Then when you go home, plug it in to
your computer and you can chat securely, slowly burning the pad from flash
automatically. This is done over USB, so the pad never leaves the device. The
firmware can handle the destruction of the pad as it is used and other
details.

This doesn't solve everything, and the pad _would_ run out fast if you wanted
to send non-text data. What it should do is make it trivial for two parties to
chat/email securely. Details like refilling the pad (visit friend for dinner,
plug devices together while you eat) shouldn't be hard for most people, as
long as they can see how much pad is remaining.

[1] I plan to implement a prototype of this as soon as I have spare time &
cash for some hardware to experiment on.

[2] probably using something like the reverse-biased transistor method
discussed in TFC

------
peterwwillis
_" We don’t need crypto unicorns so much as we need diligent engineering to
deploy the crypto we already have."_

If we took this seriously, we would all still be using modified versions of
DES, which, in a twisted way, we are. This isn't a good argument for ignoring
new implementations of different cryptosystems.

 _" If a new crypto tool is first announced in a press release or popular
science magazine, don’t use it."_

This is far more accurate, but I'd extend it even further and define it as:
"Don't use any new crypto." (Unless it comes from someone with high social
influence in the sphere of cryptography research, of course.)

~~~
kedean
I don't think DES is an accurate comparison here. The world abandoned DES
because a) everyone suspected it had a backdoor, and b) it was demonstrably
broken by the EFF in 1999. We rely on a primitive until someone whose job it
is finds holes in it. Engineers don't need to be inventing new encryption.

~~~
tptacek
DES was abandoned because its key size was too small. It also has a block size
that's too small for safety, which is why we don't use Triple DES or Blowfish
as bulk ciphers either.

Ever since Biham and Shamir published differential cryptanalysis, we've known
that the NSA didn't backdoor DES, but rather helped strengthen it against
that attack. Concerns over a backdoor had nothing to do with it.

------
wodenokoto
So in order to start breaking Zendo, you need to break AES-256 and SHA-256?

