The Cryptographic Doom Principle (thoughtcrime.org)
142 points by lobo_tuerto on Apr 14, 2015 | 134 comments



If you believe that "don't roll your own crypto" is some kind of absurd mantra the security industry uses to keep us in business, I recommend that you roll your own crypto, and keep us in business.


Schneier's Law comes to mind

"Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break."

https://www.schneier.com/blog/archives/2011/04/schneiers_law...


Oh, that's what that is!

I had found a similar thing making my own puzzles, "It's easy to make a puzzle you can't solve, but it's hard to make a puzzle that's fun."

Well, that's not entirely the same thing, but they overlap, I guess.



I would compare it to a writer's inability to spot typos in something they wrote themselves. Probably, if the same algorithm was written by someone else, they could tear it apart easily.


I do not have concerns with the mantra itself, just its usage and the entitlement that often comes along with using it. The top answer on this Stack Exchange question is a good example of what I believe to be proper usage of the mantra. http://security.stackexchange.com/questions/18197/why-should...


How far should we take that maxim? It implies that no one should ever attempt this, but that leads to nothing new (unless you are first recognised as a crypto guru -- but how could you become one?).

I think it's worth drawing a distinction between the algos/maths and attempts at implementations. Otherwise we wouldn't have things like OCaml-TLS and others.

http://openmirage.org/blog/introducing-ocaml-tls


Implementations are even more sensitive to tiny bugs with huge consequences than algorithms. It's fine for people to write their own implementations if they're never used, but anything that will be used needs a large number of experts and a large amount of time before it should be trusted.


Maybe people can show new stuff they make on HN etc before using it in their apps.


HN is not and never will be an appropriate stage for cryptographic review.


It's like Pascal's Wager for security.


I think there's a more fundamental cryptographic principle. Don't implement cryptography, unless you are an absolute top expert. Even then, think twice, and get another absolute top cryptography expert to check your working. Use a pre-existing cryptography package that has been written properly instead.


That's good advice, and I've given it myself, but that doesn't mean you shouldn't read and play around with cryptography if it interests you.

I implemented a simplified version of the referenced Vaudenay attack as part of Dan Boneh's Cryptography I course on Coursera+. The course was very interesting, and also fun. I'm not ready to go out and implement my own cryptography, but knowing a bit about the subject makes me a more intelligent consumer of crypto libraries.

While warning people away from implementing their own cryptography we have to make sure we don't scare people off from the subject altogether. After all, absolute top experts have to start somewhere.

+ https://www.coursera.org/course/crypto


Thank you for the link and the review. I start my CS degree in the Fall; I hope I can start and finish this before then.


For the love of all that is sacred, take advanced math and linear algebra.


Thank you for the admonition. I am double majoring in CS and Philosophy for a specific reason; I want to manage devs and IT wizards. I want to function as an informed go-between for the sweat-stainless white collars and the yellowed white collars that do the work. I love the theoretical space of design and networking, but I have very low expectations of my actual capabilities at the nuts and bolts. I am a [failed] writer and indie movie producer, but I want to be 'part of your world'. I have been trying to self-educate, but it is not simple with no mentor. I see uni as a chance to be around people who are smarter than me, as well as to make time to write the novels and short stories that make me happy. So I am going to start in CS and Philosophy, audit engineering and IR courses, then switch majors to whatever seems appropriate. I am a bit older than your typical freshperson, and I am building a trajectory for post-grad, but really I am button-mashing like it's Street Fighter. Any advice is appreciated; I will look into the maths degrees.


I heartily agree with this sentiment. I have a friend who's a few years older than I am. He majored in CS, and told me in retrospect that he wished he had majored in mathematics instead. I took his advice, and, instead of doing a CS major + a few math classes, I did a math major + a few CS classes. I am now very glad that I did so. (For context, my friend and I are both data analysts now.)


My comment comes from my experience doing an Honors CS / Honors Math double major. After completing Honors Calculus I/II and Honors Linear Algebra I/II in my first year, I felt about 2-3 years ahead in my knowledge of mathematics and the associated rigor. This was especially evident when I was taking combined CS/EE courses and seeing students struggle with concepts that I thought were basic (infimums, supremums, etc.) but which apparently are not covered over the course of two years in the regular streams.


> Use a pre-existing cryptography package that has been written properly instead.

Okay, name one that has been written properly that the average web developer is going to encounter.

I'm going to focus on PHP because that's what I know best, it has an enormous market share, and I'm not a fan of blowing smoke (which I would be if I tried to speak to, e.g. Perl development).

PGP? OTR? Axolotl? These aren't part of your standard dev environment.

Mcrypt? Hasn't been touched since 2007. There is at least a patch for a bug in the 64-bit implementation of CAST-256 that has been sitting, unmerged, for six years.

Libsodium is the best we have, and you need PECL access to install it.

OpenSSL is labyrinthine. http://stackoverflow.com/a/29331937/2224584

It's still messy, despite the past year of attention from the security industry, but it's getting better.

This library offloads its underlying cryptography to OpenSSL because it's the best we got:

https://github.com/defuse/php-encryption/

Maybe Python, Ruby, and Node developers have better options available to them. But 83% of the Internet runs PHP, and there a properly written cryptography package just isn't a trivial thing to come by.


Considering that the original comment never specified web developers, you seem to be responding to an argument that wasn't made.


His point does bear consideration - the expert-written libraries are only expert-usable. You can still badly mess up encryption even if your basic building blocks are written perfectly. It's not just the algorithms that need to be expertly written; the way they're combined also needs to be expertly written, and that latter part doesn't exist. OpenSSL allows you to shoot yourself in the foot pretty easily, like choosing very weak ciphers, calculating the MAC of the plaintext, etc.

OpenSSL is still too low-level for the generic developer. You want things like HTTPS, using which is still too complicated for most people (hello certificate management), but you can get it right more easily by following an online tutorial. Basically anything that is more complicated than "open socket to destination, here's my certificate" is doomed to be misused by non-experts. Oh, and it better have strong default settings so it won't pick RC4 as the cipher or something.
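The "calculating the MAC of the plaintext" foot-gun is exactly what the linked article's doom principle is about: if you MAC the plaintext, you must decrypt before you can verify. A minimal encrypt-then-MAC sketch in stdlib Python (the SHA-256 counter-mode keystream is a toy stand-in for a real cipher; in practice you'd use a vetted AEAD like AES-GCM):

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 -- a stand-in, not a real cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # Encrypt-then-MAC: the tag covers the ciphertext, never the plaintext.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    # Verify in constant time *before* doing anything else with the message.
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Because verification happens before decryption, a tampered message is rejected without ever feeding attacker-controlled ciphertext to the decryption path.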


> you seem to be responding to an argument that wasn't made.

I seem to be, but I'm not.

> I think there's a more fundamental cryptographic principle. Don't implement cryptography, unless you are an absolute top expert. Even then, think twice, and get another absolute top cryptography expert to check your working. Use a pre-existing cryptography package that has been written properly instead.

This is written towards ALL developers. In the scope of web developers that I reside, this is my rebuttal.

Some crypto experts are quick to demand people use something that isn't necessarily available to the developer. Then the language elitists tell them to switch to their preferred language instead of improving what the developers already know. End result? People just use insecure or badly designed crypto libraries.

We need better libraries. Someone has to write them. Or just make libsodium the standard. I'd be okay with that.


Where did the 83% number come from?


Rectal-Numerical Generation...


Um, no. http://w3techs.com/technologies/overview/programming_languag...

Currently 82%, last time I checked it was 83%.


Looks legit. My apologies. I was unsure of how they would even go about determining that, but after reading their methodology and limitations, it seems that is a somewhat representative number.


It looks like they count by domains. Around 80% sounds right for PHP by that measure.


It slightly decreases the validity of the page when it states that ASP.NET is a programming language.


They even claim that PHP is a language..

Seriously, just assume they mean technology stack or something.


Your comment has value on re-use capability alone. Thank you for the lel.


This article is clearly written for the experts who are implementing cryptography.


On its face I guess it is. But really, this is just another demonstration of the maxim "Don't do work for untrusted parties". Yes, this applies to cryptography, but also to network protocol design, and even software design in general.

Recursive DNS lookup DoS attacks? You are doing work for someone you don't know (random UDP packets).

The recent NTP DDoS amplification issues? Doing work for someone you don't know (again, unauthenticated UDP packets triggering craploads of work).

Padding oracle attacks in crypto? Doing work for someone you don't know, and leaking data based on when something fails.

IP Source routing? Doing work for an untrusted party!

The moral of this story is verify what you have before you work on it. It all comes down to validating user input and user source.


Even experts can go terribly wrong; there are known embarrassing attacks, found by some bright 15-year-old students, that invalidated algorithms considered rock-solid for 10 years. Would you prevent those bright students from working on better ways to do crypto because they aren't "top experts" yet? Most likely they will just run away in disgust and park their capabilities in a less hostile field.

Celebritism doesn't really work in science; it only prevents progress.


1) Experts in all fields are always wrong sometimes. The nonexperts are wrong more frequently on average and people are less interested in proving them wrong.

2) Saying that implementing your own cryptography usually implies a production environment. I think it's generally assumed that nobody cares what people do with their own time/personal projects.

3) Trite sound bites don't work in science either, and expert -/-> celebrity.


Ad 2) most groundbreaking projects you know originated as messy ad-hoc personal projects, not in production-sanitized environments (look even at GPG, embarrassingly for the crypto community, with one almost-bankrupt developer). Cryptology/cryptography is an art: someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down and points to obvious flaws that could be fixed in minutes by someone experienced while keeping the new idea intact. Crypto requires such an enormous amount of talent that it is bright individuals, not companies, that make things move there, and quite often the more people involved, the worse the results.


> someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down

I'm sorry, but this is essentially never the case. This is no different than in other fields, for instance math or physics, where complete novices come in every day believing they've had a completely novel idea that will revolutionize the field. 999,999 times out of a million they haven't, and in the one remaining case they've come up with a solution in search of a problem.

"Oh, you've come up with a new cipher? Congratulations. Assuming it is secure, why should we use it ? Is it faster than existing ones? Simpler and more likely to be implemented correctly? Resistant to timing attacks? Resistant to CPU power analysis? Resistant to differential cryptanalysis? Suitable for low-CPU and low-memory embedded devices? Oh, none of these things? Gee, how interesting."

I'm reminded of http://www.scottaaronson.com/blog/?p=304


> Cryptology/cryptography is an art: someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down and points to obvious flaws that could be fixed in minutes by someone experienced while keeping the new idea intact.

Crypto is an environment where a single mistake can get people killed. The stakes are very high. We're not talking about a slight rendering error in CSS here. This is not an appropriate place to be universally warm, fuzzy, encouraging, and forgiving of mistakes. This is incredibly serious stuff that must be treated appropriately seriously - and everyone attempting to touch the field needs to understand that.

In addition, stouset is right. The frequency with which apparently novel ideas are actually novel is much, much, much smaller than a naive guess would lead one to expect. I've watched people attempt to introduce ideas that strike them as novel, only to discover that they're just creating exploitable weaknesses, right here on Hacker News.


What constitutes a "top expert"? I'd want the people writing cryptography software to have made mistakes and learned from them. That's how the software gets better, no? Of course, there's a theoretical minimum to be familiar with.


What kind of "experts"? Like NIST experts, or experts at Microsoft? This is a nonsensical meme in cryptography. It all comes down to the value at risk: at some price point you can hire people, i.e. invest the resources to get it right. The reality is that many security libraries and principles are ancient and infrequently updated, and that much of the research in the field is academic and useless. I think it's much better to think in terms of risks. Good security usually doesn't provide more value than bad security, until there is a breach.


Yeah, you got it, the people at NIST and M$ are going to do better than you.

'Academic and useless' research huh? Go read DJB's website some day, it is far from inapplicable. All the research that publicly disclosed HUGE holes in SSL? I think it's incredibly important.

As to good security being of little value, maybe ask Sony and Adobe (and even NSA) if they wished they had better security.


This is a common idea and it's flawed.

Let KC be a well-known and implemented cipher and CC a custom cipher written by you. Instead of communicating like this:

A -> KC-encrypt -> transport -> KC-decrypt -> B

You can use:

A -> KC-encrypt -> CC-encrypt -> transport -> CC-decrypt -> KC-decrypt -> B

The advantage is that it breaks automated cryptanalysis in case KC is broken.

A better idea than the one you stated is: don't rely solely on custom cryptography.
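The cascade above can be sketched as a toy (an illustration of the construct, not an endorsement). Here KC is a stdlib-only SHA-256 counter-mode keystream standing in for a well-known cipher, and CC is a deliberately trivial bijection; both are made-up for demonstration:

```python
import hashlib
import os

def kc_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Stand-in for the well-known cipher KC: SHA-256 counter-mode keystream XOR.
    ks, i = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

kc_decrypt = kc_encrypt  # XOR stream "ciphers" are their own inverse

def _swap(b: int) -> int:
    # Swap the two nibbles of a byte.
    return ((b << 4) | (b >> 4)) & 0xFF

def cc_encrypt(data: bytes) -> bytes:
    # Trivial custom cipher CC: swap each byte's nibbles, then XOR a constant.
    return bytes(_swap(b) ^ 0x5A for b in data)

def cc_decrypt(data: bytes) -> bytes:
    # Inverse of cc_encrypt: undo the XOR, then undo the nibble swap.
    return bytes(_swap(b ^ 0x5A) for b in data)

# A -> KC-encrypt -> CC-encrypt -> transport -> CC-decrypt -> KC-decrypt -> B
key, nonce = os.urandom(32), os.urandom(16)
msg = b"attack at dawn"
wire = cc_encrypt(kc_encrypt(key, nonce, msg))
assert kc_decrypt(key, nonce, cc_decrypt(wire)) == msg
```

Note that CC here is keyless and operates only on KC's output, never on the plaintext or KC's key; as the replies below point out, key independence is exactly the caveat that makes or breaks this construct.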


Technically, you're correct. In the real world, you're not.

This mythical protocol just encrypts a blob and sends it over the wire. But real protocols involving cryptography are significantly more complicated, because they often need to authenticate other parties, cryptographically bind all the messages in a session, preserve forward secrecy, preserve anonymity of one of the communicating parties, etc.

Now the problem isn't that you are simply applying your own cipher to the output of a strong cipher. It's that you've now got to implement a damn protocol, because nothing in the world supports your cipher. And that implementation is inevitably going to have more exploitable flaws than a well-maintained open-source project that doesn't implement your bogocipher.

Cryptography is hard, and it's not just the primitives that are ripe for gotchas. Combining primitives, implementing primitives, designing protocols, implementing protocols, and generally anything involving touching something anywhere in the entire stack is fraught with danger. As a rule, the more crypto-related lines of code you write, the more likely it is you have introduced weaknesses rather than added strength.


Technically correct is the best kind of correct.

I never suggested my proposal for public protocols for communication between two random parties. I first of all wanted to show that the root of this thread is wrong.

I'd suggest it maybe for highly confidential communication that warrants the trouble.


Technically correct is usually the least useful kind of correct.

The root of this thread is extremely sound advice. Don't touch your own crypto unless you're an expert. If you're at the point where your security requirements are so sensitive that you can't possibly risk a break in an underlying well-known cipher, you should hire a real cryptographer and not dick around with inventing your own laughably-broken ciphers.

And once you have that cryptographer under your employ, you can feel free to break the rule of not rolling your own crypto. Although that cryptographer is infinitely more likely to simply compose well-known strong ciphers over deploying their own unpublished ones.


So I think it comes down to whether the root

> I think there's a more fundamental cryptographic principle. Don't implement cryptography, unless you are an absolute top expert.

is sound advice.

I disagree with this advice strongly. Tinkering around with cryptography can teach you a lot. Especially if you are not a top expert.

Furthermore, I described a relatively simple protocol that monotonically increases the security of a communication by applying a custom cipher. I didn't add the restriction that CC must not know anything about KC's key because I thought that would be clear.

Interestingly, with my scheme you get security through obscurity on top of the previous security even when CC is trivial, like swapping the nibbles in each byte and then XORing a constant. This makes untargeted mass surveillance in the case of a broken KC a lot harder. Depending on the CC, it's relatively easy to make it totally impractical.

I think my scheme is interesting because it's unintuitive that it adds any security.


Nobody is arguing against tinkering with cryptography as a mechanism for learning. This advice is meant to stop people from writing homebrew encryption for real-world, production systems.

You described a relatively simple protocol that, at worst, does not decrease security in a theoretical sense with at least one critically-important and unmentioned caveat (don't use the same or related keys for both ciphers). In a practical sense, your simple protocol is likely a security disaster.

In the absolute best case, it really only defends against a break in the underlying cipher that is so thoroughly devastating that ciphertexts can be decrypted essentially for free. Even DES hasn't been broken this badly, and it's been considered broken for decades. The reason this is the case is that such a simple transform is only useful against an entity that is automatically decrypting ciphertexts on the wire en masse, and who isn't looking for your data specifically. Against a targeted adversary who can "merely" break AES with substantial effort, the additional cost such a transform would impose is completely negligible. In your example of a transform on top of a TLS connection, it would be completely obvious through packet analysis that the protocol was TLS, and your transform would be trivially understood and reversed after watching a few handshakes.


> I described a relatively simple protocol that monotonically increases the security of a communication

Careful. Encryption is decryption, remember. Do any of the rounds of CC undo any of the rounds of KC? Can you verify that bits that were correlated in PT and uncorrelated in KC are still uncorrelated in KC + CC? Does CC change the entropy content of the resulting message? (If it decreases it you have a problem, and most popular ciphers end up with damn near 8 bits per byte in the ciphertext.)


By definition, if CC doesn't know the key and doesn't search the key space of KC, and it reduces the security of KC, then CC breaks KC.


That makes sense intuitively, but you can't approach cryptography intuitively. There are many cases where the intuitive answer is "well, the worst case is that it's just as good as without our custom wrapper... so let's do it." But under cryptographic analysis it is revealed that the custom wrapper renders whatever other good, accepted cryptography that is in play completely useless.


I need to see a citation for this. Something like the parent article exposing how things get mistakenly broken by applying this concept.

It's important to be paranoid about implementing your own crypto, true. But it's not good to take this to dogmatic extremes, i.e. "never trust any code you write that has crypto in the name or you break everything".

I would never write and deploy a custom cypher. But if I did, and my custom cipher CC, when applied to ciphertext encrypted by AES, somehow leaked any information about the plaintext, then I have accidentally made a cryptanalysis breakthrough that has thwarted hundreds of professional crypto people.

This is unlikely.


In madez's case, his construct is provably secure, with some caveats. Specifically, the first cipher must be a secure cipher, and the keys for both ciphers must be independent. If the latter is not the case, the custom cipher could very easily leak bits of the key, which would allow the original cipher to be broken.

But more importantly, it's the implementation of this construct that's likely to introduce weaknesses. As I posted in my direct reply to him, the problem is that no real world protocol is so simple, and he will now have to implement this. See OpenSSL for what can go wrong when implementing ciphers; and that's a highly-used, generally stable project that's had the benefit of multiple eyeballs for years.


Exactly. And if the first cipher is a secure cipher, with no obvious sign it'll be broken any time soon (like AES), why not just use it instead of wrapping it over some homebrew solution?


Because you get obscurity, which makes automated cryptanalysis of your communication a lot harder (if not impossible), which in turn makes untargeted surveillance more expensive.


That seems ridiculous for 2 reasons.

First, ciphertext is ciphertext. They'd probably be able to determine it's a block cipher and what the block size is, but that's about it. They're not going to be able to tell AES ciphertext from 3DES ciphertext if they're both set to use the same block size. There's no point "obscuring" the fact that you're using AES.

Second, if you're assuming the NSA seriously has the capability to break AES within the next 10 years, then obscuring it with a homebrew cipher is pointless, because they've probably already used their AES decryption capability to find many other ways into your system (through a hosting provider, ISP, domain registrar, DNS provider, admin accounts of coworkers...).

Even quantum algorithms are only able to halve the bits of the key search space (so 2^256 will become 2^128, which is still mostly infeasible even against a government adversary).

I don't think your suggestion would decrease security, if implemented perfectly (which is unlikely), even if the cipher is insecure and vulnerable to cryptanalysis. But I don't think it would increase security either. And since it wouldn't increase it, yet may decrease it in practice due to an implementation flaw which compromises both ciphers, why bother?


> First, ciphertext is ciphertext. They'd probably be able to determine it's a block cipher and what the block size is, but that's about it. They're not going to be able to tell AES ciphertext from 3DES ciphertext if they're both set to use the same block size. There's no point "obscuring" the fact that you're using AES.

If you can break a set of ciphers, it's trivial to determine which was used for a connection, if one was used; you just try each.

The whole point is to hide which is the real cipher to make it more costly to attack you. Making it more costly for the attacker is what we want.

> Second, (...)

This is no argument for or against the scheme I presented.

> Even quantum algorithms are only able to halve the bits of the key search space (so 2^256 will become 2^128, which is still mostly infeasible even against a government adversary).

Breaking a cipher means not having to search the whole key space. That's the definition of a broken cipher.

> I don't think your suggestion would decrease security, if implemented perfectly (which is unlikely), even if the cipher is insecure and vulnerable to cryptanalysis. But I don't think it would increase security either. And since it wouldn't increase it, yet may decrease it in practice due to an implementation flaw which compromises both ciphers, why bother?

We disagree on the practicality of implementing it without adding another vulnerability to your system.


The plausibility of a vulnerability in AES which reduces the search space by more than half of the key bits is essentially 0.


> First, ciphertext is ciphertext. They'd probably be able to determine it's a block cipher and what the block size is, but that's about it.

You can tell mode somewhat reliably (I've hit 75% confidence or so) by the entropy content of the ciphertext, and I've never done a Markov chain longer than a native word so I'd assume that gets better the longer your chain is. I would also assume people much smarter than me have compiled heuristics that are more useful than just determining the mode.
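The kind of measurement being described can be sketched with plain Shannon entropy over byte frequencies (the 75% mode-detection figure above would need real heuristics on top of this; the "structured ciphertext" here is a made-up example):

```python
import collections
import math
import os

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Ciphertext from a good cipher should look uniform: close to 8 bits/byte.
random_looking = os.urandom(4096)
# Structured "ciphertext" (think ECB over repetitive plaintext) has low entropy.
structured = bytes(range(16)) * 256

print(round(byte_entropy(random_looking), 2))  # close to 8.0
print(round(byte_entropy(structured), 2))      # 4.0: only 16 distinct byte values
```

A large gap from 8 bits/byte is a cheap first signal that the "ciphertext" has exploitable structure.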


I believe it is considered a significant break in a block cipher if you can distinguish messages encrypted via that cipher against random bits constructed in a similar way (i.e. constructing an adversary such that it has a non-negligible advantage in the PRF security game against the family of functions for the block cipher).


> I need to see a citation for this

You're looking at it the wrong way. You need to demonstrate for yourself that nothing in your cipher undoes the last round of the underlying cipher.


Rendering KC useless by using CC as I described is equivalent to breaking KC. There is no other way of messing with KC when it is used the way I said. And you won't break KC by chance with your CC. Many people respond like you did; they simply didn't think about what I said and just repeat old ideas.

I hate people.


This is not a true statement. CC may leak information about the plaintext in ways that KC can't handle, even without breaking it. For example, applying CC might deterministically affect the message length.

One of my favorite cryptographic attacks ever was of this form, known as the CRIME attack [1]. It was a "partial plaintext" attack, where the "CC" in question was plain old gzip. Basically the attackers controlled a small part of the plaintext (for example, a field in an HTTPS request they could send with CSRF) and they were able to use known properties of the CC to get some information to leak into the message length -- namely, if the message fragment they controlled appeared elsewhere in the request, the gzip encoding would be shorter, and hence the encrypted request would be shorter. This is something that happens even with a "perfect" KC that no one could break.

[1]: http://en.wikipedia.org/wiki/CRIME

This is one of the defining characteristics of cryptography to me -- there are no perfect ciphers, there are only good systems. Littering extra cryptography around nearly always decreases security. The most secure system is the simplest system that works.
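The CRIME-style length leak is easy to sketch with stdlib `zlib` standing in for the TLS-level compression (the cookie value and URL are made up; a perfect stream cipher preserves length, so the compressed size is what the attacker sees on the wire):

```python
import zlib

# Hypothetical secret inside the request that the attacker wants to recover.
SECRET = "sessionid=7a91"

def observed_length(attacker_guess: str) -> int:
    # compress-then-encrypt: length on the wire equals the compressed length.
    request = f"GET /?q={attacker_guess} HTTP/1.1\r\nCookie: {SECRET}\r\n"
    return len(zlib.compress(request.encode()))

# A guess sharing a longer prefix with the secret compresses better (the
# DEFLATE back-reference covers more bytes), so the ciphertext is shorter.
wrong = observed_length("sessionid=xq9")
right = observed_length("sessionid=7a9")
print(right < wrong)  # the length reveals which guess is closer
```

Repeating this byte by byte recovers the secret, even though the cipher itself is never broken.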


What you (and the wiki link) are describing is plaintext -> CC (gzip) -> KC.

This is different than the argument you're replying to which is plaintext -> KC -> CC.


Read this thread again. Your statement is not correct since CC never gets to see the plaintext. CC is just a bijective function on (blocksize of KC) bits.


Did you edit your post? I'm not the only one who misread it. If KC is applied first to the plaintext, then assuming you didn't flub anything in implementation the combined scheme won't be less secure than KC, I agree. Applying CC effectively becomes security through obscurity, which is admittedly sometimes helpful.


No, sirclueless, I didn't change the scheme I wrote about.

Yes, it adds security through obscurity without losing security through well-known recommended ciphers.


If you use the same key for KC and CC, CC can (and hell, in all likelihood probably does) leak bits of the key.

So yes, there is a way of breaking this construct when used the way you said.


This is almost always less secure than KC alone, when KC is a well-known secure cipher.

A simple example would be a CC that hex-encodes the plaintext before applying some transformation on the data (before you laugh, this exists in enterprise systems today). This means CC would effectively expand the underlying data 200% (0xA1 -> 0x4131) and substantially degrade the security of a block-based cipher (32-bit block -> effectively 16-bits).

Here's a link to a practical case of unsafe composition (in hashing): http://blog.ircmaxell.com/2015/03/security-issue-combining-b...

A bit more detail on randomly composing encryption algorithms: http://blog.cryptographyengineering.com/2012/02/multiple-enc...

edit: It would be better to compose CC(KC(P)), so CC can't leak any information about P or degrade KC. Any reluctance to show the world the output of CC should suggest the low practical value of CC.
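The expansion is easy to demonstrate (assuming uppercase ASCII-hex encoding, which matches the 0xA1 -> 0x4131 example above):

```python
# One plaintext byte becomes two ASCII-hex bytes before encryption.
plaintext = bytes([0xA1])
encoded = plaintext.hex().upper().encode()  # b'A1' == bytes([0x41, 0x31])

print(len(plaintext), len(encoded))  # 1 2 -- a 200% expansion
# So each n-bit cipher block now carries only n/2 bits of actual plaintext,
# and the ciphertext alphabet shrinks to the 16 hex characters.
assert encoded == bytes([0x41, 0x31])
```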


Why does expanding the underlying data that way "substantially degrade the security of a block-based cipher"? Can you think of a generic way that an adversary can gain an advantage against the block cipher because of this transformation/restriction of the plaintext?

(I appreciate the link to Matthew Green's post; I don't think his analysis of the effect of composition is as pessimistic as yours.)


CC doesn't get to see the plaintext in the construct I explained.

CC is just a bijective function on (blocksize of KC) bits.

The author of the blog post mentioned ignores the fact that cascading ciphers like I described breaks automated cryptanalysis, which is a necessity for mass surveillance in a world with widespread cryptography.


I think cascading ciphers might be a good idea, but if you're following Kerckhoffs's principle: if your system achieves significant use and there is a cryptographic weakness, you should assume an adversary will exploit it even if it's different from the weaknesses of other systems.

I guess there's an economic argument to be made that millions or billions of pairs of communicating parties could develop their own individual means of at least obfuscating their communications so that nobody could expect to find searchable plaintext after decryption. But the economic effort that the pairs of parties invested in creating their obfuscations may have been wasted because if the same level of time or effort had instead been spent to improve mainstream cryptography, it might have yielded major qualitative security improvements for the "official" stuff.


I'm well aware that it doesn't stop a devoted attacker. But it needs a devoted attacker. That is the important point, which you also recognized (besides many others). Even a devoted attacker has it harder, because cracking a code with a known algorithm and unknown key is easier than cracking a code with an unknown algorithm and unknown key. Even more so if the unknown algorithm is not available.

I think your argument that the effort put in custom ciphers maybe should be put in mainstream ciphers instead is interesting.

Tinkering around with custom ciphers can teach you a lot. Maybe you don't have the knowledge, or any idea how to attack or improve mainstream ciphers.

However, if you can make a difference for mainstream ciphers, of course that's what we need.


Just to put the issue in an extreme perspective, suppose that the best mathematically possible attacks against AES-256 in some setting only reduce the attacker's work by the same factor as the best mathematically possible attacks against AES-128. (There's no proof of this now, but it's conceivable that it's true.) In that case, the decision to use AES-256 in a particular application instead of AES-128 improves security against cryptanalysis of AES by a factor of 2¹²⁸ in the attacker's work. (Maybe cryptanalysis of AES isn't actually the weak point anyway, but let's set that aside, because that's what inventing new ciphers tries to address.)

If this hypothesis is true, the work that Daemen and Rijmen did to invent AES-256 and the work that a particular implementer did to implement it will produce an almost inconceivably vast security benefit against this particular threat.

The reason this is important is the kind of disproportionality between the effort of Daemen and Rijmen and the AES reviewers and implementers, and the magnitude of the resulting security benefit. They might have spent a total of 500 person-years on making AES-256 work well, and received a security improvement of 340 trillion trillion trillion-fold relative to whatever the security of AES-128 is. Whereas a homegrown cipher that isn't very mathematically sound might be developed with 1 person-year of effort and end up making an attacker do, let's say, 100 trillion operations. In my hypothesis, Daemen and Rijmen and other folks then got somewhere between a trillion trillion trillion and a trillion trillion trillion trillion trillion trillion trillion times better security return on their effort.
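(For concreteness, the improvement factor under my hypothesis is just 2¹²⁸, which a quick computation confirms is on the order of 10³⁸:)

```python
# The hypothetical AES-256 vs. AES-128 improvement factor from above:
ratio = 2 ** 128
print(f"{ratio:.3e}")  # about 3.4e38, i.e. 340 trillion trillion trillion
```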

Now you might reasonably point out that if you use a standard, known cipher, the attacker's costs for a direct brute force attack are purely computational and don't involve research and development, or attempting to suborn or hack your correspondents or colleagues to discover the principles of operation or your system. Whereas if you do have a homegrown mechanism in play, an attacker incurs these other kinds of novel and sort of one-off costs, notably including making other human beings think about stuff more.

The point that I've taken from a lot of the security experts who've talked about this, though, is that the scaling benefits are the important factor here, again especially if you want to make a system that many people could use for a long time. When the limiting factor is computer time, which is really only likely to be true for systems created, refined, and reviewed by experts, you can sometimes get the really absurd security ratios that are hard to even think about, and require your adversary to spend more money than exists in the world, build more computers than can be made from all the silicon on Earth, consume more energy than the Sun outputs, etc., etc. When the limiting factor is human reasoning, you might say "but that would require human cryptographers to think about my system for 1 year!". But if that's so, that may actually happen, and in any case you can't easily get the order of magnitude of the costs and resources required up to "inhuman" levels.

The point I'd take from your idea is that it could be valuable to try to make adversaries incur diverse costs in attacking your system, especially if you don't know what capabilities and resources your adversaries do and don't have. This is kind of akin to what's happened with key derivation, where people have proposed KDFs that are very CPU-intensive and also KDFs that are very memory-intensive, and if there are other sorts of resources that you could make an attacker burn, there are probably people trying to invent KDFs that burn those, too. It's not clear to me that there's a genuinely scalable way to require human analytical effort as one of those resources, but if there is, that could be a useful property for communications systems to have for defense in depth. But making up a new cipher by hand for every system is probably not going to provide that property very reliably, or be a very effective use of resources, again when other uses of resources can improve security to a staggering extent.


You're right; I misread your construct as KC(CC(P)). Your construct [CC(KC(P))] shouldn't be weaker than KC, unless information or resources are shared by CC and KC (such as keys). Shared information or resources may introduce side channel attacks. Per the previous link, this is likely only practicable on entirely separate machines.

Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break. Any such at-scale effort would already include attacks against typical bad-custom-crypto (because they're extremely easy and common), in addition to the AES attacks. Cascading ciphers, particularly weak ones, will not stop the NSA.

edit: addressed information leak if CC & KC share keys/resources


Thanks for agreeing that we don't lose security when using my construct.

> Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break.

Yes, but it would involve highly paid cryptanalysts. My first comment was first of all meant to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us.


> Thanks for agreeing that we don't lose security when using my construct.

I don't agree. The construct may not degrade security under several caveats. Most implementations are extremely likely to share resources, which will introduce weaknesses. I'd wager those weaknesses would degrade security much more than the composition would enhance it, but it'd depend on the exact situation.

> Yes, but it would involve highly paid cryptanalysts. My proposal is first of all to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us. That's what cryptography is all about: making their life harder while not so much for us.

My exact point was that those cryptographers would already need to develop generic attacks for all the non-standard (read: non-secure) cryptosystems out there. Composing a homegrown cipher with a peer-reviewed secure cipher will not make their lives harder. It will make maintaining and improving the system harder. The net result is overwhelmingly likely to be detrimental.


And it has the added benefit from the NSA's point of view that your connection/data is precisely fingerprinted as 'homebrew-crypto-1629: refer to analysis cell 2865JQ'.

Then the computer in sub-basement 19 goes 'ding!' and sends an automated SWAT team to your house.


To fingerprint the connection/data by the used cipher they would need to break KC. If they are able to break KC they can fingerprint you also when you only use KC.


You have said multiple times that the outer wrapper was CC, and the offline, inner wrapper was KC. All they need to do is infer it is CC output, either through online attacks, anomalies in its statistical distribution, block size, etc. Also, certain modes like ECB have watermarking attacks, which inherently reveal that ECB was used.
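The ECB watermarking point is easy to demonstrate: identical plaintext blocks produce identical ciphertext blocks, so repeats in the ciphertext give the mode away. A sketch, using a hypothetical hash-based stand-in for a real ECB-mode cipher:

```python
import hashlib

def fake_ecb(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    # Hypothetical ECB stand-in: each block is encrypted independently
    # and deterministically (via a keyed hash here, just for illustration).
    out = b""
    for i in range(0, len(plaintext), block):
        out += hashlib.sha256(key + plaintext[i:i + block]).digest()[:block]
    return out

def looks_like_ecb(ciphertext: bytes, block: int = 16) -> bool:
    # Repeated ciphertext blocks are the classic ECB giveaway.
    blocks = [ciphertext[i:i + block] for i in range(0, len(ciphertext), block)]
    return len(set(blocks)) < len(blocks)
```

Any repetitive plaintext (headers, padding, bitmap images) lights this detector up, with no knowledge of the key.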

RC4 is a common stream cipher which can definitely be detected. In contrast, AES seems much harder to distinguish.

It is called a Distinguishing attack. See: https://eprint.iacr.org/2013/176.pdf http://www.security.iitk.ac.in/hack.in/2009/repository/bimal... http://elligator.cr.yp.to/elligator-20130828.pdf http://cr.yp.to/streamciphers/mag/053.pdf http://christina-boura.info/sites/default/files/KeyDifferenc...


The output of KC is not necessarily completely random. It can contain some metadata. Like "encrypted with KC4096bits, initialization vector is 490282348992489, length of ciphertext is 26728 bytes".

When you pass this through a bad cipher it may create a characteristic fingerprint.


I think you confuse cipher with protocol.

To make it clear: for me a cipher is a bijective function on arrays of a predefined length.


What do you propose doing with the initialization vectors then? Afaik, all commonly used encryption algorithms have those, for good reason too.


We don't agree on the effectiveness of security through obscurity.


Cascade Ciphers were out of scope. Moxie was addressing the order of operations for authentication and encryption.


Wait are you telling me there are special classes of inputs that are unsafe to encrypt using the standard algorithms?

This sounds scary and please tell me more. In particular what happens in the pathological case where the input consists of 32/64/128/256 bit blocks, and in each block all bits are zero except the last one which may be one or zero?


If you restrict the search space of the input, you also reduce the possible outputs, which makes it easier to narrow down the key. However, this is only really exploitable with very small inputs, with another channel of attack (e.g. the ones illustrated in the article), or if you know something specific about the input.

TL;DR: don't base64-encode without a reason, because it gives the attacker more bytes with fewer possible values to work with.
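To see the restriction concretely: base64 output spans at most 65 distinct byte values (64 alphabet characters plus the padding character), about 6 bits of the 8 in each byte:

```python
import base64
import math

data = bytes(range(256))        # plaintext bytes can take 256 values
encoded = base64.b64encode(data)

print(len(set(data)))           # 256 distinct input byte values
print(len(set(encoded)))        # at most 65 distinct output byte values
print(math.log2(64))            # 6.0 bits of entropy per base64 character
```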


Most of the responses seem to be talking about A -> Custom -> Strong -> Unstrong -> Uncustom instead of A -> Strong -> Custom -> Uncustom -> Unstrong.

Some potential problems with applying a custom transform to the ciphertext:

- The custom transform might have effects on cache that create opportunities for timing attacks

- The custom transform might have a buffer-overflow (just like any other excess code)

- The custom transform might support features exploitable in DOS attacks, like run-length encoding

- The custom transform might be slow, costing time and money

- Sometimes applying two transforms is equivalent to encrypting once with a single different key (a group structure; DES was famously proven not to have it, but a homegrown transform might). Composition can also create meet-in-the-middle attacks, as against double DES.

- If the custom transform is given the same key as the strong crypto, as a lazy programmer might do, it could leak the key

- Switching the ordering of the transforms is a deadly mistake, but easy to gloss over

- If the strong transform starts with an HMAC, as it should, that can be used to break the custom transform's key anyways (see the post we're commenting on)

Mostly those are small things. But the security benefit you get is also small.
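For instance, the meet-in-the-middle point can be sketched with a toy 8-bit cipher (purely illustrative; its key mixing is a bare XOR, which also makes the composition collapse, illustrating the "equivalent to a different key" bullet as well):

```python
def toy_enc(key: int, block: int) -> int:
    # Toy 8-bit "cipher", for illustration only: XOR the key, rotate left 3.
    x = (block ^ key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF

def toy_dec(key: int, block: int) -> int:
    x = ((block >> 3) | (block << 5)) & 0xFF
    return x ^ key

def meet_in_the_middle(plain: int, cipher: int):
    # Attack E_k2(E_k1(p)): tabulate the middle value forward for every k1,
    # then scan k2 backward -- roughly 2*256 operations instead of 256*256.
    forward = {toy_enc(k1, plain): k1 for k1 in range(256)}
    for k2 in range(256):
        mid = toy_dec(k2, cipher)
        if mid in forward:
            return forward[mid], k2
    return None

k1, k2 = 0x5A, 0xC3
c = toy_enc(k2, toy_enc(k1, 0x42))
r1, r2 = meet_in_the_middle(0x42, c)
assert toy_enc(r2, toy_enc(r1, 0x42)) == c  # a working key pair, found cheaply
```

Because the key enters via plain XOR, every candidate k1 can be matched by some k2, so double encryption here buys essentially nothing.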


> Most of the responses seem to be talking about A -> Custom -> Strong -> Unstrong -> Uncustom instead of A -> Strong -> Custom -> Uncustom -> Unstrong.

Yes :(

> Some potential problems with applying a custom transform to the ciphertext:

> - The custom transform might have effects on cache that create opportunities for timing attacks

> - The custom transform might have a buffer-overflow (just like any other excess code)

These are implementation details irrelevant to the point of my comment. You could apply the custom transform on another computer.

> - The custom transform might support features exploitable in DOS attacks, like run-length encoding

The custom transform ought to be a bijective function on (blocksize of Strong) bits.

> - The custom transform might be slow, costing time and money

It was never meant as a suggestion for wide-spread usage.

> - Sometimes applying two transforms is equivalent to encrypting once with a single different key. Composition can also create meet-in-the-middle attacks, as against double DES.

> - If the custom transform is given the same key as the strong crypto, as a lazy programmer might do, it could leak the key

The custom transform shouldn't know at all about anything of the strong cipher. Another implementation detail.

> - Switching the ordering of the transforms is a deadly mistake, but easy to gloss over

Another implementation detail.

> - If the strong transform starts with an HMAC, as it should, that can be used to break the custom transform's key anyways (see the post we're commenting on)

I just wanted to show that the common idea of "cryptography is black magic, you should never touch it, you can only do bad things" is wrong.

My scheme adds security through obscurity which may be worth the trouble.


> I just wanted to show that the common idea of "cryptography is black magic, you should never touch it, you can only do bad things" is wrong.

Per the other thread, this is overwhelmingly likely to decrease overall security without any practical benefit.

> My scheme adds security through obscurity which may be worth the trouble.

It adds potential side channels and likely no benefit over KC alone. You've sidelined many potential problems by focusing on "if the composition is properly implemented," but that's a huge problem: the likelihood of properly implementing the composition is vanishingly small. It is very likely to be improperly implemented and provide less security than KC alone (and it costs more!).


Okay, this is how I'd ad-hoc implement the scheme I described without the security problems you're commenting on. A and B know each other and exchange the custom cipher securely. Imagine they each have a Raspberry Pi which does nothing other than reverse the custom transform on the data of each IP packet received from the other side, and apply the custom transform to the data of each packet it sends to the other party. Now A and B can route their internet traffic through their Raspberry Pis and get security by obscurity for their communication on top of the usual security. Even if the custom transform is simple, it's overwhelmingly probable that no automated TLS-break tool will be able to break the custom-transformed TLS traffic to the other party.
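Concretely, the box on each side would just apply an involution-style byte transform to every packet payload. A hypothetical CC (swap the nibbles of each byte, then XOR a constant):

```python
def custom_transform(payload: bytes) -> bytes:
    # Hypothetical CC applied to each outgoing packet payload:
    # swap the nibbles of each byte, then XOR a fixed constant.
    return bytes((((b << 4) | (b >> 4)) & 0xFF) ^ 0xA5 for b in payload)

def custom_untransform(payload: bytes) -> bytes:
    # Applied to each incoming packet payload: undo the XOR, then
    # swap the nibbles back (nibble-swapping is its own inverse).
    return bytes(((x << 4) | (x >> 4)) & 0xFF for x in (b ^ 0xA5 for b in payload))
```

This offers zero cryptographic strength on its own; the claim is only that it sits outside what an automated tool expects.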


Were CC and KC both performed on the Pi? That introduces side-channel attacks.

Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.

Does the Pi include any remotely exploitable flaws? See before.

If we assume a perfect/non-exploitable CC/Pi, we may not degrade the security of KC. That key word, "may", is the problem professional cryptographers spend years analyzing, though. If we spent a month thinking about this, we might identify other requirements needed to avoid weakening KC. This is not recommended.

Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.

We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?


> Were CC and KC both performed on the Pi? That introduces side-channel attacks.

I specifically said the RPi does nothing else but CC.

> Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.

> Does the Pi include any remotely exploitable flaws? See before.

The interface from the PC to the RPi should of course have the same security as any internet-facing interface. Thus remote access to the RPi wouldn't pose a risk for using the protocol, besides the obvious risks that you always get in such a scenario.

> Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.

The system I described is not trivially defeated because defeating it implies defeating KC.

Did you read my comments?

> We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?

To break my cipher an attacker would need to solve _both_ problems.

My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical. Isn't that an interesting property for a protocol using a custom cipher?


> Thus the remote access to the RPi wouldn't pose a risk for using the protocol beside the obvious risks that you always get in such a scenario.

That risk didn't exist without the custom cipher construct (KC-system is still independently vulnerable as it was without the CC-system). This means the construct has increased the attack surface, potentially critically.

> The system I described is not trivially defeated because defeating it implies defeating KC.

Your system introduces potential new vectors to defeat KC. If KC were not broken, this construct likely weakens KC. If KC were broken, this construct may provide some minor protection. Whether it provides sufficient additional protection to actually protect the data from an adversary capable of breaking KC is extremely unlikely. Given KC is a peer-reviewed secure ciphersuite and CC is not, the emphasis should be on keeping KC secure - not weakening it to introduce an untested (likely insecure) ciphersuite in a custom composition. This is doubly the case given significant and on-going real-world costs of implementing and maintaining this custom solution.

> My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical.

An adversary capable of automating AES decryption would already have automated weak-cryptosystem decryption. This adds nothing except cost, complexity, and faux security.


This protects only against an attacker who has broken AES so thoroughly that they can essentially surveil TLS traffic en masse at no cost.

To a targeted attacker, TLS is trivially identifiable through packet analysis. After a few handshakes, your transform (as mentioned elsewhere, reverse some nybbles and XOR against a constant) will be fully understood and broken with likely less effort than it cost you to build in the first place.


Why not just use a different mature and trusted cipher on the relaying nodes?


That would give you security against different threats.

I chose a custom cipher for 2 reasons:

1. It gives some security even when assuming that all publicly known ciphers are broken.

2. It works specifically by using a custom cipher which is normally considered bad and thus makes the results unintuitive.

I think that makes it interesting.


Are you just attempting to argue the pedantic point that some theoretical subset of homebrew crypto applications may actually be secure? Because taken as practical advice your position requires a lot of awfully strong assumptions.


Every crypto is home-brewed - just maybe not in your home.

Everyone cooks just with water.

The scheme I talked about is not entirely homebrew. It consists of a mainstream cipher KC and a custom cipher CC, to unite the best of both worlds: the robustness of mainstream crypto with the obscurity of homebrew crypto.


> Every crypto is home-brewed - just maybe not in your home.

> Everyone cooks just with water.

The problem with applying this definition of "homegrown" is that it willfully ignores any distinction implied by the term and thus renders it semantically meaningless. This is a form of straw man.

Regardless, even if we assume that all crypto, at the time of writing, is equally likely to be safe, I posit that the security and cost-of-implementation benefits achieved by leveraging published techniques far outweigh the benefit of having an obscure fingerprint. This is because previously published methods have the advantage of selection and iterative hardening based on peer review.

Furthermore, I posit that even if you wrap your data in a matryoshka doll of encryption, each of these layers will be more secure when implemented using proven techniques.

For the same reasons I'd also argue that even if you were to develop your own cipher you would benefit more by publishing it than by keeping it a secret.

Another way to think about it is that "an attacker reading the documentation" should not be a failure mode of well-implemented crypto.

Speculating even deeper on the subject, it occurs to me that in the face of a global adversary (whose automated cryptanalysis your proposal aims to thwart), displaying a unique fingerprint may actually be detrimental to the security of your data, as it may flag it specifically for deeper inspection and manual analysis.


So wrong. Your custom cipher could be leaking key material in the encrypted output, almost certainly leaks its key via timing channels, and could be leaking the known cipher's key via timing channels too (and this is assuming separate keys).

The CC decode stage will be subjected to every possible edge case to see if it will do something dumb, like the oracle attacks mentioned in the article. And it could also leak information about the KC decode via timing attacks.

If this was for offline file encryption, where you do the steps separately, maybe okay. But not for online encryption where chosen plaintext is possible.


Read this thread.

CC is a bijective function on (blocksize of KC) bits.

CC knows nothing at all about KC.

Implementation can be done in another computer without security problem while you get obscurity.

Read this thread.


You don't mention separate computers until now. It's a big difference. I thought we were discussing implementing crypto, not some theoretical properties you achieve when conditions are just right.

I'm not saying trust the Windows implementation of AES, you can put it through some other cipher with a strong implementation as well, but don't just make up your own one.


The "Don't roll your own crypto" mantra is overly ambiguous and enforces the culture of elitism in information security. I hope that something displaces this regurgitated "advice" with something more helpful. I know Moxie's article is a few years old but it is a step in the right direction regardless.

--EDIT--

I don't think I was perfectly clear in my original comment. I attempted to expand on my position in a follow-up comment: https://news.ycombinator.com/edit?id=9376131


>enforces the culture of elitism in information security

The NSA is elite. Nobody has any business trusting you to outsmart them unless you are also elite.

This is like complaining that the fact that nobody will bet on you to defeat Garry Kasparov means there is a culture of elitism in chess. Of course there is. It's a game of skill. Other people are more skilled than you, by a lot, and you're probably going to lose.


> This is like complaining that the fact that nobody will bet on you to defeat Garry Kasparov means there is a culture of elitism in chess.

A closer analogy would be berating everyone who tries to play chess unless they studied it for years and they are professional and renowned chessographers. :)

Edit: If we're going to stretch the chess analogy further, it's important to note that the best chess players come from a very chess-positive culture, where many people grow up playing the game from a very young age.


>tries to play chess

Go ahead and play chess for fun, certainly, and you might become one of those grandmasters eventually, it's just that you shouldn't be betting people's privacy and potentially safety on the proposition that you already are one when you start.


The modern argument isn't "don't roll your own crypto". It's "don't build with crypto until you've learned how to break crypto". There are plenty of resources for people to learn how to do that online. Skipping that step and then shipping weak crypto is inexcusable.


I agree but you are the first person I've seen frame it in that way. I see people spew "don't roll your own crypto" often. Hopefully some variation of your "modern argument" becomes more prevalent in the near future. It is certainly more meaningful.


Ok, and thank you for listening to me, but let me just add that in every instance I know of where someone has been berated for shipping crypto without having a background in it, those people also hadn't taken the time to learn how to break crypto.

So agree that the argument you're calling out has the potential to be harmful in the long run, but do not agree that it has harmed anyone we know of, and would add that it has --- to the extent it's warded people off things like Cryptocat --- helped end-users.


Don't over-value being able to break crypto, though. Yes, it'll get people to skip the unauthenticated CBC, but exploiting a padding oracle is no guarantee that you'll be able to write e.g. a proper secure boot system, a fast and secure implementation of a low-level primitive, or that you'll get forward-security right - and there are quite a few levels above that...


The thing about going through all of the http://cryptopals.com/ exercises is that it raises your overall awareness (through specific attacks) that this stuff is much more breakable than you thought. It is more enlightening than just learning how to protect against a specific attack.


What about using existing crypto libraries (such as OpenSSL) to add crypto to an existing project? I'm adding encryption to an open source backup tool I'm working on, planning on using AES from OpenSSL. Would it be enough to document exactly what I'm doing, and put out a call for an audit of the methods that are being used? I really want to make sure I get this right.


You are probably going to get it wrong, not because there's anything wrong with you, but because standard crypto libraries are too fine-grained. The primitives are too low-level, and you have to assemble sequences of calls in the right order and be on the watch for unexpected side effects.

I could tell you how I have gotten it wrong in the past, but there is no guarantee that I won't get it wrong again in a different way. So the audit idea has its merits, but you really don't want to rely on Linus's Law of eyeballs. That means knowledgeable auditors who charge actual money for their time.


Same as "rolling your own crypto"; in fact, that's generally, except in extreme cases, exactly what we're referring to when we talk about people rolling their own.


So what are my options then? I could put out an initial version that shows what I want to accomplish, then either have it reviewed heavily, or see if I can get someone who's more of an expert to give it a go based on those specifications. Here are the problems I have so far, where I know I'm out of my depth:

1) In order to keep the backup software's deduplication functionality (and to minimize the server trusting the client), I need to have the IV (initialization vector) be a computed value based on the file's contents (so that multiple copies of the same file on the clients will encrypt to the same contents going to the backend server). I know that _this will leak some data_ (i.e., you may not want an attacker to know that a given set of files have the same contents). So I plan on making this optional -- the user gets either dedup or better encryption.

2) To do number 1 above, I was planning on taking a hash of the file, encrypting that with the user's key, then using that as a predictable IV. You will still have a unique IV per unique file, but I don't know enough to see if this can leak any other data. It looks like RFC 5297 describes an approach similar to this, but I think it is for a different use case.

3) I need the backup server to know which version of an encryption key the client used. That is, if the client changes keys between backups, I need the server to instruct the client to do a full backup, not incremental. So I can either have the client provide a version number for the key (or if using a keyfile, use the datestamp of the file as a version string), or I can encrypt a given string using the key, and use a hash of the encrypted string as the key version number. (Note, in no case will I ever be storing a hash of the plaintext of the client's files on the backup server, as that too can leak information)

My apologies if the above makes any experts here cringe, but as I mentioned, my constraint is to have same-content files encrypt to the same target contents (for dedup purposes), although I will give the user the option to turn that off and use a random IV for better security (giving up dedup).
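A sketch of what I mean for number 2 (function and parameter names are just placeholders, not my actual implementation): hash the file first, then key that hash with HMAC, so the IV is deterministic per file content but unpredictable without the key.

```python
import hashlib
import hmac

def deterministic_iv(user_key: bytes, file_bytes: bytes) -> bytes:
    # Hash the file contents, then key that hash with the user's key.
    # Identical files yield identical IVs (preserving dedup), but the
    # IV reveals nothing about the file without the key.
    content_hash = hashlib.sha256(file_bytes).digest()
    return hmac.new(user_key, content_hash, hashlib.sha256).digest()[:16]
```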


I could not disagree with you more.

Maybe you could suggest what might be "more helpful", but I personally could not think of anything more useful than telling people not to do something that puts their customers' and their own critical information at risk.

Good crypto algorithms don't get stale or anything. The point is they're fundamentally difficult.

Perhaps you could give a single decent reason for rolling your own crypto algorithm, vs something as easy as 4096 bit RSA or something.


I certainly agree with you. I just think the "don't roll your own crypto" advice is overly ambiguous. Ironically, I think my original comment was ambiguous as well. Let me clarify. I'm not endorsing rolling your own cryptosystem (e.g. a replacement for RSA). Rather, I think the advice should often be paired with additional insight on what "rolling your own" means. When building some sort of software, not everyone (currently) has the luxury of a cryptographic library that handles everything painlessly.

For example, I think most would say that I'm not "rolling my own crypto" if I'm implementing some piece of functionality in my application leveraging the use of some API with "mac", "encrypt", and "decrypt" functions. There are still ways I can screw up using these functions, but I'm arguably not "rolling my own" crypto. So in this situation, the mantra is confusing at best.


Maybe; but handing a MAC to generalist developers is going to end in a timing attack, like http://rdist.root.org/2009/05/28/timing-attack-in-google-key.... (And Keyczar is not amateur hour!)


I don't know anyone who has ever taken "don't roll your own crypto" to mean "don't write things that use crypto".


> Perhaps you could give a single decent reason for rolling your own crypto algorithm, vs something as easy as 4096 bit RSA or something.

I would not characterize implementing RSA safely as an "easy" task.


Which is part of what's meant. Don't create your own crypto algorithm and don't write your own implementations of existing algorithms. There are so many gotchas that it would take a lot of effort to get something that's less buggy even than the much-maligned OpenSSL. Someone (I don't remember who) said that even typing the letters "RSA" is too close to rolling your own crypto.


Excuse my ignorance, but why aren't there options for salting / seeding / external RNG alternatives to crypto algorithms?


This.

The problem is that in this community you are witch-hunted if you talk about crypto and you are not a famous cryptographer.

People start with the assumption that what you are saying is wrong (this is a good thing). But then many don't have the competence to judge what you are saying (no problem. we all are ignorant in many subjects). And then, they react as if you were really wrong (this is a huge problem).


> People start with the assumption that what you are saying is wrong (this is a good thing). But then many don't have the competence to judge what you are saying (no problem. we all are ignorant in many subjects). And then, they react as if you were really wrong (this is a huge problem).

I've been dealing with crypto on a near-daily basis for the last 15 years. I'm well within the top percentile of understanding of the subject, from a theoretical and practical standpoint, having worked to break cryptographic protocols for most of my life. I wouldn't trust myself to design a crypto protocol for grocery lists, let alone anything important.

This isn't false modesty or elitism or ignorance. It's recognizing that even if every smart person I know looks at it and says it's fine, it probably isn't. Crypto isn't hard to get right -- that's actually not the argument that anyone will make -- but it's impossible to know if you did get it right or not.

You do not have the knowledge required to build safe cryptographic protocols. Neither does anyone else on this site.


I think to say we can't build safe cryptographic protocols is too strong. I'd say that it's very hard to design safe systems from the ground up. But it's still possible to realistically and practically work around certain threats without opening another vulnerability.


It's not realistic, because you will never consider every possible attack scenario. You may have considered timing attacks, but if you don't consider the way that the processor uses power when running through your algorithm, you may have just leaked key material. This sounds like an obscure case, right? Except that the Trezor bitcoin security appliance was just broken in that way last week.

Secure crypto comes from a lot of smart people repeatedly trying to destroy an algorithm and its implementation. It doesn't come from a super smart person building a cryptosystem; that's how we end up with DVD CSS.
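The data-dependent work that enables timing and power analysis is easy to see in the textbook square-and-multiply loop at the heart of RSA-style exponentiation. This sketch is functionally correct, yet leaks the exponent through its execution profile:

```python
# Naive left-to-right square-and-multiply. The extra multiply on 1-bits
# is data-dependent work: a timing or power trace of this loop can
# reveal the exponent bit by bit, even though the arithmetic is correct.
def modexp_naive(base: int, exp: int, mod: int) -> int:
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod      # square on every bit
        if bit == "1":
            result = (result * base) % mod    # multiply only on 1-bits
    return result
```

Hardened implementations use techniques like Montgomery ladders and blinding precisely so that the work done doesn't depend on secret bits; a test suite comparing outputs against `pow()` would never catch the difference.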


The attack scenarios you described are against an implementation whereas I said your statement regarding protocols was too strong. Having that in mind, our comments don't disagree.


The format of a discussion does not necessarily have to follow a strict pattern of point->counterpoint->point->counterpoint->win in order to be meaningful or useful to the participants and spectators.


It's really not that hard, huh? Do you have some evidence or solid reasoning? Given that this is a hard question to get objectivity on, do you think you have more experience with crypto than daeken? If no, you should probably update your beliefs to consider daeken's opinion, rather than repeat your own without substantiation.


> You do not have the knowledge required to build safe cryptographic protocols. Neither does anyone else on this site.

You'd probably be right on any other site; but I'm guessing there are a few people who do this for a living somewhere on HN.


> there are a few people who do this for a living somewhere on HN.

And Daeken would still be right.

You skipped over daeken's point, which would stand on HN or crypto.stackexchange.com where there are undoubtedly people more qualified to talk about crypto than HN.

With crypto, it's impossible to know whether you got it right. Read that again. It's impossible to know that you got it right.

If you get it really wrong, it'll be obvious to a competent cryptanalyst. But if you get it a tiny bit wrong (and this is computer security, so 'a tiny bit wrong' is likely bad enough to render your protocol unusable), it won't come out at first or second glance, and may even stand up to some fairly rigorous scrutiny.

Even RSA which has stood the test of time since 1977, is only considered 'safe' in so far as there's no algorithm better than brute force for factoring products of large primes - if someone came out with an algorithm tomorrow that trivialized factoring numbers, RSA would quickly get moved to the 'unsafe' list. And that's just on the crypto primitive.

There are all manner of other attacks against an implementation of a protocol to consider, too.
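To make the factoring dependence concrete: if an attacker can factor n, the private key falls out in a couple of lines, which is why a factoring breakthrough would kill RSA outright. A toy sketch (insecure demo-sized numbers, Python 3.8+ for `pow` with a negative exponent):

```python
# Toy RSA public key; p and q are what a factoring breakthrough
# would hand the attacker.
n, e = 3233, 17
p, q = 61, 53
assert p * q == n

phi = (p - 1) * (q - 1)     # Euler's totient of n
d = pow(e, -1, phi)         # recovered private exponent

# The recovered key decrypts correctly.
m = 42
assert pow(pow(m, e, n), d, n) == m
```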


> Even RSA which has stood the test of time since 1977, is only considered 'safe' in so far as there's no algorithm better than brute force for factoring products of large primes

https://en.wikipedia.org/wiki/General_number_field_sieve but most of your point still stands. RSA is edging slowly closer to the abyss.


That's actually the point, though. People that do this for a living will agree with me on this. They don't have the knowledge individually to make secure crypto. Secure crypto comes from lots of smart people and a lot of battle testing; the idea of one person making secure crypto protocols/implementations is completely ludicrous.


> culture of elitism

Reality is elitist. You either know how to do things or you don't, and if you don't, you fail. Sometimes you fail anyway. That's elitism.




