"Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break."
I found a similar thing while making my own puzzles: "It's easy to make a puzzle you can't solve, but it's hard to make a puzzle that's fun."
Well, that's not entirely the same thing, but they overlap, I guess.
I think it's worth drawing a distinction between the algos/maths and attempts at implementations. Otherwise we wouldn't have things like OCaml-TLS and others.
I implemented a simplified version of the referenced Vaudenay attack as part of Dan Boneh's Cryptography I course on Coursera. The course was very interesting, and also fun. I'm not ready to go out and implement my own cryptography, but knowing a bit about the subject makes me a more intelligent consumer of crypto libraries.
While warning people away from implementing their own cryptography we have to make sure we don't scare people off from the subject altogether. After all, absolute top experts have to start somewhere.
Okay, name one that has been written properly that the average web developer is going to encounter.
I'm going to focus on PHP because that's what I know best, it has an enormous market share, and I'm not a fan of blowing smoke (which I would be if I tried to speak to, e.g. Perl development).
PGP? OTR? Axolotl? These aren't part of your standard dev environment.
Mcrypt? Hasn't been touched since 2007. There is at least a patch for a bug in the 64-bit implementation of CAST-256 that has been sitting, unmerged, for six years.
Libsodium is the best we have, and you need PECL access to install it.
OpenSSL is labyrinthine. http://stackoverflow.com/a/29331937/2224584
It's still messy, despite the past year of attention from the security industry, but it's getting better.
This library offloads its underlying cryptography to OpenSSL because it's the best we got:
Maybe Python, Ruby, and Node developers have better options available to them. But 83% of the Internet runs PHP, and having a properly written cryptography package just isn't a trivial bar to leap over.
OpenSSL is still too low-level for the generic developer. You want things like HTTPS, using which is still too complicated for most people (hello certificate management), but you can get it right more easily by following an online tutorial. Basically anything that is more complicated than "open socket to destination, here's my certificate" is doomed to be misused by non-experts. Oh, and it better have strong default settings so it won't pick RC4 as the cipher or something.
I seem to be, but I'm not.
> I think there's a more fundamental cryptographic principle. Don't implement cryptography, unless you are an absolute top expert. Even then, think twice, and get another absolute top cryptography expert to check your working. Use a pre-existing cryptography package that has been written properly instead.
This is written towards ALL developers. In the scope of web developers that I reside, this is my rebuttal.
Some crypto experts are quick to demand people use something that isn't necessarily available to the developer. Then the language elitists tell them to switch to their preferred language instead of improving what the developers already know. End result? People just use insecure or badly designed crypto libraries.
We need better libraries. Someone has to write them. Or just make libsodium the standard. I'd be okay with that.
Currently 82%, last time I checked it was 83%.
Seriously, just assume they mean technology stack or something.
Recursive DNS lookup DoS attacks? You are doing work for someone you don't know (random UDP packets).
The recent NTP DDoS amplification issues? Doing work for someone you don't know (again, unauthenticated UDP packets triggering craploads of work).
Padding oracle attacks in crypto? Doing work for someone you don't know, and leaking data based on when something fails.
IP source routing? Doing work for an untrusted party!
The moral of this story is verify what you have before you work on it. It all comes down to validating user input and user source.
Celebritism doesn't really work in science; it only prevents progress.
2) Saying that implementing your own cryptography usually implies a production environment. I think it's generally assumed that nobody cares what people do with their own time/personal projects.
3) Trite sound bites don't work in science either, and expert -/-> celebrity.
I'm sorry, but this is essentially never the case. This is no different than in other fields, for instance math or physics, where complete novices come in every day believing they've had a completely novel idea that will revolutionize the field. 999,999 times out of a million they haven't, and in the one remaining case they've come up with a solution in search of a problem.
"Oh, you've come up with a new cipher? Congratulations. Assuming it is secure, why should we use it? Is it faster than existing ones? Simpler and more likely to be implemented correctly? Resistant to timing attacks? Resistant to CPU power analysis? Resistant to differential cryptanalysis? Suitable for low-CPU and low-memory embedded devices? Oh, none of these things? Gee, how interesting."
I'm reminded of http://www.scottaaronson.com/blog/?p=304
Crypto is an environment where a single mistake can get people killed. The stakes are very high. We're not talking about a slight rendering error in CSS here. This is not an appropriate place to be universally warm, fuzzy, encouraging, and forgiving of mistakes. This is incredibly serious stuff that must be treated appropriately seriously - and everyone attempting to touch the field needs to understand that.
In addition, stouset is right. The frequency with which apparently novel ideas are actually novel is much, much, much smaller than a naive guess would lead one to expect. I've watched people attempt to introduce ideas that strike them as novel, only to discover that they're just creating exploitable weaknesses, right here on Hacker News.
'Academic and useless' research huh? Go read DJB's website some day, it is far from inapplicable. All the research that publicly disclosed HUGE holes in SSL? I think it's incredibly important.
As to good security being of little value, maybe ask Sony and Adobe (and even NSA) if they wished they had better security.
Let KC be a well-known and implemented cipher and CC a custom cipher written by you. Instead of communicating like this:
A -> KC-encrypt -> transport -> KC-decrypt -> B
You can use:
A -> KC-encrypt -> CC-encrypt -> transport -> CC-decrypt -> KC-decrypt -> B
The advantage is that it breaks automated cryptanalysis in case KC is broken.
A better idea than the one you stated is: don't rely solely on custom cryptography.
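The cascade above can be sketched in a few lines of Python. Everything here is illustrative: `kc_xor` is a stand-in for KC (a real deployment would use a vetted AEAD such as AES-GCM from an audited library), and the nibble-swap/XOR transform is a hypothetical keyless CC that shares no key material with KC.

```python
import hashlib

def kc_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Stand-in for KC: SHA-256 in counter mode as a keystream.

    Illustrative only -- in practice KC would be a vetted cipher
    from an audited library, not a homemade keystream.
    """
    stream = bytearray()
    for counter in range(-(-len(data) // 32)):  # ceil(len/32) blocks
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

def cc_encrypt(data: bytes) -> bytes:
    """Hypothetical custom cipher CC: swap nibbles, then XOR a constant.

    CC is a keyless bijection on bytes and knows nothing about KC.
    """
    return bytes((((b << 4) | (b >> 4)) & 0xFF) ^ 0x5A for b in data)

def cc_decrypt(data: bytes) -> bytes:
    # Inverse: undo the XOR first, then swap the nibbles back.
    return bytes((((b ^ 0x5A) << 4) | ((b ^ 0x5A) >> 4)) & 0xFF for b in data)

# A -> KC-encrypt -> CC-encrypt -> transport -> CC-decrypt -> KC-decrypt -> B
key, nonce = b"\x01" * 32, b"\x02" * 12
message = b"attack at dawn"
wire = cc_encrypt(kc_xor(key, nonce, message))
received = kc_xor(key, nonce, cc_decrypt(wire))
assert received == message
```

Note that the receiving side must undo the layers in exactly the reverse order; swapping them silently produces garbage rather than an error.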
This mythical protocol just encrypts a blob and sends it over the wire. But real protocols involving cryptography are significantly more complicated, because they often need to authenticate other parties, cryptographically bind all the messages in a session, preserve forward secrecy, preserve anonymity of one of the communicating parties, etc.
Now the problem isn't that you are simply applying your own cipher to the output of a strong cipher. It's that you've now got to implement a damn protocol, because nothing in the world supports your cipher. And that implementation is inevitably going to have more exploitable flaws than a well-maintained open-source project that doesn't implement your bogocipher.
Cryptography is hard, and it's not just the primitives that are ripe for gotchas. Combining primitives, implementing primitives, designing protocols, implementing protocols, and generally anything involving touching something anywhere in the entire stack is fraught with danger. As a rule, the more crypto-related lines of code you write, the more likely it is you have introduced weaknesses rather than added strength.
I never suggested my proposal for public protocols for communication between two random parties. I first of all wanted to show that the root of this thread is wrong.
I'd suggest it maybe for highly confidential communication that warrants the trouble.
The root of this thread is extremely sound advice. Don't touch your own crypto unless you're an expert. If you're at the point where your security requirements are so sensitive that you can't possibly risk a break in an underlying well-known cipher, you should hire a real cryptographer and not dick around with inventing your own laughably-broken ciphers.
And once you have that cryptographer under your employ, you can feel free to break the rule of not rolling your own crypto. Although that cryptographer is infinitely more likely to simply compose well-known strong ciphers over deploying their own unpublished ones.
> I think there's a more fundamental cryptographic principle. Don't implement cryptography, unless you are an absolute top expert.
is sound advice.
I disagree with this advice strongly. Tinkering around with cryptography can teach you a lot. Especially if you are not a top expert.
Furthermore, I described a relatively simple protocol that monotonically increases the security of a communication by applying a custom cipher. I didn't add the restriction that CC must not know anything about KC because I thought that would be clear.
Interestingly, you get security through obscurity on top of the previous security from my scheme even when CC is trivial, like swapping the nibbles in each byte and then XORing a constant. This makes untargeted mass surveillance, in case of a broken KC, a lot harder. Depending on the CC, it's relatively easy to make it totally impractical.
I think my scheme is interesting because it's unintuitive that it adds any security.
You described a relatively simple protocol that, at worst, does not decrease security in a theoretical sense with at least one critically-important and unmentioned caveat (don't use the same or related keys for both ciphers). In a practical sense, your simple protocol is likely a security disaster.
In the absolute best case, it really only defends against a break in the underlying cipher that is so thoroughly devastating that ciphertexts can be decrypted essentially for free. Even DES hasn't been broken this badly, and it's been considered broken for decades. The reason this is the case is that such a simple transform is only useful against an entity that is automatically decrypting ciphertexts on the wire en masse, and who isn't looking for your data specifically. Against a targeted adversary who can "merely" break AES with substantial effort, the additional cost such a transform would impose is completely negligible. In your example of a transform on top of a TLS connection, it would be completely obvious through packet analysis that the protocol was TLS, and your transform would be trivially understood and reversed after watching a few handshakes.
Careful. Encryption is decryption, remember. Do any of the rounds of CC undo any of the rounds of KC? Can you verify that bits that were correlated in PT and uncorrelated in KC are still uncorrelated in KC + CC? Does CC change the entropy content of the resulting message? (If it decreases it you have a problem, and most popular ciphers end up with damn near 8 bits per byte in the ciphertext.)
It's important to be paranoid about implementing your own crypto, true. But it's not good to take this to dogmatic extremes, ie "never trust any code you write that has crypto in the name or you break everything".
I would never write and deploy a custom cipher. But if I did, and my custom cipher CC, when applied to ciphertext encrypted by AES, somehow leaked any information about the plaintext, then I have accidentally made a cryptanalysis breakthrough that has thwarted hundreds of professional crypto people.
This is unlikely.
But more importantly, it's the implementation of this construct that's likely to introduce weaknesses. As I posted in my direct reply to him, the problem is that no real world protocol is so simple, and he will now have to implement this. See OpenSSL for what can go wrong when implementing ciphers; and that's a highly-used, generally stable project that's had the benefit of multiple eyeballs for years.
First, ciphertext is ciphertext. They'd probably be able to determine it's a block cipher and what the block size is, but that's about it. They're not going to be able to tell AES ciphertext from 3DES ciphertext if they're both set to use the same block size. There's no point "obscuring" the fact that you're using AES.
Second, if you're assuming the NSA seriously has the capability to break AES within the next 10 years, then obscuring it with a homebrew cipher is pointless, because they've probably already used their AES decryption capability to find many other ways into your system (through a hosting provider, ISP, domain registrar, DNS provider, admin accounts of coworkers...).
Even quantum algorithms are only able to halve the bits of the key search space (so 2^256 will become 2^128, which is still mostly infeasible even against a government adversary).
I don't think your suggestion would decrease security, if implemented perfectly (which is unlikely), even if the cipher is insecure and vulnerable to cryptanalysis. But I don't think it would increase security either. And since it wouldn't increase it, yet may decrease it in practice due to an implementation flaw which compromises both ciphers, why bother?
If you can break a set of ciphers, it's trivial to determine which one was used for a connection, if one was used: you just try each.
The whole point is to hide which is the real cipher to make it more costly to attack you. Making it more costly for the attacker is what we want.
> Second, (...)
This is no argument for or against the scheme I presented.
> Even quantum algorithms are only able to halve the bits of the key search space (so 2^256 will become 2^128, which is still mostly infeasible even against a government adversary).
Breaking a cipher means not having to search the entire key space. That's the definition of a broken cipher.
> I don't think your suggestion would decrease security, if implemented perfectly (which is unlikely), even if the cipher is insecure and vulnerable to cryptanalysis. But I don't think it would increase security either. And since it wouldn't increase it, yet may decrease it in practice due to an implementation flaw which compromises both ciphers, why bother?
We disagree on the practicality of implementing it without adding another vulnerability to your system.
You can tell mode somewhat reliably (I've hit 75% confidence or so) by the entropy content of the ciphertext, and I've never done a Markov chain longer than a native word so I'd assume that gets better the longer your chain is. I would also assume people much smarter than me have compiled heuristics that are more useful than just determining the mode.
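The entropy heuristic mentioned here is easy to reproduce. A rough sketch (the thresholds are illustrative, not tuned): Shannon entropy over byte frequencies separates uniform-looking ciphertext, near 8 bits per byte, from structured plaintext or poorly diffused output.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 for uniformly random bytes)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random_like = os.urandom(4096)               # proxy for good ciphertext
structured = b"the quick brown fox " * 200   # plaintext-like data

assert byte_entropy(random_like) > 7.8   # near the 8.0 ceiling
assert byte_entropy(structured) < 5.0    # far below it
```

A cipher or transform whose output falls measurably short of 8 bits per byte hands the analyst exactly this kind of fingerprint.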
You're looking at it the wrong way. You need to demonstrate for yourself that nothing in your cipher undoes the last round of the underlying cipher.
I hate people.
One of my favorite cryptographic attacks ever was of this form, known as the CRIME attack. It was a "partial plaintext" attack, where the "CC" in question was plain old gzip. Basically the attackers controlled a small part of the plaintext (for example, a field in an HTTPS request they could send with CSRF) and they were able to use known properties of the CC to get some information to leak into the message length -- namely, if the message fragment they controlled appeared elsewhere in the request, the gzip encoding would be shorter, and hence the encrypted request would be shorter. This is something that happens even with a "perfect" KC that no one could break.
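The length side channel is easy to demonstrate with plain zlib. The request layout and secret below are hypothetical, and the "perfect KC" step is omitted because a length-preserving cipher wouldn't change the lengths anyway, which is exactly the point of CRIME.

```python
import zlib

SECRET = b"sessionid=7f3a9c1b"  # hypothetical cookie the attacker wants

def wire_length(attacker_controlled: bytes) -> int:
    """Length of the compress-then-encrypt output for one request.

    The encryption layer is omitted: a length-preserving cipher
    leaks the compressed length unchanged.
    """
    request = (b"GET /?q=" + attacker_controlled +
               b" HTTP/1.1\r\nCookie: " + SECRET + b"\r\n")
    return len(zlib.compress(request))

# A correct guess duplicates the secret, so DEFLATE backreferences the
# whole string and the message shrinks; a wrong guess compresses worse.
assert wire_length(b"sessionid=7f3a9c1b") < wire_length(b"sessionid=8c4e2d9f")
```

By varying the guess one character at a time and watching the lengths, the attacker recovers the secret without ever touching the cipher itself.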
This is one of the defining characteristics of cryptography to me -- there are no perfect ciphers, there are only good systems. Littering extra cryptography around nearly always decreases security. The most secure system is the simplest system that works.
This is different than the argument you're replying to which is plaintext -> KC -> CC.
Yes, it adds security through obscurity without losing security through well-known recommended ciphers.
So yes, there is a way of breaking this construct when used the way you said.
A simple example would be a CC that hex-encodes the plaintext before applying some transformation on the data (before you laugh, this exists in enterprise systems today). This means CC would effectively double the underlying data (0xA1 -> 0x4131) and substantially degrade the security of a block-based cipher (a 32-bit block effectively carries 16 bits).
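The expansion is trivial to verify. A minimal sketch of the doubling (of the encoding itself, not of any particular enterprise system):

```python
block = bytes([0xA1, 0x00, 0xFF, 0x42])
encoded = block.hex().upper().encode("ascii")  # 0xA1 -> b"A1" == 0x41 0x31

assert encoded[:2] == b"A1"
assert len(encoded) == 2 * len(block)             # output is twice the input
assert set(encoded) <= set(b"0123456789ABCDEF")   # only 16 possible byte values
```

Every output byte is drawn from a 16-symbol alphabet, so each cipher block now carries half as much real data and half of every byte is predictable structure, a gift to the cryptanalyst.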
Here's a link to a practical case of unsafe composition (in hashing): http://blog.ircmaxell.com/2015/03/security-issue-combining-b...
A bit more detail on randomly composing encryption algorithms: http://blog.cryptographyengineering.com/2012/02/multiple-enc...
edit: It would be better to compose CC(KC(P)) so CC can't leak any information about P or degrade KC. Any reluctance to show the world the output of CC should suggest the low-practical-value of CC.
(I appreciate the link to Matthew Green's post; I don't think his analysis of the effect of composition is as pessimistic as yours.)
CC is just a bijective function on (blocksize of KC) bits.
The author of the blog mentioned above ignores the fact that cascading ciphers as I described breaks automated cryptanalysis, which is a necessity for mass surveillance in a world with widespread cryptography.
I guess there's an economic argument to be made that millions or billions of pairs of communicating parties could develop their own individual means of at least obfuscating their communications so that nobody could expect to find searchable plaintext after decryption. But the economic effort that the pairs of parties invested in creating their obfuscations may have been wasted because if the same level of time or effort had instead been spent to improve mainstream cryptography, it might have yielded major qualitative security improvements for the "official" stuff.
I think your argument that the effort put in custom ciphers maybe should be put in mainstream ciphers instead is interesting.
Tinkering around with custom ciphers can teach you a lot. Maybe you don't have the knowledge, or any idea how to attack or improve mainstream ciphers.
However, if you can make a difference for mainstream ciphers, of course that's what we need.
If this hypothesis is true, the work that Daemen and Rijmen did to invent AES-256 and the work that a particular implementer did to implement it will produce an almost inconceivably vast security benefit against this particular threat.
The reason this is important is the kind of disproportionality between the effort of Daemen and Rijmen and the AES reviewers and implementers, and the magnitude of the resulting security benefit. They might have spent a total of 500 person-years on making AES-256 work well, and received a security improvement of 340 trillion trillion trillion trillion-fold relative to whatever the security of AES-128 is. Whereas a homegrown cipher that isn't very mathematically sound might be developed with 1 person-year of effort and end up making an attacker do, let's say, 100 trillion operations. In my hypothesis, Daemen and Rijmen and other folks then got somewhere between a trillion trillion trillion and a trillion trillion trillion trillion trillion trillion trillion times better security return on their effort.
Now you might reasonably point out that if you use a standard, known cipher, the attacker's costs for a direct brute force attack are purely computational and don't involve research and development, or attempting to suborn or hack your correspondents or colleagues to discover the principles of operation or your system. Whereas if you do have a homegrown mechanism in play, an attacker incurs these other kinds of novel and sort of one-off costs, notably including making other human beings think about stuff more.
The point that I've taken from a lot of the security experts who've talked about this, though, is that the scaling benefits are the important factor here, again especially if you want to make a system that many people could use for a long time. When the limiting factor is computer time, which is really only likely to be true for systems created, refined, and reviewed by experts, you can sometimes get the really absurd security ratios that are hard to even think about, and require your adversary to spend more money than exists in the world, build more computers than can be made from all the silicon on Earth, consume more energy than the Sun outputs, etc., etc. When the limiting factor is human reasoning, you might say "but that would require human cryptographers to think about my system for 1 year!". But if that's so, that may actually happen, and in any case you can't easily get the order of magnitude of the costs and resources required up to "inhuman" levels.
The point I'd take from your idea is that it could be valuable to try to make adversaries incur diverse costs in attacking your system, especially if you don't know what capabilities and resources your adversaries do and don't have. This is kind of akin to what's happened with key derivation, where people have proposed KDFs that are very CPU-intensive and also KDFs that are very memory-intensive, and if there are other sorts of resources that you could make an attacker burn, there are probably people trying to invent KDFs that burn those, too. It's not clear to me that there's a genuinely scalable way to require human analytical effort as one of those resources, but if there is, that could be a useful property for communications systems to have for defense in depth. But making up a new cipher by hand for every system is probably not going to provide that property very reliably, or be a very effective use of resources, again when other uses of resources can improve security to a staggering extent.
Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break. Any such at-scale effort would already include attacks against typical bad-custom-crypto (because they're extremely easy and common), in addition to the AES attacks. Cascading ciphers, particularly weak ones, will not stop the NSA.
edit: addressed information leak if CC & KC share keys/resources
> Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break.
Yes, but it would involve highly paid cryptanalysts. The reason for my first comment is first of all to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us.
I don't agree. The construct may not degrade security under several caveats. Most implementations are extremely likely to share resources, which will introduce weaknesses. I'd wager those weaknesses would degrade security much more than the composition would enhance it, but it'd depend on the exact situation.
> Yes, but it would involve highly paid cryptanalysts. My proposal is first of all to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us. That's what cryptography is all about: making their life harder while not so much for us.
My exact point was that those cryptographers would already need to develop generic attacks for all the non-standard (read: non-secure) cryptosystems out there. Composing a homegrown cipher with a peer-reviewed secure cipher will not make their lives harder. It will make maintaining and improving the system harder. The net result is overwhelmingly likely to be detrimental.
Then the computer in sub-basement 19 goes 'ding!' and sends an automated SWAT team to your house.
RC4 is a common stream cipher which can definitely be detected. In contrast, AES seems much harder.
It is called a distinguishing attack. See:
When you pass this through a bad cipher it may create a characteristic fingerprint.
To make it clear: for me a cipher is a bijective function on arrays of a predefined length.
This sounds scary; please tell me more. In particular, what happens in the pathological case where the input consists of 32/64/128/256-bit blocks, and in each block all bits are zero except the last one, which may be one or zero?
TL;DR don't base64 without a reason because it gives the attacker more bytes with fewer values to work with.
Some potential problems with applying a custom transform to the cyphertext:
- The custom transform might have effects on cache that create opportunities for timing attacks
- The custom transform might have a buffer-overflow (just like any other excess code)
- The custom transform might support features exploitable in DOS attacks, like run-length encoding
- The custom transform might be slow, costing time and money
- Sometimes applying two transforms is equivalent to encrypting with a different key (e.g. DES has this property). Might create meet-in-the-middle attacks.
- If the custom transform is given the same key as the strong crypto, as a lazy programmer might do, it could leak the key
- Switching the ordering of the transforms is a deadly mistake, but easy to overlook
- If the strong transform starts with an HMAC, as it should, that can be used to break the custom transform's key anyways (see the post we're commenting on)
Mostly those are small things. But the security benefit you get is also small.
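The meet-in-the-middle item in that list deserves a concrete sketch. With a hypothetical toy cipher using one-byte keys (chosen only so the search runs instantly), cascading two encryptions costs an attacker roughly 2·2^8 cipher evaluations instead of the naive 2^16:

```python
def toy_encrypt(key: int, block: bytes) -> bytes:
    """Hypothetical toy cipher: rotate each byte left by (key mod 8), XOR the key."""
    r = key & 7
    return bytes(((((b << r) | (b >> (8 - r))) & 0xFF) ^ key) for b in block)

def toy_decrypt(key: int, block: bytes) -> bytes:
    # Inverse: undo the XOR, then rotate right by the same amount.
    r = key & 7
    return bytes(((((b ^ key) >> r) | ((b ^ key) << (8 - r))) & 0xFF) for b in block)

plaintext = b"known plaintext"
k1_true, k2_true = 0x3C, 0xA7
ciphertext = toy_encrypt(k2_true, toy_encrypt(k1_true, plaintext))

# Attack: tabulate the "middle" values from one side (256 encryptions),
# then meet them from the other side (256 decryptions).
middle = {}
for k1 in range(256):
    middle.setdefault(toy_encrypt(k1, plaintext), []).append(k1)

candidates = [(k1, k2)
              for k2 in range(256)
              for k1 in middle.get(toy_decrypt(k2, ciphertext), [])]
assert (k1_true, k2_true) in candidates
```

The same trade scales up: double DES falls to roughly 2^57 work instead of 2^112 for exactly this reason, which is why cascading ciphers does not simply add key lengths.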
> Some potential problems with applying a custom transform to the cyphertext:
> - The custom transform might have effects on cache that create opportunities for timing attacks
> - The custom transform might have a buffer-overflow (just like any other excess code)
These are implementation details that are irrelevant to the point of my comment. You could apply the custom transform on another computer.
> - The custom transform might support features exploitable in DOS attacks, like run-length encoding
The custom transform ought to be a bijective function on (blocksize of Strong) bits.
> - The custom transform might be slow, costing time and money
It was never meant as a suggestion for wide-spread usage.
> - Sometimes applying two transforms is equivalent to encrypting with a different key (e.g. DES has this property). Might create meet-in-the-middle attacks.
> - If the custom transform is given the same key as the strong crypto, as a lazy programmer might do, it could leak the key
The custom transform shouldn't know at all about anything of the strong cipher. Another implementation detail.
> - Switching the ordering of the transforms is a deadly mistake, but easy to glance over
Another implementation detail.
> - If the strong transform starts with an HMAC, as it should, that can be used to break the custom transform's key anyways (see the post we're commenting on)
I just wanted to show that the common idea of "cryptography is black magic, you should never touch it, you can only do bad things" is wrong.
My scheme adds security through obscurity which may be worth the trouble.
Per the other thread, this is overwhelmingly likely to decrease overall security without any practical benefit.
> My scheme adds security through obscurity which may be worth the trouble.
It adds potential side channels and likely no benefit over KC alone. You've sidelined many potential problems by focusing on "if the composition is properly implemented," but that's a huge problem. The likelihood of properly implementing the composition is vanishingly small. It is very likely to be improperly implemented and provide less security than KC alone (and it costs more!)
Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.
Does the Pi include any remotely exploitable flaws? See before.
If we assume a perfect/non-exploitable CC/Pi, we may not degrade the security of KC. That key word, "may", is the problem professional cryptographers spend years analyzing, though. If we spent a month thinking about this, we might identify other requirements needed to avoid weakening KC. This is not recommended.
Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.
We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?
I specifically said the RPi does nothing else but CC.
> Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.
> Does the Pi include any remotely exploitable flaws? See before.
The interface from the PC to the RPi should of course have the same security as any internet-facing interface. Thus the remote access to the RPi wouldn't pose a risk for using the protocol besides the obvious risks that you always get in such a scenario.
> Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.
The system I described is not trivially defeated because defeating it implies defeating KC.
Did you read my comments?
> We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?
To break my cipher an attacker would need to solve _both_ problems.
My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical. Isn't that an interesting property for a protocol using a custom cipher?
That risk didn't exist without the custom cipher construct (KC-system is still independently vulnerable as it was without the CC-system). This means the construct has increased the attack surface, potentially critically.
> The system I described is not trivially defeated because defeating it implies defeating KC.
Your system introduces potential new vectors to defeat KC. If KC were not broken, this construct likely weakens KC. If KC were broken, this construct may provide some minor protection. It is extremely unlikely, though, that it provides sufficient additional protection to actually protect the data from an adversary capable of breaking KC. Given that KC is a peer-reviewed secure ciphersuite and CC is not, the emphasis should be on keeping KC secure - not weakening it to introduce an untested (likely insecure) ciphersuite in a custom composition. This is doubly the case given the significant and on-going real-world costs of implementing and maintaining this custom solution.
> My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical.
An adversary capable of automating AES decryption would already have automated weak-cryptosystem decryption. This adds nothing except cost, complexity, and faux security.
To a targeted attacker, TLS is trivially identifiable through packet analysis. After a few handshakes, your transform (as mentioned elsewhere, reverse some nybbles and XOR against a constant) will be fully understood and broken with likely less effort than it cost you to build in the first place.
I chose a custom cipher for 2 reasons:
1. It gives some security even when assuming that all publicly known ciphers are broken.
2. It works specifically by using a custom cipher, which is normally considered bad, and that makes the results unintuitive.
I think that makes it interesting.
Everyone cooks just with water.
The scheme I talked about is not entirely homebrew. It consists of a mainstream cipher KC and a custom cipher CC, uniting the best of both worlds: the robustness of mainstream crypto with the obscurity of homebrew crypto.
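To make the composition concrete, here's a minimal sketch of the KC+CC cascade idea. This is illustrative only: KC is stubbed with an HMAC-SHA256 counter keystream standing in for a real mainstream cipher like AES-CTR, CC is an arbitrary involution, and all names are mine:

```python
# Sketch of a KC+CC cascade: mainstream cipher KC wrapped in custom CC.
# KC here is a stand-in keystream, NOT a recommendation of a real design.
import hmac
import hashlib

def kc_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # Counter-mode keystream from HMAC-SHA256; XOR makes it self-inverse.
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        stream += block
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

def cc_transform(data: bytes) -> bytes:
    # The "custom cipher" CC: an involution (bitwise NOT of each byte).
    return bytes(b ^ 0xFF for b in data)

def cascade_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    return cc_transform(kc_encrypt(key, nonce, msg))

def cascade_decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return kc_encrypt(key, nonce, cc_transform(ct))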
> Everyone cooks just with water.
The problem with applying this definition of "homegrown" is that it willfully ignores any distinction implied by the term and thus renders it semantically meaningless. This is a form of straw man.
Regardless, even if we assume that all crypto, at the time of writing, is equally likely to be safe, I posit that the security and implementation-cost benefits achieved by leveraging published techniques far outweigh the benefit of having an obscure fingerprint. This is because previously published methods have the advantage of selection and iterative hardening based on peer review.
Furthermore, I posit that even if you wrap your data in a matryoshka doll of encryption, each of these layers will be more secure when implemented using proven techniques.
For the same reasons I'd also argue that even if you were to develop your own cipher you would benefit more by publishing it than by keeping it a secret.
Another way to think about it is that "an attacker reading the documentation" should not be a failure mode of well-implemented crypto.
Speculating even deeper on the subject, it occurs to me that in the face of a global adversary (whose automated cryptanalysis your proposal aims to thwart), displaying a unique fingerprint may actually be detrimental to the security of your data, as it may flag it specifically for deeper inspection and manual analysis.
The CC decode stage will be subjected to every possible edge case to see if it will do something dumb, like the oracle attacks mentioned in the article. And it could also leak information about the KC decode via timing attacks.
If this were for offline file encryption, where you do the steps separately, it might be okay. But not for online encryption, where chosen-plaintext attacks are possible.
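For what it's worth, the standard way to keep a decode stage from ever becoming an oracle is encrypt-then-MAC with a constant-time tag check: verify a MAC over the ciphertext before any decoding touches it. A rough sketch, assuming an HMAC-SHA256 tag appended to the ciphertext (function names are mine):

```python
import hmac
import hashlib

def verify_then_decode(mac_key: bytes, ct_with_tag: bytes):
    # Encrypt-then-MAC: check the tag in constant time BEFORE any
    # decode stage (CC or KC) ever sees attacker-controlled input.
    ct, tag = ct_with_tag[:-32], ct_with_tag[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # reject without decoding; nothing to probe
    return ct  # only now is it safe to hand to the decode stages
```

With this in front, padding-oracle-style probing of the CC decode never gets off the ground, because forged ciphertexts are rejected before decoding; `hmac.compare_digest` avoids leaking the tag comparison through timing.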
CC is a bijective function on (blocksize of KC) bits.
CC knows nothing at all about KC.
The implementation can run on a separate computer without creating a security problem, while you still get the obscurity.
Read this thread.
I'm not saying to trust the Windows implementation of AES; you can put it through some other cipher with a strong implementation as well, but don't just make up your own.
I don't think I was perfectly clear in my original comment. I attempted to expand on my position in a follow-up comment: https://news.ycombinator.com/edit?id=9376131
The NSA is elite. Nobody has any business trusting you to outsmart them unless you are also elite.
This is like saying that because nobody will bet on you to defeat Garry Kasparov, there is a culture of elitism in chess. Of course there is. It's a game of skill. Other people are more skilled than you, by a lot, and you're probably going to lose.
A closer analogy would be berating everyone who tries to play chess unless they studied it for years and they are professional and renowned chessographers. :)
Edit: If we're going to stretch the chess analogy further, it's important to note that the best chess players come from a very chess-positive culture, where many people grow up playing the game from a very young age.
Go ahead and play chess for fun, certainly, and you might become one of those grandmasters eventually; it's just that you shouldn't be betting people's privacy, and potentially their safety, on the proposition that you already are one when you start.
So I agree that the argument you're calling out has the potential to be harmful in the long run, but I do not agree that it has harmed anyone we know of, and would add that, to the extent it's warded people off things like Cryptocat, it has helped end users.
I could tell you how I have gotten it wrong in the past, but there is no guarantee that I won't get it wrong again in a different way. So the audit idea has its merits, but you really don't want to rely on Linus's Law of eyeballs. That means knowledgeable auditors who charge actual money for their time.
1) In order to keep the backup software's deduplication functionality (and to minimize the server trusting the client), I need the IV (initialization vector) to be a value computed from the file's contents, so that multiple copies of the same file on the clients will encrypt to the same contents going to the backend server. I know that _this will leak some data_ (e.g., you may not want an attacker to know that a given set of files have the same contents). So I plan on making this optional: the user gets either dedup or better encryption.
2) To do number 1 above, I was planning on taking a hash of the file, encrypting that with the user's key, then using that as a predictable IV. You will still have a unique IV per unique file, but I don't know enough to see if this can leak any other data. It looks like RFC 5297 describes an approach similar to this, but I think it is for a different use case.
3) I need the backup server to know which version of an encryption key the client used. That is, if the client changes keys between backups, I need the server to instruct the client to do a full backup, not incremental. So I can either have the client provide a version number for the key (or if using a keyfile, use the datestamp of the file as a version string), or I can encrypt a given string using the key, and use a hash of the encrypted string as the key version number. (Note, in no case will I ever be storing a hash of the plaintext of the client's files on the backup server, as that too can leak information)
My apologies if the above makes any experts here cringe, but as I mentioned, my constraint is to have same-content files encrypt to the same target contents (for dedup purposes), although I will give the user the option to turn that off and use a random IV for better security (giving up dedup).
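To make points 1 through 3 concrete, here's a rough sketch of what I'm planning, assuming HMAC-SHA256 as the keyed hash; all names are placeholders and I know this isn't vetted crypto, just the shape of the idea:

```python
import hashlib
import hmac
import os

def derive_iv(user_key: bytes, file_contents: bytes,
              deterministic: bool = True) -> bytes:
    # Deterministic mode: IV = keyed hash of the file hash, truncated to
    # 16 bytes. Same key + same file => same IV => dedup-friendly output.
    # This intentionally leaks file equality, as noted above.
    if deterministic:
        file_hash = hashlib.sha256(file_contents).digest()
        return hmac.new(user_key, file_hash, hashlib.sha256).digest()[:16]
    # Otherwise a random IV: stronger, but defeats dedup.
    return os.urandom(16)

def key_version_tag(user_key: bytes) -> str:
    # Point 3: a key "version" the server can compare without learning
    # the key -- a keyed hash of a fixed public string. If the tag
    # changes between backups, the server requests a full backup.
    return hmac.new(user_key, b"key-version-check-v1",
                    hashlib.sha256).hexdigest()[:16]
```

The keyed step in `derive_iv` is there so that only holders of the user's key can confirm a guessed file's IV; an outside observer sees only that two ciphertexts match.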
Maybe you could suggest what might be "more helpful," but I personally could not think of anything more useful than telling people not to do something that puts their customers' and their own critical information at risk.
Good crypto algorithms don't get stale or anything. The point is they're fundamentally difficult.
Perhaps you could give a single decent reason for rolling your own crypto algorithm vs. something as easy as 4096-bit RSA.
For example, I think most would say that I'm not "rolling my own crypto" if I'm implementing some piece of functionality in my application leveraging the use of some API with "mac", "encrypt", and "decrypt" functions. There are still ways I can screw up using these functions, but I'm arguably not "rolling my own" crypto. So in this situation, the mantra is confusing at best.
I would not characterize implementing RSA safely as an "easy" task.
The problem is that in this community you are witch-hunted if you talk about crypto and you are not a famous cryptographer.
People start with the assumption that what you are saying is wrong (this is a good thing). But then many don't have the competence to judge what you are saying (no problem; we are all ignorant in many subjects). And then they react as if you were really wrong (this is a huge problem).
I've been dealing with crypto on a near-daily basis for the last 15 years. I'm well within the top percentile of understanding of the subject, from a theoretical and practical standpoint, having worked to break cryptographic protocols for most of my life. I wouldn't trust myself to design a crypto protocol for grocery lists, let alone anything important.
This isn't false modesty or elitism or ignorance. It's recognizing that even if every smart person I know looks at it and says it's fine, it probably isn't. Crypto isn't hard to get right -- that's actually not the argument that anyone will make -- but it's impossible to know if you did get it right or not.
You do not have the knowledge required to build safe cryptographic protocols. Neither does anyone else on this site.
Secure crypto comes from a lot of smart people repeatedly trying to destroy an algorithm and its implementation. It doesn't come from a super smart person building a cryptosystem; that's how we end up with DVD CSS.
You'd probably be right on any other site; but I'm guessing there are a few people who do this for a living somewhere on HN.
And Daeken would still be right.
You skipped over daeken's point, which would stand on HN or on crypto.stackexchange.com, where there are undoubtedly people more qualified to talk about crypto than on HN.
With crypto, it's impossible to know if you got it right. Read that again: it's impossible to know that you got it right.
If you get it really wrong, it'll be obvious to a competent cryptanalyst. But if you get it a tiny bit wrong (and this is computer security, so "a tiny bit wrong" is likely bad enough to render your protocol unusable), then it won't come out at first or second glance, and it may even stand up to some fairly rigorous scrutiny.
Even RSA, which has stood the test of time since 1977, is only considered "safe" insofar as there's no algorithm better than brute force for factoring products of large primes; if someone came out with an algorithm tomorrow that trivialized factoring, RSA would quickly get moved to the "unsafe" list. And that's just the crypto primitive.
There are all manner of other attacks against an implementation of a protocol to consider, too.
https://en.wikipedia.org/wiki/General_number_field_sieve but most of your point still stands. RSA is edging slowly closer to the abyss.
Reality is elitist. You either know how to do things or you don't, and if you don't, you fail. Sometimes you fail anyway. That's elitism.