C# had an option that I finally got to work, but the documentation specifically said not to use it; the other choices simply didn't work.
I had to encrypt something in PHP and then decrypt it in ColdFusion. Despite using the same algorithm, key, etc., it didn't work. Strangely, the exact same data in the other direction (CF->PHP) worked just fine.
I'd just like the libraries I use to have a simple choice: Encrypt(data,key) and Decrypt(data,key) and be done.
Don’t roll your own crypto. Ok, that makes sense.
Don’t implement existing crypto algorithms. Ok, arguments sound good.
This leaves the options of using existing libraries. Cool. So let’s go find an existing library to deal with validating an SSL certificate chain in Python.
PyCrypto (https://www.dlitz.net/software/pycrypto/api/current/) - hmm, so basically the OpenSSL API, probably a minefield unless you are really up on crypto.
M2Crypto (https://gitlab.com/m2crypto/m2crypto) - umm, so no real docs. The recommendation is to go read a book about network security with OpenSSL. So after reading a network security book, I should be good to wire together some OpenSSL, right?
cryptography (https://cryptography.io/en/latest/) - sweet, we have docs! Hmm, so for humans we have Fernet and the ability to look at X.509 certificates. But nothing about validating. Oh, but there is a stalled PR (https://github.com/pyca/cryptography/pull/2460) from 14 months ago to verify certificate signatures.
Which then leaves the typical programmer to – hack something together?
So it is (practically) 2017, and using one of the most popular programming languages, you can't verify an SSL certificate in a sane way without becoming a journeyman cryptographer.
I like crypto, but using cryptography to integrate with existing protocols/standards sucks. Can we really not have an end-user focused implementation of RFC 5280 with a few knobs to turn?
Perhaps people would stop "rolling their own" crypto if there were half-decent, maintained, documented solutions out there. Maybe some day!
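To make the gap concrete: with a recent pyca/cryptography, about the best you can do by hand is check a single signature. A minimal sketch, assuming RSA-signed PEM certificates in the hypothetical variables cert_pem and issuer_pem -- and note everything it does NOT do (no expiry, no name constraints, no revocation, no path building):

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # Verify only the issuer's signature over the to-be-signed portion;
    # raises InvalidSignature on mismatch. This is not chain validation.
    issuer.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )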
For transport security, HTTPS or SSH. Why go lower level?
Cert based stuff sucks, and cert management sucks. All of these suck harder because of backwards compatibility. Slowly, people are coming round to the fact that you need to just deprecate stuff and get on with your life. But the most sucky thing is OpenSSL.
Sometimes, I think OpenSSL has actually done more damage than good. You accept a shitty library because everybody uses it, so it must be secure, right? Wrong. The API is hostile and the docs are awful. Most things that use it or try to replace it are awful too (M2Crypto, PyCrypto, PyOpenSSL, even urllib3), as if the awfulness of OpenSSL seeps into your thinking - I know it's happened to me once or twice, just from interfacing with OpenSSL. Even cryptography suffers from being based on OpenSSL.
Every time I have to use OpenSSL on a new project, I can't wait to see it die.
Because not everything on this green earth uses HTTPS and SSH.
(1) I want to implement a SAML consumer, and per the spec, I need to verify signatures. Crypto.
(2) I want to use a client-side cookie so that users can remain authenticated in the current browser session. Crypto.
(3) I want to issue a URL with a signed assertion that the owner of the content has granted permission to access it. Crypto.
Asymmetric encryption is not the be-all end-all of cryptography.
Does using certs still suck though? I get that it's a hard problem, but cert revocation and distribution, that's really hard even for OSes.
If you install the dependencies, requests uses the cryptography package, which in turn uses OpenSSL, to validate certificates - by running that code.
Note that OpenSSL does not validate certificates the way Mozilla Firefox does, even if they use the exact same root CA bundle, because some of the validation logic is code in NSS rather than data (e.g. certificate transparency requirements). Ryan Sleevi wrote on Twitter that Red Hat is working on something to let applications using OpenSSL use Firefox's full cert validation logic.
This only underscores/validates wbond's critique though. Apparently the advice on how to figure this stuff out isn't even "RTFM" - docs or an easy-to-use library - it's "go look at how this other library uses the dependency you want to use". And then copy-paste that and hopefully not get anything wrong along the way.
That said, does the proverbial "you" know all of the chain validation features that are NOT implemented by OpenSSL when using code such as in urllib3? What if you actually care more about revocation than the utmost in connection performance? How does one trust a custom CA root? What if you want to verify a cert chain for something other than a TLS connection?
My main point is: here we are with people needing crypto, and (it seems) no one has taken it upon themselves to write good crypto libraries for (some of) the types of tasks that are fairly common. The obvious exception being NaCl. However, that has issues with pragmatic things like distribution due to CPU optimizations, hence libsodium.
You might need to care a whole lot about X509 when dealing with loading your certificates and the complete chain though. But overall if you have a nice python wrapper that shouldn't be too bad. Not pleasant, but not too bad.
We do have several quite pleasant cryptography libraries like libsodium, SJCL, etc, but none of these deal with x509.
So using crypto code will be good once we rid the world of contemporary code and move into the future where all protocols are implemented on top of poly1305, ed25519 and chacha20. Considering how long it took to get rid of SSLv2 and v3, I just think perhaps some effort should go into making good, usable solutions for crypto that needs to be used now.
Thus you have proven his point.
The key the CA uses to sign OCSP must be held safely because it is important, even if that key can't sign new certificates. I think it belongs in an HSM.
I've read that Let's Encrypt spends ~98% of its intermediate signing keys' time on OCSP, not on new certs. If an OCSP response were good for only an hour instead of a week, they would need to perform many more signatures per unit of time, which would require more hardware.
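Back-of-the-envelope, with a purely hypothetical certificate count, the scaling is easy to see:

    # Hypothetical figures, only to show how validity period drives signing load.
    active_certs = 30_000_000            # assume ~30M unexpired certificates
    week, hour = 7 * 24 * 3600, 3600

    print(active_certs / week)           # ~50 OCSP signatures per second
    print(active_certs / hour)           # ~8,300 per second: 168x the signing load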
And anyway, that's the server case. OP just wanted to check the revocation as a client as far as I can tell, which definitely doesn't require any extra hardware.
My use case involves embedding it, but also being able to verify it (Adobe distributes trusted CA certs through a signed PDF). Never would I need to run my own PKIX infrastructure.
(To be fair, it can be done without certificate chains, but it could also be done with chains.)
C/C++, Ruby, JS, etc. are much, much worse in usability.
This is one of my projects using these libraries, a PGP replacement based entirely on the NaCl API:
Simpler than OpenSSL or whatever, but still not ideal.
One of the main frustrations is you don't know if the output is correct until you manage to reproduce some known key/document exactly.
When you're debugging anything else, you can sort of see what's wrong as you come closer to a correct solution. A crypto scheme is only correct (you've put the right things in the right places) when it's exactly right. Until then, you have a jumble of characters.
It took me ages to match ECDSA on java with Python. In the end, it was because of some fine print where I needed to hash one version but not the other before signing.
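That exact gotcha is visible in pyca/cryptography, which supports both conventions; a sketch (the key and message here are mine, not from the parent's project):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.asymmetric.utils import Prehashed

    key = ec.generate_private_key(ec.SECP256R1())
    msg = b"the same bytes on both sides"

    # Convention 1: the library hashes the message for you.
    sig1 = key.sign(msg, ec.ECDSA(hashes.SHA256()))

    # Convention 2: you hash first and sign the digest. Mix the two
    # conventions across languages and verification always fails.
    h = hashes.Hash(hashes.SHA256())
    h.update(msg)
    sig2 = key.sign(h.finalize(), ec.ECDSA(Prehashed(hashes.SHA256())))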
I was doing some bitcoin calculations and needed it.
Otherwise, though, it's an inferior and outmoded signature system. It's got the worst random nonce dependency of any modern crypto primitive: if it's even biased, just a little bit, you can recover keys from groups of signatures (a full repeat instantaneously destroys security with a single pair of signatures). It's weak against simultaneous attacks on multiple signatures, so it minimizes the effort attackers need to spend. Meanwhile, it's inefficient compared to the alternatives, so it tends to maximize the effort you have to spend. It's also hard to implement without side channels.
Modern cryptography engineers would recommend something like EdDSA instead.
People have been misconstruing the bikeshedding and cliquishness of the IETF as enemy action for decades.
John Gilmore has a story about how, during IPSEC standardization, someone was pushing for a CBC chained-IV construction hard, and that he was both confident it could only be enemy action and had sources suggesting it was. This came out right after the Snowden leaks, so everyone took it seriously.
But if you look at it in context, I'm pretty sure the people he was talking about were Perry Metzger and Bill Simpson† (both clearly not NSA plants). They were arguing with Phil Rogaway --- calling one of the most famous and prolific cryptographers a "so-called" cryptographer when he cautioned them not to do dumb things like chaining IVs.
There's a message thread you can look up on the Internet where this happened. Rogaway even got a petition put together from a bunch of other cryptographers, including Rivest. No luck! The IPSEC standards committee ignored them.
A decade or so later (earlier, really, but nobody took Bard's paper seriously) we discovered that chained CBC IVs led to the BEAST attack on TLS.
Enemy action? No. Crypto standards groups don't need enemy action. They are intrinsically evil, and need to be avoided.
† I think this is the case, but I haven't confirmed it with Gilmore; maybe he's talking about a different controversy during IPSEC standardization. But these are the ones where the details fit from what I can tell.
>a full repeat instantaneously destroys security with a single pair of signatures
Roughly--assuming ECDSA parameters (H, K, E, q, G), where H is a hash function and E is the elliptic curve over finite field K with point G of prime order q--suppose two different messages m and m' have been signed with private key x using the same (non-ephemeral) random nonce value k.
Per ECDSA signing, these messages m, m' become signatures (r,s) and (r',s') where:
r = r' = kG (more precisely, the x-coordinate of kG mod q),
s = (H(m) + x*r)/k mod q,
s' = (H(m') + x*r)/k mod q.
Hence (H(m) + x*r)/s = k = (H(m') + x*r)/s' mod q,
so x*r*(s' - s) = s*H(m') - s'*H(m) mod q,
and therefore x = (s*H(m') - s'*H(m)) / (r*(s' - s)) mod q.
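That algebra can be sanity-checked numerically with nothing but modular arithmetic (the curve itself never enters the recovery). A toy sketch with made-up values; needs Python 3.8+ for pow(k, -1, q):

    # Recover x from two ECDSA signatures that share the nonce k.
    q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order
    x, k, r = 0xC0FFEE, 0xDEADBEEF, 0x12345   # toy private key, reused nonce, r value
    h1, h2 = 0xAAAA, 0xBBBB                   # stand-ins for H(m) and H(m')

    kinv = pow(k, -1, q)
    s  = (h1 + x * r) * kinv % q
    s_ = (h2 + x * r) * kinv % q

    recovered = (s * h2 - s_ * h1) * pow(r * (s_ - s), -1, q) % q
    assert recovered == x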
That still ends up being 'rolling your own crypto', and mostly the kind of crypto the article is talking about. The author is describing putting together cryptographic constructs from the sort of low-level pieces provided by your typical runtime crypto library. This is probably the far bigger problem with the admonition - it's vague enough that someone might think that since they're not actually implementing AES themselves, they're not rolling their own crypto. Stack Overflow is full of questions just like your PHP/ColdFusion problem.
The thing that saved me finally turned out to be that Bouncy Castle actually do package some example programs in with the source. So once I cloned the git repo and dug into those, I was finally able to find an example of doing exactly what I needed. But trying to piece it together from the javadocs and the other online documentation? Hell no... I'd have been working on that until the heat death of the universe. :-(
Incidentally: Noise is great, but it's also fine to use TLS, if you're just a little careful.
Blanket "don't use X" statements are much more useful with context and information about why not to use X (and when using X may be required, useful, or a better choice).
Why not be complete and list the caveat, instead of incomplete and possibly lead that one in a thousand down the wrong path?
And lots of articles about best practice leave out very rare edge cases. No advice is always right.
I don't think that's an argument in support of leaving out edge cases, and even if you disagree in general, I think security articles should be more diligent than random best-practices articles.
> Far more than one in a thousand will use it as an excuse to do the wrong thing.
If an article says "always do X" and someone does X and the implementation is subtly wrong or insecure, that's an argument to improve the article (which is what I'm suggesting).
But if an article says "almost always do X, and be aware of this edge case" and someone gets it wrong by not following instructions, then I don't think the article is at fault.
I would personally want my security advice to cover any edge case I could think of, even if only in the footnotes.
Just because the article recommends seeding over using /dev/random early at boot time doesn't mean the original gist shouldn't mention these issues so that readers are aware of them.
Just saying "use /dev/urandom all the time" without mentioning the caveats we're discussing here means someone might read your article and in a (misguided) appeal to authority implement something solely on your recommendation without doing their research properly first.
I would have thought someone writing security articles would want to avoid misleading someone into implementing something that's accidentally insecure. I hope I'm not wrong.
Which backs up my point that using /dev/urandom blindly without knowing about some of the edge cases isn't a good idea, and lists those edge cases and what to do about them (which is what I suggested you do too).
> And another:
Which also lists the edge cases I'm talking about.
Hey look if you just want to say "I'm aware of the edge cases and don't want to put them in my gist for others to see" that's fine with me, but dodging the issue by claiming there are no edge cases (and then listing 3 articles which all mention the edge cases) isn't the right reply I think.
Don't be surprised if someone suggests that a gist listing security best practices list some edge cases that go along with the blind advice too. Feel free to disagree, but at least disagree honestly.
That's the easy part. You can find a library that does exactly that, and only that. It will probably be just one file, not even a library.
But then you'll also need padding, MAC, signatures, key distribution, web of trust, etc.
Example: https://github.com/dimview/speck_cipher
Furthermore, Speck has a variable block size between 32 and 128 bits. If you pick wrong, safe-looking things like CTR+HMAC-256 become unsafe (it's not hard to overflow a 32-bit counter).
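Concretely (my arithmetic, not from the thread): with a 32-bit block, a CTR keystream wraps after 2^32 blocks, and birthday-bound leakage starts far sooner:

    block_bits = 32
    block_bytes = block_bits // 8

    keystream_wrap = 2**block_bits * block_bytes         # 16 GiB: counter repeats, keystream reused
    birthday_bound = 2**(block_bits // 2) * block_bytes  # 256 KiB: small blocks start to leak
                                                         # (cf. Sweet32 for 64-bit blocks)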
The problem is, it's not nearly enough to just do Encrypt() and Decrypt(). Only when you see that do the benefits of a real crypto library like NaCl become apparent.
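For contrast, PyNaCl's SecretBox (assuming the PyNaCl package) really is nearly the Encrypt(data, key)/Decrypt(data, key) the top of the thread asked for, because nonce handling and authentication are built in:

    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    ct = box.encrypt(b"attack at dawn")          # random nonce generated and prepended for you
    assert box.decrypt(ct) == b"attack at dawn"  # MAC verified before decryption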
Additionally, "bigger is better" is not a reasonable way to pick primitives. Defaults matter. Perf matters. From experience and stats, people pick according those two vectors a lot more often than they pick along the "bigger is better" axis. When's the last time you saw an RSA-16384 signature? AES-256 might have twice as big a number as AES-128, but how much faith do you have in that extended key expansion? You use AES-128 a lot more often than you use AES-256, and you definitely use RSA-2048 a lot more often than RSA-16384.
I don't think you can responsibly insist that this was an answer to the question. It is not a secure way to encrypt messages.
Complexity begins when you try doing something more involved than just symmetric block encryption.
I think I sort of see your point, but SPECK is a weird recommendation.
How is the IV stored, for example?
Don't roll your own crypto.
Once you start adding things like CTR mode, MAC, etc. the API is no longer as simple.
Padding issues most likely.
PHP might auto-detect padding, while CF might not.
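The usual cross-language culprit is PKCS#7: one side pads and unpads implicitly, the other doesn't. A sketch of doing it explicitly (Python, purely illustrative -- the same logic ports to PHP or CF):

    def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
        # Always append 1..block_size bytes, each equal to the pad length.
        n = block_size - (len(data) % block_size)
        return data + bytes([n]) * n

    def pkcs7_unpad(data: bytes) -> bytes:
        n = data[-1]
        if not 1 <= n <= len(data) or data[-n:] != bytes([n]) * n:
            raise ValueError("bad padding")
        return data[:-n]

    assert pkcs7_unpad(pkcs7_pad(b"hello")) == b"hello"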
We've all had "don't roll your own crypto" pounded into us, but nothing teaches a lesson as soundly as trying something yourself. I've tried, failed and learned a lesson from far less ambitious endeavors.
The 'don't roll your own crypto' argument is mostly just shorthand for 'defer to the opinion of experts, use ready-made constructs when possible, and if not, then exercise caution when hooking crypto primitives together in unproven ways'.
Crypto code, like other library code, is a question of trust.
Do I trust Daniel Bernstein? Do I trust Joan Daemen, who is half of the AES team and a quarter of the Keccak team? In practical matters, do I trust tptacek? Yes, I trust them, until people more educated in cryptography than me cast enough doubt or prove otherwise -- but you might have a different model of who you trust. But ultimately, you're the one who answers for your systems.
It's also a bit like science, where we come up with a hypothesis (this seems to work...) and then try very hard to disprove it, so our knowledge evolves as we go. It's important to understand that there was a time when it was best practice to use certain primitives that are now considered broken, and this is okay -- assuming we all upgraded our systems since. For cases where that assumption can't hold true, we need to account for that risk.
This is also why it's wise to go with cryptosystems that receive a good amount of peer scrutiny. Your homegrown secret sauce might indeed be super secure, but few will publish papers on how they can obtain collisions on a round-reduced version. It's network effects, it's 'given enough eyes' all over again.
https://news.ycombinator.com/item?id=12400040
https://news.ycombinator.com/item?id=12766941
https://gist.github.com/tqbf/be58d2d39690c3b366ad
If I use e.g. Bernstein's cryptosystem and he's evil, then he and whoever hires him can read my data.
If I use my own or your cryptosystem, then either the cryptosystem or implementation definitely (with a much, much greater confidence than any trust issues) is horribly broken due to some bug, oversight or side channel, and I just haven't noticed yet. So the end result is that everyone can read my data - sticking with someone evil would have been more secure than rolling my own.
In that light, it'd be perhaps better to phrase trusting AES as "not distrusting the union of everyone who's tried to cryptanalyse AES", rather than trusting the Rijndael team specifically. In many cases, especially djb-brand crypto, there's even less need for trust: the way you derive the X25519 curve is extremely well defined (or, as djb puts it, "rigid").
From the article, "Last Step in Crypto":
(1) Get a PhD in cryptography. You need to be an expert yourself if you ever hope to invent a primitive that works.
(2) Publish your shiny new primitive. It may withstand the merciless assaults of cryptanalysts, and be vetted by the community.
(3) Wait. Your primitive needs to stand the test of time.
1. It proposes that ECDH is secure, as a protocol, so long as the curve parameter is carefully chosen. But this just isn't true, or at least, it's true only given a technicality that moots the point. For instance: when accepting a point from a counterparty in ECDH, you have to carefully validate that the point is valid on the curve you expect to be working on, or else your own computation might both be confined to an unexpectedly weak curve and disclose information about the results. This is one of Sean's cryptopals set 8 challenges, and it's one of the better and more surprising exercises that project came up with. (A sketch of that point check follows after this comment.)
2. It suggests that it's reasonable for people designing cryptography to come up with their own curves. But in reality, nobody ever does this! We're increasingly confident about the structure of curves we want to be using (you want curves for which the math rules are consistent and don't require special cases, for which it's easy to convert between equivalent curve structures for signatures and key exchange, with prime structure that makes the curve math fast). Once you find a good curve there (25519 is the best-known example, for its security level), there's practically nothing to be gained from using any other curve.
I get why you walk people through picking a new curve! It's a great exercise; playing with very "small" curves in code is probably the best way to get a feel for how elliptic curves work. But this is the kind of place where people rolling their own stuff can get into a lot of trouble.
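On point 1, the bare minimum before doing any ECDH math with a received point is checking that it actually lies on your curve and in the right subgroup. A toy sketch for a short-Weierstrass curve y^2 = x^3 + ax + b mod p; all the parameter names are hypothetical:

    def on_curve(x, y, p, a, b):
        # Reject points not satisfying the curve equation; an off-curve
        # point can confine the computation to a much weaker twist.
        return (y * y - (x ** 3 + a * x + b)) % p == 0

    # A real implementation also rejects the point at infinity and checks
    # the subgroup order (q * P == infinity) to block small-subgroup attacks.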
Sorry to spoil it, but the conclusion will basically be the same as the article, as in "just don't".
I have since revised my article. It should be less dangerous now.
That doesn't sound right. I guess this should read something like »is provably as hard to break as the underlying block cipher«, in the same way that a hash function built using the Merkle-Damgård construction is provably as hard to break as the compression function used. The article easily gives the impression that Poly1305 is provably secure in the same way as a one-time pad.
The truth of the matter is, Poly1305, while provably secure, is almost as impractical as a one-time pad: it relies on a shared random authentication key. Where is that key supposed to come from? In practice, you'd derive it from a session key (and so rely on a symmetric cipher such as AES or ChaCha), or from a key exchange scheme such as Diffie-Hellman, and you rely on that.
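Which is why, in practice, you almost always meet Poly1305 pre-composed with ChaCha20 as an AEAD, where the one-time authentication key is derived from the cipher for you. Via a recent pyca/cryptography, a sketch:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)               # 96-bit nonce; never reuse under the same key

    aead = ChaCha20Poly1305(key)
    ct = aead.encrypt(nonce, b"hello", b"header")   # Poly1305 key derived internally
    assert aead.decrypt(nonce, ct, b"header") == b"hello"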
I left my phrasing as it was, because it is (I think) closer to the truth, and not dangerous. I do reckon the AEAD scheme I recommend only has a reduction proof, not a safety proof. I could be more precise, but I don't like clutter.
There are a few "layers" of rolling your own crypto. Often, people think they're not rolling their own crypto because they're using AES instead of some hand-rolled bizarro cipher. This is not that: the primitives the post references are solid (e.g. BLAKE2b, ChaCha20, X25519).
There are definitely challenges with explaining crypto to programmers in a way that is non-scary and at the same time factually accurate; I walk that tightrope constantly with Crypto 101. There are significant factual inaccuracies in this post that matter and aren't just a temporary educational tool so you can get a concept across quickly. Someone on Reddit already pointed out most of the ones that were glaring at me after a cursory review. To the author's credit, he's put that link up by the start of the post.
There are at least two kinds of crypto education for programmers. One is practical advice or libs to help people build better applications with crypto. This means building better tools. Don't "pick Ed25519" -- pick a library like libsodium that did signatures right for you. Saying "compose ChaCha20 and append a MAC" isn't the best level of advice we can strive for. Instead, "use Fernet" and possibly "use libsodium's secretbox" is (although I think having to specify a nonce may not be the greatest default -- I'm working on fixing that).

The other kind of crypto education for programmers is to help people break things. That can help you become an expert, but I don't think we should expect any meaningful percentage of programmers to spend a bunch of time finishing all 8 sets of Cryptopals.

It is, however, a reasonable assumption that a (super)majority of programmers will at some point touch some crypto: the question is what they find when they do. Will they have secure password storage and not even notice because that was the right default in the software they were using, or will they have to cobble together their own? I for one am hoping it's the former.
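For reference, the Fernet recipe mentioned above (from pyca/cryptography) is about as close to a plain Encrypt(data, key)/Decrypt(data, key) as Python currently gets:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # everything else is chosen for you
    f = Fernet(key)

    token = f.encrypt(b"attack at dawn")  # AES-CBC + HMAC + timestamp under the hood
    assert f.decrypt(token) == b"attack at dawn"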
Fundamentally, there are two reasons why you don't want to roll your own crypto. One is because there's a good chance you'll mess it up. The other is because that's effort, and you are not going to do better than the programmers that already have done the hard work for you. This is sometimes true for primitives (don't generate your own FFDH prime/implement your own FFDH -- you will probably have small subgroups!), but especially true for high-level recipes at varying levels of complexity from authenticated encryption (you won't be better than OCB or secretbox) all the way to cryptographic ratchets.
Those reasons put together form my main criticism for this article. Sure, it has fundamental inaccuracies, they're not trivial, and that's not OK. But even if it was technically perfect, it doesn't help the next piece of software be safer.
(Disclosure: I am a cryptographer. I did https://www.crypto101.io. I am Latacora's crypto nerd.)
For example, naive me once wanted to use BCrypt as a signature algorithm for a web-based game when exchanging game state information, to prevent cheating by letting the server verify game state transitions. For the purposes at hand, it seemed like a strong enough cryptographic hash (the server handled the signing between clients). The only problem was that BCrypt disregards everything after the first 72 characters. So past the headers, everything verified as validly signed. Everything.
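Easy to reproduce with the Python bcrypt package (the payloads here are invented for illustration):

    import bcrypt

    # Two different payloads that agree on the first 72 bytes.
    m1 = b"A" * 72 + b"winner=alice"
    m2 = b"A" * 72 + b"winner=bob"

    h = bcrypt.hashpw(m1, bcrypt.gensalt())
    print(bcrypt.checkpw(m2, h))  # True -- bcrypt silently ignores everything past byte 72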
That said, your code looks too simple not to be easily breakable. I'm no expert, but that piqued my interest. I'll take a look. If I break it, I'll write an article about it.
I concentrated my efforts on your Cape::encrypt() method. Inlining Cape::crypt() and simplifying the resulting code gives something like this (it appears some computations cancelled each other):
void Cape::encrypt(char *source, char *destination, uint8_t length) {
    uint8_t iv = this->generate_IV();
    uint8_t real_key[_key_length];          // the key, reordered once up front
    for (uint8_t i = 0; i < _key_length; i++)
        real_key[i] = _key[(i ^ _reduced_key) % _key_length];
    for (uint8_t i = 0; i < length; i++)    // key repeats if the message is longer
        destination[i] = source[i] ^ iv ^ i ^ real_key[i % _key_length];
    destination[length] = iv ^ _reduced_key;
}
First, I don't believe your IV is truly randomly generated. I don't know of a real CSPRNG on Arduino, so… probably not your fault.
Second, your IV is only one byte. You cannot securely send more than 256 messages with the same key.
Third, the attacker can get rid of the IV anyway: since it is given at the end of the ciphertext, I can just XOR it with the rest of the ciphertext before trying to crack it. This makes your IV irrelevant. You are now limited to one message per key. In practical terms, you don't do better than a one time pad.
Fourth, the way you access the key in a weird order doesn't matter. It's the same as using a different key. Assuming your key is properly unpredictable in the first place, there is no need to access it in a weird order. I made this clear by reordering the key in a real_key.
Fifth, XORing the message with the index doesn't buy you anything: the attacker will just reverse it before trying to crack the rest.
Sixth, if your key is shorter than the message, you have a repeating key. Repeating keys have been known to be broken since… a long time ago. The reason is that XORing parts of the ciphertext encrypted with the same key reveals the XOR of parts of the plaintext, and that is easily broken in practice. So you're forced to use keys as long as the message.
Conclusion: you have a fancy one time pad. It is secure if your key is as long as the message, and you use it only once. I have to say your home-made crypto is not completely broken. But… a simple one time pad would have achieved the same results more simply.
A word of (dangerous) advice: you won't accomplish much with XOR alone. Modern ciphers tend to also rearrange the order of the bits. If I may, we have a real cryptographer here, and he has written a course: https://www.crypto101.io/
Some (but not all) of the code has been audited. Based on DJB's TweetNaCl.
LOL brilliant. That was fast.
Another example: criticizing that he mentions appending a MAC (you should choose an AEAD!)... when later he recommends using an AEAD.
But yeah, some of the criticism reads kinda nit-picky / "I'm going to find false things in here". On the other hand, he provides good hints about how the article could be improved.
> Another example: criticizing that he mentions appending a MAC (you should choose an AEAD!)... when later he recommends using an AEAD.
Later, in a section called "Level 2: choosing crypto", he recommends using ChaCha20 Poly1305.
Anyway, this isn't really my crusade, I just think that if you read the informal blog post as an informal blog post, it's fine.
Possible, but my readers aren't fair either. If I phrase stuff the wrong way, someone might go off and do something stupid.
In a similar vein, someone who stops reading at level 1 will miss my AEAD recommendation at level 2. Were they asking for it, or should I have put the recommendation earlier? Tough call.
I have updated my article since, and also made clear you can still screw up even if you had help.
And Muphry's Law strikes again!
But it's good that people look into it, for fun and learning. Just not for realsies.
Roll your own crypto, but be aware of the stakes, and be prepared to drop it the second that the stakes become real.
It's not a hivemind-y attitude, it's Software Engineering.
If you cannot show that a bridge will support a given weight, you do not build the bridge. Period. You learn that in your first Engineering course, freshman year. It's part of the responsibility of being an Engineer.
You can experiment with new bridge designs all you want, but until you can prove mathematically that the bridge is sound, you do not build the bridge. If you really want to roll your own bridge over the ditch in your backyard, go for it, but think three times before inviting your friend to drive their truck over it, especially if you're not already an expert on bridges.
Cryptography is not a small or simple subject. If you are not an expert in the field, it is incredibly unlikely that you will know enough to critique your own work. Instead of starting out by rolling your own, start by studying.
And yet I still say that nothing you said here is at odds with my statement: know the stakes. But play anyway.
You think engineers don't build wacky and dangerous shit for their friends to play with? Prototypes that stretch the limits of design and would be irresponsible to mass produce as-is? We do it all the time. It's part of the learning process. I've been physically hurt, shocked, burnt both by heat and chemicals, while working with other engineers' experiments. AND IT'S OK! It's ok to try your own solutions on your friends and peers while the stakes are low. Studying is only one half of learning. The other half is bringing textbook knowledge to the real world by building and prototyping and getting your peer group to throw hammers at it and see what happens. Engineering is not theory. Engineering is theory applied to the real world. You can't be an engineer if you don't have one foot on each side. Study on one side, build and play on the other.
It definitely is a hivemind attitude, because the literal hundreds of other engineers that I know -- people who design actual bridges, and buildings, and medical devices, and cars, and oil rigs, and theater sets, things that hundreds of millions of people rely on to be safe -- don't share this attitude. Their attitude is: know the stakes. Most software projects go nowhere and are usually only used by a handful of people close to the developer. That's the definition of low stakes. And it's ok to experiment, and to roll your own crypto, when the stakes are low. I think it's damaging, and probably hubristic, to think that the stakes are high all the time. They're not.
You don't become a "crypto expert" by experimenting with crypto; it's not an API or a library. You learn crypto by studying it. People have already made mistakes in the past, so why not learn from them instead of repeating them? Especially in the high-stakes situation that is causing you to use crypto?
Not going to rehash the arguments, but this was discussed recently: https://news.ycombinator.com/item?id=13199471
I'd need to think about it a bit more to convince myself it doesn't have any significant downsides though; it's less obvious to me. For one thing, requiring an OTP approach seems to constrain the set of encryption schemes you can use. (e.g. imagine an algorithm that reverses all the bits of every block before doing anything else.) Furthermore, if your custom encryption scheme happens to leave any kind of "watermark" on the ciphertext that makes it obvious it wasn't something standard like AES, applying the standard layer last will prevent the attacker from realizing you have a custom layer at all, until the standard layer is broken. Whereas if you use the OTP+XOR scheme, it might become obvious something else is going on too.
That said, this approach might actually be stronger than the layered approach, so it might be the better choice. Need to think about it more :)
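For what it's worth, the layered construction under discussion might look like this in Python: the experimental cipher runs first, a standard AEAD wraps it last, so the stack is never weaker than the standard layer (toy_encrypt is a placeholder for the custom scheme):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def layered_encrypt(key: bytes, toy_encrypt, plaintext: bytes) -> bytes:
        # Standard layer last: what an attacker sees is indistinguishable from
        # ordinary ChaCha20-Poly1305 output, hiding any "watermark" the custom
        # layer might leave.
        nonce = os.urandom(12)
        return nonce + ChaCha20Poly1305(key).encrypt(nonce, toy_encrypt(plaintext), None)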