> Designed to be as secure as a one-time pad, without a weakness due to the use of repeating keys,
That immediately shows that the person writing it doesn't know cryptography. You simply can't get the security of one-time pad without having a key the size of the data, and if you do, just use it as the pad.
> That immediately shows that the person writing it doesn't know cryptography. You simply can't get the security of one-time pad [..]
Talking about a one-time pad at all in the context of making something actually secure for internet usage shows that the speaker doesn't know about cryptography.
A one-time pad having "perfect security" is only true in the context of a particular security model that doesn't generally hold true on the internet - one where the adversary cannot change the ciphertext whilst it is in transit.
Under a more realistic security model, we need a message authentication code or some equivalent to protect against adversaries changing the ciphertext. It's a well-known theorem of modern cryptography that if you don't have secure authentication under this model, then you cannot achieve secure privacy either. In other words, your ciphertext has to be bigger than the plaintext for security on the internet.
(There is in fact a one-time-pad equivalent for MACs, where your key is the length of the plaintext and the ciphertext ends up being several times the length of the plaintext, but crackpots making security arguments about "one-time pads" generally aren't referring to it and usually aren't even aware it exists. It's a relatively obscure construction and nobody really talks about it in the context of serious modern cryptography.)
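To make the size overhead concrete, here's a minimal Python sketch (not from the thread; it assumes the `cryptography` package is available): an AEAD such as AES-GCM appends a 16-byte authentication tag, so the authenticated ciphertext is always longer than the plaintext.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    nonce = os.urandom(12)            # must never repeat for the same key

    plaintext = b"attack at dawn"
    ciphertext = aead.encrypt(nonce, plaintext, None)   # encrypt + authenticate

    assert len(ciphertext) == len(plaintext) + 16       # the 16-byte GCM tag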
Historically, OTPs were shared as physical pads of paper, with a different sheet for each message. This was useful because lost messages did not stop communication: if one sheet did not decode a message, you tried the next sheet.
However, this meant there were leftover bits of each key. Though if you think of each bit as a different key, then sure, just start each message with the offset.
I've always wondered whether the assumption that they never use the same pad twice would, in practice, let you get away with always using the same one (assuming the data length never changes).
Not sure what you're asking? If you use the same key over and over again, then you're using the same key much more than just twice.
The idea of a one-time-pad is that you're basically just randomly flipping the bits of your input, which means the output of an OTP cipher is indistinguishable from random data.
If you're using the same key multiple times though, then the output of the cipher (considered over time) won't be random, and you'll be able to detect patterns from the original input in the cipher output (e.g., the shape of an image, frequency of certain letters).
> The output is always random, regardless how often the key has been used.
The output of any _one_ use of the pad is, yes, but the point is to consider all of the data that an attacker may have. If you re-use the key multiple times, then the entirety of the cipher texts an attacker has is not random. (See also: https://xkcd.com/221/)
> But how do you know they're using the same one? Or how are you sure, they're not?
Note that that illustration (Tux_ECB) demonstrates a different problem (ECB ciphers can expose patterns across blocks) rather than reused one-time pads. One-time pads will always produce random-looking images as their output.
Eh, it's sort of the same? You can imagine each pixel as its own message: the point is that repeatedly transforming things in a consistent but "random" way isn't actually random. The ciphertext of each pixel is "random", but the pattern when looking at all the pixels is clear.
Someone intercepts the data, and has: EDGF and HGJI
And now?
Or maybe like this: since OTP and data are interchangeable, due to matching lengths, isn't using the same OTP with different data essentially the same as using the same data with a different key?
The normal operation for an OTP is xor. Now if you reuse the random key K on messages A and B, you get encrypted messages A' = K xor A and B' = K xor B.
Now, an attacker who learns A' and B' just needs to compute A' xor B' = A xor K xor B xor K = A xor B. Since your input is not random but structured data like natural language, this is now relatively trivial to break using cryptanalysis, since you essentially end up with something like "MEET AT DAWN" xor "I LIKE TRAINS".
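A minimal Python sketch of that cancellation, with made-up messages:

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    a = b"MEET AT DAWN "
    b = b"I LIKE TRAINS"          # same length as `a`
    k = os.urandom(len(a))        # a genuinely random pad, reused by mistake

    a_enc = xor(a, k)             # A' = A xor K
    b_enc = xor(b, k)             # B' = B xor K

    # The attacker never sees K, yet xor-ing the ciphertexts cancels it out:
    assert xor(a_enc, b_enc) == xor(a, b)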
Story time: the USSR once reused an OTP key (after years or even decades, I can't recall), but a US three-letter agency had the old ciphertext (A') and reused that to break the new ciphertext (B'). They probably had some scheme with a broadcaster saying "use codebook 1234, the secret is GARBLED DATA". At least that's the story a cryptography lecturer told us (and the fragments I remember).
I'm reaching the limits of my stats knowledge, but you may be able to figure out, even from just those two ciphertexts, something about the input plaintexts. It's obviously harder with shorter inputs.
I guess one thing to note is that, if what you were transmitting was just random noise to begin with, OTP re-use may not matter/be evident. But essentially all data that people care about transmitting isn't random noise, it has some structure, and that structure comes through with OTP re-use (more and more the more you re-use and the more data you re-use with).
EDGF encrypted with HGJI is now the same as ABCD encrypted with DEFG. In this example, that means the distance between characters of the encrypted messages is the same as the distance between the original messages.
From my limited knowledge of the matter, that alone doesn't break the cipher - you need additional information about the messages to recover them (word statistics, conditional probabilities of letter sequences, etc.). But without that one reuse of your pad, you couldn't apply these techniques at all.
Look at an ASCII table. Each byte could be any value, but if you're sending text data the 8th bit is very likely to be 0. That means if you're sending, say, 10 different messages, the same pattern is going to show up 10 different times, making it clear something is going on.
That ASCII example is rather extreme, but all messages have patterns, and given enough of them you can break a reused OTP.
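A quick Python sketch of that high-bit leak under key reuse (made-up messages):

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(32)
    msgs = [("this is message number %d !!!" % i).encode() for i in range(10)]
    cts = [xor(m, pad) for m in msgs]

    # XOR any two ciphertexts: the pad cancels, and because both plaintexts
    # are plain ASCII the top bit of every resulting byte is 0 -- a glaring,
    # repeating pattern across all ten messages.
    d = xor(cts[0], cts[1])
    assert all(byte < 0x80 for byte in d)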
Obviously it's more complicated with text (where you have less information filled in); for that you have to do crib dragging which is a bit more involved but not fundamentally difficult.
I'm confused; this, and a blog post linked somewhere else in this topic, describe the OTP as XORing the data with the key.
I've always assumed it was just adding the key-value to the data-value, as is described in the Wikipedia article[0].
And those two can't be the same, e.g. with both the data byte and the key byte being 0x5c, I'd get 0x00 with XOR and 0xb8 with addition.
EDIT: Ah, damnit, second paragraph, it says: "On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad."
Everything I've posted in this thread about OTP was under the assumption of an additive cipher. My bad.
The Wikipedia article is using modular addition as it's more accessible to the layperson than a binary XOR, but they're functionally equivalent for the purposes of cryptanalysis. XOR is just modulo arithmetic on single bits instead of larger numbers.
Edit for your edit: All of this discussion applies the same for additive OTP as it does for XOR OTP; once you have depths you can start applying the OTP (however it's done) “in reverse” as it were to begin extracting patterns in the data.
Both are one-time pads, there isn't one exact algorithm that's "the one-time pad". And XOR is really the same as addition, just done per-bit instead of per-byte/per-character. But as far as I know, XOR is the most commonly referenced example.
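A one-line Python sanity check of that equivalence:

    # XOR really is addition modulo 2, applied bit by bit.
    assert all((a ^ b) == (a + b) % 2 for a in (0, 1) for b in (0, 1))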
You can easily discover reused keys if you can guess any part of either plaintext. From that guessed fragment, you can recover both plaintexts and the entire key (pad) using the "Zig Zag" method.
(P=plaintext, C=cyphertext, K=reused_pad, ⊕=XOR)
If we capture two ciphertexts that reused the same key
C1 = P1 ⊕ K
C2 = P2 ⊕ K
Then combining the ciphertexts cancels the key
D = C1 ⊕ C2 = P1 ⊕ P2
The resulting D is the two plaintexts XORed together. If you can guess any part of either plaintext - a standard header or commonly used words (like "weather" or "Heil Hitler") - then XORing that guess with D reveals part of the other plaintext at the same position. Once a plausible match is found, the rest of the decryption is relatively easy: zig-zagging guesses of neighboring words, extending out from the original guess.
Professor Brailsford's explanation[1] of the method on Computerphile is a nice introduction to this type of cryptanalysis.
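For anyone who wants to play with it, here's a rough Python sketch of the idea (messages and crib are made up):

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"report: the weather at dawn is clear"
    p2 = b"advance on the eastern bridge at six"
    k  = os.urandom(len(p1))           # the pad that got reused
    d  = xor(xor(p1, k), xor(p2, k))   # = p1 xor p2, the key has cancelled

    # Drag a guessed word along D; wherever the result looks like text,
    # the crib has probably lined up with the other plaintext.
    crib = b"weather"
    for i in range(len(d) - len(crib) + 1):
        guess = xor(d[i:i + len(crib)], crib)
        if all(32 <= c < 127 for c in guess):
            print(i, guess)   # at the correct offset this prints a slice of p2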
It's not like it's hard to get K and K to cancel each other out using addition. Why are you generalizing from "the inverse of XORing K is XORing K again" to "the inverse of adding K is adding K again"? It isn't, but everyone knows what the inverse of adding K is.
C1 = P1 + K
C2 = P2 + K
C1 - C2 = P1 - P2
And if you can guess part of either plaintext, you'll see the same part of the other plaintext, and know what the key was, exactly the same as for XORing. As somebody else already pointed out, that's because XORing is a variety of addition anyway.
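The same attack, spelled out for an additive (mod 256) pad in a short Python sketch (made-up messages):

    import os

    def add(msg, key):   # "encrypt": byte-wise addition mod 256
        return bytes((m + k) % 256 for m, k in zip(msg, key))

    def sub(a, b):       # byte-wise subtraction mod 256
        return bytes((x - y) % 256 for x, y in zip(a, b))

    p1 = b"MEET AT DAWN"
    p2 = b"SEND BACKUPS"
    k  = os.urandom(len(p1))
    c1, c2 = add(p1, k), add(p2, k)

    # The pad cancels under subtraction, exactly as it does under XOR:
    assert sub(c1, c2) == sub(p1, p2)

    # Guessing p1 hands the attacker the pad and the other plaintext:
    assert sub(c1, p1) == k                # c1 - p1 = k
    assert sub(c2, sub(c1, p1)) == p2      # c2 - k  = p2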
"Can" as in "you could possibly do it, yeah", and "shouldn't" as in "really, really, REALLY shouldn't". The Venona cable decrypts were made possible by one-time pad reuse.
I'm not sure; definitely not if you use XOR-OTP, not so sure about additive-OTP - see my reply here[0] for the reasons.
Would like to know whether my assessment of additive OTP is correct, if anyone knows?
EDIT: I mean, of course you still shouldn't, but where the XOR version catastrophically fails after just a single reuse, the additive one should be more robust, even with reuse. The Venona decrypts were additive-OTP; I postulate the decryption rate would probably have been higher with XOR-OTP.
I thought the notion of an OTP is that there is absolutely nothing with which to decode the ciphertext, or to prove or disprove anything about it?
Like, if the ciphertext is "supersecret" then it feels like the plaintext can't be more obvious and the OTP must be "00000000000", but there's nothing anywhere to logically/mathematically support that. The OTP could be something else and the plaintext could be "hackernews" or "ycombinato".
In reality, a simple XOR with a random sequence preserves enough entropy of the data that I've heard you could make out voices if it were used on media, so payloads must be scrambled; but that's not recovery, only a guess. ANY reuse breaks that notion and makes it not OTP cryptography.
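That intuition can be made concrete: for a given ciphertext, every plaintext of the same length is "explained" by some pad, so the ciphertext alone proves nothing. A tiny Python sketch (arbitrary example bytes):

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    ciphertext = b"supersecret"[:10]      # any 10 intercepted bytes

    for candidate in (b"hackernews", b"ycombinato", b"\x00" * 10):
        pad = xor(ciphertext, candidate)  # the pad that "would have" been used
        assert xor(ciphertext, pad) == candidate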
If you rot13 your OTP the second time around, it's probably fine. In the same way that turning your underwear inside out is probably fine. As long as nobody sniffs...
This happens with some regularity in the crypto scene. Someone comes up with a cipher and they think it must be awesome! because they came up with it. Not being cryptographers, these ciphers always fall quickly, then the authors get defensive. I want a doll with Alan Turing's likeness that says "cryptography is hard" when you pull the cord.
That was one of my takeaways from reading Applied Cryptography. It has come in handy when evaluating crypto-based software. “Rolled their own crypto... avoid.”
I remember on an old crypto mailing list some lowly PHP programmer asking if instead of a grab bag of primitives there was some library where he could just encrypt/decrypt a random object. Without having to build a Frankenstein thing out of random cryptographic parts.
Everyone peed on him and he never came back.
The poor sap was right though. The proven algorithms tend to fail catastrophically when used or implemented incorrectly.
Ironically now PHP is one of the few high-level languages doing this right, with built-in libsodium support and an excellent idiot-proof library available in Halite on top of that.
Turing is my hero, as I'm sure he is for a lot of us. I sometimes apply the reasoning of "what would Alan Turing say?" And not only what would he say, but how would he say it?
I'm fascinated by his gentle nature as an algorithmic thinker who applied computer science to all aspects of his experience.
The author reminds me of some religious people I've had debates with in the past. They would make some fundamentally flawed argument, get shot down, and so they would move on to other arguments, which is not great but valid. But then, later, I'd see them making the exact same argument to someone else.
To people like this, arguing is like punching. If you punch someone and they get up, they're a "tough opponent". That doesn't mean that punching is ineffective on other people. They just don't understand at their core that once an argument is shown to be false, it can never be reused. Their mind just doesn't work this way. They think "Well, the left hook didn't work on this guy, but maybe it'll work on that guy" or "This guy didn't fall for this argument, but maybe that guy will."
It's deeply dishonest, but I'm not convinced they're even aware of this.
So, just from a cursory look at his code, the glaring problems I see are:
1) It'll crash if you feed it more than 256MB because of the way the C# BitArray class works. He's also using 32-bit ints in several places that have similar issues. These are implementation details, but it just goes to show how thoughtless the "reference implementation" is.
2) Similarly, the reference implementation copies the entire data into the BitArray (and then back into a byte array) for each "round". This is spectacularly inefficient. He never mentions throughput in terms of MB/s/GHz or any such thing. I wonder why...
3) It's not a stream cipher. You need to encrypt the entire file. Again, he could come up with some sort of streaming version by feeding the last few KB of the previous encrypted chunk into the next chunk, but he hasn't. As the long history of various streaming ciphers has shown, this is actually a hard problem to solve efficiently and securely. This is why AES-GCM is the hot new thing: it's both.
4) If either the key or salt values are all 0s then the encryption does nothing. AES for comparison will still encrypt your data with some level of security even if the IV initialisation is not perfectly random or skipped.
5) He's calculating the offsets as an "int" from a byte multiplied by a byte. Hence the maximum shift is 65536 positions. This is just big enough to exceed the L1 data cache on most platforms, but still have a high hit ratio. Whether you hit the L1 cache or not depends on the Key & IV values, so this is a recipe for timing-based side-channel attacks. Constant-time ciphers are basically mandatory these days...
6) Related to the above: the bit shifts are only in an 8KB window, but the byte shifts are in a 64KB window. I wonder if there's some interaction here where this might make some keys insecure.
7) Despite this windowing, the default "protocol" wraps around from the end of the file to the beginning using modulo data.Length, so it can't be used as a streaming cipher. To do so, he'd have to introduce a breaking change.
8) The encryption is defined in terms of bit-by-bit and byte-by-byte long-range operations, so there's basically no hope of ever making this efficient. E.g.: with SIMD or similar many-bytes-at-a-time instruction sets. The "state" that would have to be kept in CPU registers is over 64KB, so this is just never, ever, EVER going to compete with something like AES-NI.
I could go on, but I'm wasting my time. This is wasting everyone's time.
The author clearly has no interest in producing something that is secure and usable. He's just enjoying the arguments. He's even posted some of the feedback on his GitHub, proudly showing off the debates he's felt he's won.
> But then, later, I'd see them making the exact same argument to someone else.
When someone continues cargo-culting useful fragments of knowledge instead of actually learning how/why those fragments work, the cargo-cult behavior tends to incorporate increasing levels of magical thinking.
A good example of an extreme form of this is the "sovereign citizens" who try to use their "creative" reinterpretations of legal code and procedure almost as an incantation or spell. They like to read from Black's Law Dictionary as if it were a grimoire.
> If either the key or salt values are all 0s then the encryption does nothing. AES for comparison will still encrypt your data with some level of security even if the IV initialisation is not perfectly random or skipped.
Not experienced in this field but this seems like a good thing to me. Even just using crypto properly appears to be hard, and this 'do nothing when given 0' trait makes the vulnerability very obvious to a quick basic security examination (or even just looking at it in Wireshark), vs needing an expert to do lengthy analysis and figure out that you subtly fucked up the AES initialization and had been running unsafe code for years.
> They just don't understand at their core that once an argument is shown to be false, it can never be reused.
I'd frame it as: it doesn't matter to them. People who aren't in a feedback/reward loop with reality or with others feel no consequences and no need for change or improvement. An engineer like you would generally want things to be correct regardless of outcome, but for general internet idiots or lots of startup self-appointed C-classes, sadly that is often not the case.
You quite naturally see a discussion of a cipher as an attempt to find the truth, and it barely registers for you that it might not be. They probably see a discussion of their cipher as an attempt to persuade, so they use the most effective argument they can think of at a given time, and it barely registers for them that the discussion could be an unbiased attempt to find the truth.
That's an interesting article, but I think it's oversimplifying things, which I suppose makes me a Mistake Theorist. 8)
I prefer to think of people's traits not in simple binary terms, but more in terms of high-dimensional attributes. Think: word2vec.
In practice, I find that people are messy. They have their own interests, and hence there's conflict, but they're also lazy, hence the mistakes. There's plenty of room for several kinds of suboptimal behaviour in their squishy meat brains.
Sometimes I feel like I'm The Man From Earth, watching this craziness unfold from the outside.
It's important for different people to have excessive faith in different beliefs. If everyone was just about the right amount of reasonable, then small errors in judgement would cause them to give up on an idea that might turn out to be valuable. People believe things for emotional reasons they don't understand and the logic is just an attempt to justify it. They might still have valid emotional reasons behind it. In the case of religion, it does have value even if it's wrong.
One interesting take on this is that certain attitudes such as "authoritarian" or "egalitarian" both have value in the sense of the "selfish-gene" evolutionary theory.
So for example, imagine a primitive tribe of humans facing a drought. If they spend too much time discussing options democratically, if they don't all agree, or if they waste time by changing their mind half-way to a source of water, they'll all die. But if they follow an authoritative leader -- even if he makes bad arguments or none at all -- they might all make it to the mountain spring in the distance and survive.
The gist of this is that in times of hardship, authoritarian, conservative, or "right-wing" styles of thought/argument/politics/whatever can win the day. But in times of relatively low stress, more democratic/left-wing/progressive attitudes can help the tribe discover even more resources than they would have if they always followed the same instructions to go to the same sources. Innovation and exploration can turn the merely adequate into a bountiful plenty, resulting in more healthy babies, etc...
So these attitudes and personality traits are important to have present in the gene pool to make the species as a whole robust against a wide range of challenges and threats.
On the other hand, I feel like the elitist, alienating "don't even try to learn, just obey us" attitude of much of the cryptography/security community could also be making the situation worse. I mean, a key-and-data-dependent permutation is how a lot of ciphers work; that's not a fundamentally flawed idea. The details of the implementation are a different matter, however.
That's like saying "being a vehicle is how a lot of cars work"; the reasons why the cipher is flawed are a bit more specific than that.
As for the elitist arrogance, whilst I'm sure these exist in other security-related interactions across the world, the interaction that's the subject of this thread involves plenty of polite comments that
- point out author's lack of background knowledge
- refer to short-cut theorems that help evaluate a cipher, implying the author should learn about these theorems
- explicitly tell the author to learn about these theorems
In response to this, the very first reply the author gives is "It almost sounds like you have had your soul crushed by bureaucracy over the years and have lost all passion for this field. I hope that's not true."
Yeah, however arrogant you consider the security people in this thread to be, it's only fair to interpret the OP (Mark McCarron) as 2-3x as arrogant.
I am honestly very triggered by any tone of elitism in the sphere of intellectual or professional pursuits. It's not just in tech - just try googling 'DIY HVAC Repair'. Now, I don't advocate partaking in activities which are legitimately dangerous without training and certification (e.g. charging an HVAC system with freon or operating a crane), but when all you are trying to do is replace a contactor in your HVAC system and you get a wall of "DIY is not allowed here" when asking basic questions, it really puts some hate into your soul. Cryptographers give me the exact same sensation with their 'no DIY' rhetoric.
At the end of the day for me, it mostly boils down to some ulterior agenda which seeks to keep competition or perceived threats to the incumbents' control at bay.
Does the NSA have a vested interest in most technology workers being able to competently roll their own encryption schemes? From my very jaded political perspective, I would say absolutely not. It arguably makes their mission exponentially harder if they have to deal with novel approaches to securing communications. They would probably prefer everyone just use ECC with their hand-selected curve parameters.
> It arguably makes their mission exponentially harder if they have to deal with novel approaches to securing communications.
NSA and GCHQ would absolutely love it if people used novel approaches to cryptography and started kludging together dumb crypto systems. This makes their stated mission considerably easier.
Earlier today I was watching a YouTube video somebody referenced in my social media feed about Fake Martial Arts.
Mostly these aren't just heavily stylised exercises that have limited practical value as fighting styles, they're woo like energy blasts or psychic power that can't work at all. Practitioners wave their hands or say magic words and seemingly defeat groups of skilled foes. Except it's bullshit.
The video includes some unpleasant though relatively brief excerpts showing what happens when a practitioner of such a fake won't back down and fights someone who knows what they're doing. They generally seem initially very confident and then within seconds they're on the ground just trying to keep from getting further hurt. That they'd show up and fight rather than make thin excuses and vanish suggests these people are delusional rather than (or as well as) crooks.
Fighting and cryptography are both disciplines where it isn't just a matter of opinion whether you're right.
I'd add the following three generic warning signs of crackpots, all of which are on display here.
1. Person doesn't know mailing list etiquette, for example top-posting and failing to send hard-wrapped plain text messages. Also not understanding the purpose of a mailing list intended for something else.
2. Person claims that others are rejecting basic principles of open-mindedness and making trivial errors in reasoning, or that they don't understand foundational elements of their field. Accusations of rudeness and threats to professional reputation.
3. Person uses just enough technical jargon to give the impression that they have some fluency in the relevant field. When corrected, they often seem to just barely misunderstand the correction, in order to give correspondents the false hope that they are open to being shown their errors.
Thanks for the Trisector link, it's been a couple years since I've read that one.
Reading this reminds me of arguments with anti-vaxxers, religion vs. science, red vs. blue, flat earthers vs... the rest of us?
Whatever their "idea" is, it is infallible. Minds cannot be changed with logic or debate, even when the idea is clearly wrong/false.
A belief is unbreakable if the person holding it wishes it to be so. Discussing such topics over the internet will never lead to yielding any ground.
Perhaps at the core of it is the fear of being rejected or wrong. Or that their world as they know it is crumbling, and keeping it together is the highest requirement, whatever the cost.
I keep hope for these discussions in that people can change. As cheesy as it is to say, I've only seen people really change when it's with love. That we reach out the other side and simply love the person first. Daryl Davis is my hero on this[0].
Unrealistic in a forum about cryptographic schemes, but it hurts to read the circles the author argues himself into.
Has anybody ever really debated a "flat earther", let alone found one? As in, found somebody who actually believed it and wasn't just screwing with you to get a reaction?
I keep hearing about these people who believe the earth is flat but I've yet to ever come across anybody who actually believes that.
I’m pretty sure I’m related to an honest-to-goodness flat earther. The first time he brought it up I countered by professing a belief in Last-Thursdayism[0] as a sort of absurdist ploy to get him arguing for the more rational position. I think I converted him to Last-Thursdayism instead.
So you argued for a position you didn't actually believe just to see how he would counter? I suspect this is what most "flat-earthers" do. I know multiple people who independently joined a "flat earth" Facebook group for entertainment purposes. It is just trolls trolling trolls.
Here's a video of three flat-Earthers discussing (arguing) about it with three round-Earthers: https://www.youtube.com/watch?v=Q7yvvq-9ytE. They appear genuine, and remind me quite a bit of the anti-evolutionists I sometimes argue with (and who are definitely genuine).
And then there's "Mad" Mike Hughes, who just managed to kill himself in a rocket he was hoping to fly high enough to see that the Earth wasn't curved... or something. https://www.bbc.com/news/world-us-canada-51602655
This. The idea of actual flat-earth believers should be met with at least the same degree of scepticism as they supposedly demand of round-earth believers. Mind that merely having someone claim to be a flat-earther or arguing in favor of flat earth does not distinguish between them actually believing it and engaging in some sort of Socratic rhetoric (so as not to say trolling)
I've come across people who believe the earth is 5000 years old, even though they live right under a cliff where the exposed strata clearly show that can't be true. I didn't try arguing with them but they certainly exist.
A number of people genuinely believe in "young earth" creationism. But flat earth is at a different level, since it is disproven by everyday phenomena like people flying around the earth.
Well, in my childhood I realised that in the religion my father was involved in, there was a common belief that the moon was made of cheese, and that there was 'scientific evidence' for this in the scriptures. This also nicely corroborated the conspiracy theories that were commonly believed, like those about NASA landing on the moon.
The short summary is that flat earthism is probably best seen as a rejection of established institutions (like capitalism/neoliberalism) that its adherents (correctly) see as a major source of the problems in their lives. Unfortunately, lacking the social/political background required to actually understand these problems, they work with what they can understand.
"The government/etc has been consistently lying to us for decades while our jobs were shipped overseas and our quality of life keeps sliding lower and lower. If they are consistent liars, why should I believe anything they've said?"
People can only change when they themselves want to change. To make them want to change, you have to find a reason for them to want to have already changed. If you find a good enough reason, they will then slowly convince themselves that the change is good.
That psychological tricks like this are necessary to make people see that 1+1=2 makes me feel hopeless that humanity can ever reach the level of Star Trek without some kind of genetic engineering or letting AI take over. The vast majority of people are like this, incapable of reasoning through anything, only defending the views they grew up with or that they ascribe to their "tribe" by any means possible, never taking a Planck unit of time to self-reflect. There's some respite on select internet corners but even those have their pockets of crazies.
I recently had to regrettably leave a fandom Discord server because a sizable portion of the populace including the admin devolved into screeching banshees with no capacity for rational thought at some /slightly different/ view than they preferred being posted in their politics channel, and I could no longer justify expending any energy trying to engage in a reasonable fashion with these people or supporting a server with such a rotten administration I'd lost all respect for. I wonder if those of us who can consider ideas without throwing ourselves on the floor in a tantrum over the fact someone disagreed with us should just move to another planet then come back and conquer Earth in the five years it would take us to develop the capacity to do so without such people in our way.
You have, I think, accurately identified the problem which religions try to solve.
Also, please don’t think yourself above, as you say, “The vast majority of people”. You yourself are like this too (and me). It’s just that you have different things which you think are important. It’s only when we aren’t attached to any particular view that any of us can reason with any semblance of logic. Which is why, I guess, that some religions urge detachment.
Is there evidence many religions "try" to do anything rather than simply being amalgamations of stories? I heard once some developed more as a means of control by kings but I don't have any source. The only religion I know of that I'd say really gives a good counter to the problem in question is Buddhism. And frankly I would consider anyone able to carry out these virtues to be "above" a lot of people in a certain way.
Nobody can read minds, and certainly not the mind of an entire religion, and by its very nature this is something which could never be spoken about publicly by its representatives. So we have only the actions of religions to go on. If we discount all the actions which can readily be explained as the kind of random madness stemming from any cult-like organization and/or culture, I get the impression that many religions actually do try to rein in the population at large and keep them in reasonable behavioral patterns.
"Thirdly, we deliberately introduced confusion over the systems architecture. This was not to protect any secrets we had, it was just another tactic in the controversial marketing tactic."
So it's not that he can't explain how it works, he deliberately introduced errors in the explanation, which allows him to conveniently claim that people poking holes in his crap don't actually understand the system, because he hasn't accurately described the system! Haha! Gotcha! Therefore the system is perfect!
"I had done it, I was the first in the world to prove, beyond any doubt, that the pyramids of the Giza Necropolis were, in fact, a scale representation of the three inner planets."
Wow. Just wow. That took a turn.
Extensive pattern matching is a marker for mental illness, and is no joke. That explains so much about the author.
To be fair to Mr. McCarron, having muddled my way through his accusations against ASRG and life history, the basic idea of his scheme¹ is simple and appears sound; when DKIM was later standardized it would achieve the same result (albeit by cryptography rather than a simple token check).
Unfortunately, he seems to have missed the idea of just adding a new envelope header and SMTP command, and instead gone the route of throwing the baby out with the bathwater and reinventing not only the wheel, but quite possibly the concept of circularity itself. Not only was GEIS to have replaced SMTP (in an entirely incompatible way), but he even went so far as to declare that “all 'GIEIS' servers will run on a separate transmission protocol (not TCP/IP).”
Add to that protocol megalomania some impenetrable architecture diagrams, a curious bit of Egyptology, self-admitted arrogance, and a very suspicious bit of sock-puppetry, and it's no wonder that the Register gave him such a send-up.
¹(include a nonce when sending an email, which the recipient can check with the originating server to confirm that the email is genuine)
It's not the first response he received. At the beginning they treated him respectfully, but the man was obnoxious to the extreme. I had great fun reading this.
Between the github account and theregister article posted in the comments here it is clear that the person in question is psychologically unwell regardless of whether he believes his own assertions or acts them out in some performative capacity. Using it to draw morals about thickness or inability to address criticism is not very helpful here.
If anyone wants another rabbit hole, reading through the Open Street Map Foundation resolution for the indefinite ban of user sorein is a good evening read:
Wow, I imagine it takes a lot of work for your ban to get a 49-page PDF report written about it:
> In response, Harry Wood of the Communications Working Group tries to reason with Mr. Acela (Appendix A.3), and Sorin continues his rude replies, claiming that the redaction process was STUPID and that the community members [...] are terrorists (Appendix A.4).
> [...]
> It never came to a Skype session because mediation requires a certain base level of respect and understanding between the parties.
It's a funny coincidence that I was just reading this book (https://toc.cryptobook.us/), Figure 4.5 on book page 102, that talks about this exact issue.
I doubt the author realises that the system must be resistant against a class of attacks (in this case, I believe a chosen plaintext attack). From reading the thread, and comparing it to the example in the book, it seems like the non-uniform patterns in the output highlight a possibility of a CPA.
And of course, the author wants a fully implemented + concrete attack instead of pointing to what's an obvious flaw to the crypto community.
At some point in the thread the author complains that it isn't fair to point out the repetitive patterns "because the clear-text is all zeroes and this would be masked by actual data"
So the author tried to encrypt the null byte, repeated thousands of times? That seems like the most pointless test I could dream up for any processing algorithm.
Compute all the primes less than 0? Easy.
Achieve compression approaching 100% on a string of zeroes? Easy.
Losslessly compress an image with all pixels having opacity of 0? Easy.
If an all-null message results in a simple/repeating ciphertext, it means that there exists at least one message whose ciphertext leaks information, and that in turn means that there are probably more messages like it, and it's also probable that all messages leak info to varying degrees.
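As a related illustration (a generic Python sketch, not the author's construction): for any cipher that just XORs a keystream over the plaintext, the all-zero message hands you the raw keystream, which is exactly why all-zero input is a standard sanity check rather than a pointless test.

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Stand-in keystream (purely illustrative, not any real cipher's output).
    keystream = bytes((i * 37 + 11) % 256 for i in range(64))

    ciphertext = xor(b"\x00" * 64, keystream)
    assert ciphertext == keystream    # "encrypting" zeroes exposes the keystream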
"I'm somewhat disappointed in your reply, as I presumed that someone with a stated interest in ciphers would be eager to investigate anything new to pop up that didn't have obvious holes in it."
This is totally, utterly and critically wrong. Ciphers with "no obvious holes" are a dime a dozen. Nobody is interested in looking at your cipher unless you have strong evidence both that it beats the existing ones in at least one area and that you did your homework to check for known weaknesses.
This might actually be good to include as required reading in first-year computer science courses. It's an excellent example of "how not to conduct yourself".
Actually, in any science. He was surrounded by knowledgeable people pointing out the flaws in his work and he didn't learn anything from it. Maybe it's one form of lunacy.
I remember similar stuff from the data compression usenet groups years ago. So if anyone is interested, I have software that'll compress megabytes, gigabytes, terabytes of data down to a kilobyte! And it can encrypt your data so that not even state actors can recover it!
Sure, you can’t recover the data, but it’s compressed and encrypted so that your adversaries also can’t.
It gets so frustrating when people with obviously crazy ideas get upset if you don't engage with them and tell them why they are wrong. Some ideas are just so crazy they aren't worth wasting time engaging with.
I've read through some of the mails and the description of the algorithm on Github.
This gives me some serious Kryptochef vibes.
If you haven't come across that name before: way back when, some guy was trying to sell his "Vollbitverschlüsselung" (Full-bit-encryption) software which he touted as the most secure in the world with an utterly bizarre explanation on how it was supposed to work. I'm not absolutely sure to this day if it was an elaborate hoax or whether he was actually serious.
It seems inevitable that every so often some person shows up with zero math/crypto credentials who claims they've invented a new miracle cryptosystem. It's always eviscerated by the professionals.
The problem isn't necessarily having no math/crypto credentials, but falsely believing one has unique insights while refusing to learn even the basics of the subject. It's more an issue of psychology than an issue of knowledge.
To be fair you can use AES to produce repeating patterns as well. Just look up Tux encrypted with any modern block cipher using ECB: https://blog.filippo.io/the-ecb-penguin/
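A minimal sketch of why that happens (assumes Python and the `cryptography` package): under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, which is what draws the penguin.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

    plaintext = b"\x00" * 16 * 8                   # eight identical 16-byte blocks
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    blocks = {ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)}
    assert len(blocks) == 1       # every ciphertext block is identical too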
That’s true. The analysis isn’t wrong, but it isn’t an algorithmic analysis. It’s just a quick statistical analysis that detects inherently bad properties for crypto. That was more my point. This kind of analysis isn’t bad but it’s just a sniff test.
Look up “cryptographic right answers” and dig up the threads from HN on it. Basically, don’t do the crypto yourself (even when building from existing algorithms/constructs).
I think "poseur" is less-suitable, since it implies the person is motivated by the glamour of the role. Not many people--crank or otherwise--go into cryptography for the fame.
Interestingly, these same sorts of visual patterns can be found when generating samples using poor pseudo-random number generators, like C's rand() function. When using the C++ Mersenne Twister algorithm, the obvious coherency patterns go away.
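A toy Python sketch of the kind of structure involved, using a power-of-two LCG as a stand-in for the weak generators behind many historical rand() implementations (Python's own random module is a Mersenne Twister):

    import random

    def lcg_low_bits(seed, n):
        bits = []
        for _ in range(n):
            seed = (1103515245 * seed + 12345) % 2**31
            bits.append(seed & 1)     # low bit of the raw LCG state
        return bits

    print(lcg_low_bits(42, 16))                         # strictly alternating bits
    print([random.getrandbits(1) for _ in range(16)])   # Mersenne Twister: no such pattern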
If you read through the thread, it becomes clear that mental illness is a factor here. With so many such cases over the years, it's as though cryptography has its own flavor of Jerusalem Syndrome. It's such a magnet for this. The overconfidence of some of these people is really impressive. Plot twist: while being wrong about almost everything technically, perhaps he's right about the NSA wanting the conversation confined to a small, well understood set of algorithms. But still, ugh, no thank you to what he's offering.
it looks less like airborne influenza and more like airborne HIV, at least from the fact that people are testing positive weeks after treatment [0]...
See also: https://github.com/mmcc1/crystalline http://maldr0id.blogspot.com/2015/05/crystalline-cipher-and-...
The author's inability to take any criticism is stunning; sometimes I wonder if a fragile ego isn't one of the greatest barriers to cybersecurity...