The NSA themselves are concerned that quantum computing will be a great threat to encryption in the near future.
Keep in mind that the NSA and god knows who else are storing encrypted communications to break them later.
Quantum computing will defeat RSA, DH, ECC, and asymmetric crypto in general, but it will only weaken symmetric crypto (e.g. AES) by halving the effective key length.
So according to my Internet research: if your symmetric key is twice as large as it needs to be, it is future-proof.
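A back-of-the-envelope sketch of that "factor of two" claim. The key point is that Grover's algorithm halves the *effective key length*, not merely the work:

```python
# Grover's algorithm searches an unstructured keyspace of size 2^n in
# roughly 2^(n/2) operations, so a quantum brute-force attack halves
# the effective key length of a symmetric cipher.
def effective_bits(key_bits: int, quantum: bool) -> int:
    """Effective security level against brute force, in bits."""
    return key_bits // 2 if quantum else key_bits

# AES-128 drops to ~64-bit security against a quantum attacker, while
# AES-256 still leaves ~128 bits -- which is why "double the key size"
# is the usual hedge.
assert effective_bits(128, quantum=True) == 64
assert effective_bits(256, quantum=True) == 128
```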
Also (and please correct me if I'm wrong) I believe the triple encryption Serpent(Twofish(AES)) available in VeraCrypt (TrueCrypt fork) even protects against weaknesses which may be discovered in any of these cryptosystems: they would have to defeat all three.
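A toy sketch of the cascade idea, with keyed XOR stream layers standing in for AES, Twofish, and Serpent. (The real VeraCrypt cascade uses XTS mode with independent keys per cipher; everything below is purely illustrative and NOT a secure construction.)

```python
import hashlib

def stream_layer(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher keyed via SHA-256 in counter mode -- a
    # stand-in for one real cipher layer, NOT secure crypto.
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

def cascade_encrypt(keys, plaintext):
    # Apply each layer in turn: layer3(layer2(layer1(pt))). Recovering
    # the plaintext requires defeating every layer, provided the keys
    # are independent.
    ct = plaintext
    for key in keys:
        ct = stream_layer(key, ct)
    return ct

keys = [b"aes-key", b"twofish-key", b"serpent-key"]  # independent keys
ct = cascade_encrypt(keys, b"secret logs")

# Each XOR layer is its own inverse, so decryption applies the same
# layers in reverse order:
pt = ct
for key in reversed(keys):
    pt = stream_layer(key, pt)
assert pt == b"secret logs"
```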
How would you propose to securely exchange the keys for the several algorithms? If there is enough meta or implied information for you to know what algorithms are used, an attacker would also know what algorithms are in use.
The obstacle isn't lack of randomness or an excess of decipherability but not using a quantum resistant algorithm.
Ignoring quantum computing, doubling up algorithms doesn't really protect you against unknown unknowns all that much as there is a good chance that a massive flaw in one of them could apply to the others (since they are all somewhat based on the same "problem").
When you start layering lattice based crypto with traditional crypto the payoff might change and make it more viable.
There could be double-tree PKI, where nodes in the tree are represented by two key pairs in different kinds of key spaces, with the different signatures terminating in two ephemeral key pairs, which are then used for key derivation, and the subsequent secret perhaps concatenated and boosted into a higher key space? The complexity would be mind-boggling. And weaknesses in one half of the tree could translate into a security reduction on the final symmetric key, so it would have to be way overpowered to be effective.
Not a simple problem to solve.
If you're using encryption to keep your data safe (e.g. local files, full disk encryption), then you don't actually need asymmetric encryption.
Additionally... unless you generated all the bytes you are encrypting yourself without transmitting them over a network at any time... e.g. video/pictures you took yourself and then stored to your own encrypted disk, your data could be compromised by quantum-breakable encrypted communication protocols at the time of transmission. I would wager that most data worth surveilling goes over the network at some point using TLS.
Done right, this means quantum won't get you access to dropbox-like stuff. (Dropbox itself probably doesn't work like that, it'll store the symmetric keys server side somehow.)
There's no encryption there at all, except TLS in transit.
Eg think you want to encrypt your logs: with asymmetric encryption you can store the public keys on all your servers, encrypt your logs as you generated them and forget the plain text as soon as possible. You keep your private key hidden away on a secure server, and only use it when you actually want to look at your logs (which doesn't happen all that often).
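That workflow can be sketched with textbook RSA and tiny toy primes, purely to show the shape of it: the public key lives on every log-producing server, the private key stays offline. (Real systems use hybrid encryption -- e.g. GPG or age -- never raw textbook RSA; all numbers here are illustrative.)

```python
# Toy RSA key generation with classic textbook primes.
p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent, lives on servers
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, kept offline

def encrypt_byte(m: int) -> int:
    # Runs on the log-producing server; needs only the public key.
    return pow(m, e, n)

def decrypt_byte(c: int) -> int:
    # Runs only on the secure box, and only when you read the logs.
    return pow(c, d, n)

ciphertext = [encrypt_byte(b) for b in b"GET /admin 403"]
# ...the plaintext log line can now be forgotten. Later, offline:
assert bytes(decrypt_byte(c) for c in ciphertext) == b"GET /admin 403"
```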
It's really cheap to store ALL phone calls these days, and just "playback" when you get the legal search warrant.
Contrast with deriving a key using cryptographic primitives, which can accept low-quality randomness (as long as there is sufficient entropy in total) that can be easily and transparently collected.
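A minimal sketch of that idea: pool several mediocre entropy sources and run the pool through a KDF. As long as the combined pool has enough entropy, the derived key is strong even though no single source is. (Source names are illustrative; a real system would use HKDF, scrypt, or argon2 with proper parameters.)

```python
import hashlib

# Illustrative low-quality-but-collectible entropy sources:
entropy_pool = b"".join([
    b"mouse: 103,481 102,490",        # interrupt timings
    b"disk-seek-jitter: 8123 9042",   # hardware jitter
    b"boot-time-ns: 172636183",       # boot timestamp
])

# PBKDF2 stands in here for a proper KDF; the salt and iteration
# count are arbitrary for the sketch.
key = hashlib.pbkdf2_hmac("sha256", entropy_pool, b"some-salt", 100_000)
assert len(key) == 32  # a 256-bit symmetric key
```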
But that was never the problem. The problem is, now what? To use this OTP you need to securely deliver pads to everybody you'll ever send a message to. So, OTP is practical for a handful of secret agents who'll receive messages of a few dozen words per year from a single controller, and useless for most of us in the real world.
_This_ problem is why we have public key cryptography.
The only thing that OTP buys you is that you can exchange the pads at your convenience any time before.
With a few GB (a couple bucks in a supermarket will buy an 8 GB USB stick) you can communicate in text about 16 thousand books' worth of words.
In other words, to deplete the pad you would have to write sixteen thousand books.
I think that's pretty convenient, as far as literally unbreakable encryption goes!
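The arithmetic works out if you assume roughly 500 KB of text per book (my own ballpark figure), and the mechanism itself is only a few lines:

```python
import secrets

PAD_SIZE = 8 * 10**9          # an 8 GB USB stick's worth of pad
BYTES_PER_BOOK = 500_000      # ~500 KB of text per book (assumption)
print(PAD_SIZE // BYTES_PER_BOOK)  # books you could send before depleting the pad

# The cipher itself: XOR each message byte with one fresh pad byte,
# and never reuse a pad region. Information-theoretically unbreakable
# *if* the pad is truly random, at least as long as the message, and
# exchanged securely in advance.
def otp_xor(pad_slice: bytes, data: bytes) -> bytes:
    assert len(pad_slice) >= len(data), "pad depleted"
    return bytes(p ^ d for p, d in zip(pad_slice, data))

pad = secrets.token_bytes(64)           # tiny demo pad
ct = otp_xor(pad, b"meet at dawn")
assert otp_xor(pad, ct) == b"meet at dawn"  # same pad slice decrypts
```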
It's just that key-sharing is basically the most complicated and vulnerable part of modern cryptography.
We are of two minds about IPsec. On the one hand, IPsec is far better than any IP security protocol that has come before: Microsoft PPTP, L2TP, etc.
On the other hand, we do not believe that it will ever result in a secure operational system.
It is far too complex, and the complexity has led to a large number of ambiguities, contradictions, inefficiencies, and weaknesses.
It has been very hard work to perform any kind of security analysis; we do not feel that we fully understand the system, let alone have fully analyzed it.
Doubled key size does not mean twice as hard to break. Algorithm weaknesses are a thing.
This WP is pretty accurate though. It describes the world we think will exist after legitimate quantum computing.
Sort of like trying to measure a building to the nearest inch is harder than nearest foot, and every added digit of accuracy is even harder.
It's not just the number of bits that affects signal to noise ratio, though... It's also the complexity of the calculation; more gate operations and longer storage in memory leads to more quantum decoherence. And AFAIK, each type of calculation must be implemented in hardware, because reversible quantum gates must be used. This means you would need a specialized chip for each algorithm. I haven't confirmed this understanding with someone that specializes in QC, but I don't see a way around it, other than to combine many algos onto a single chip, sharing as many gates as possible, but degrading S/N further.
Something tells me, for an intelligence agency, building a quantum computer only for the purpose of breaking cryptographic keys would be a worthwhile investment.
An investment they have already made:
The effort to build “a cryptologically useful quantum computer” -- a machine exponentially faster than classical computers -- is part of a $79.7 million research program called “Penetrating Hard Targets.”
Scott Aaronson's blog has some good background information for the layman.
For example, the NIST standard for the controversial dual elliptic curve deterministic random bit generator involves two numbers, p and q, which are 100 digits long. To date, the largest integer factored on a quantum computer is reported to be around 200,000...
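For a sense of scale, compare those two figures (both taken from this thread as reported, not verified records):

```python
record = 200_000      # approx. largest integer factored on a quantum computer, per the comment above
target_digits = 100   # digit count of the constants in question

print(len(str(record)))                  # digits factored so far
print(target_digits - len(str(record)))  # orders of magnitude still to go
```

Six digits down, roughly ninety-four orders of magnitude to go.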
New attacks are discovered from time to time. It is unlikely someone will try to bruteforce your encrypted data. Much more likely that some vulnerability will be discovered in AES or the way you generate keys for it.
EDIT: nevermind, i've now read through the thread more and have found answers. :)
https://en.wikipedia.org/wiki/Post-quantum_cryptography
pardon my ignorance. but, isn't this an inevitability? not just a possibility?
Firstly, quantum computing at scale _might_ be possible in our universe but it might not. One of the spookiest things that might still be true would be a thing called finite non-local hidden state. In this scenario the whole universe has some sort of hidden state, a bit like the seed value of a Minecraft world. Quantum computing in a universe with finite non-local hidden state just weirdly "doesn't work" when you scale it up, because it's basically using the universe's hidden state as magic working storage, and that runs out. Simulation nuts would tell you finite non-local hidden state makes it pretty clear we're in a simulation, but then they would say that...
Secondly it says "near future". We have every reason to believe there will be serious obstacles to scaling up quantum computing. Somebody else gave the example of Babbage's engine. Just because we can conceive of fusion power generation doesn't mean it's going to happen next week.
Or at all. At the start of the 20th century some mathematicians thought all of mathematics could be formulated as a handful of assumptions plus a huge chain of inference; it was just a matter of writing it all out precisely. Surely an inevitability. The project was begun by mathematicians and philosophers, and it was going pretty well (proving that 1+1=2, for example) up until this chap Gödel came along and straight up proved it impossible with his Incompleteness Theorem: if you can do 1+1=2, then Gödel shows how you can write out effectively the equivalent of "This statement is false" and blow up your whole system.
So the nice thing about QC research is that we either get shiny new computers to play with, or we learn something new about Quantum Mechanics.
but i appreciate you reading between the lines and following up with a great response- i have a lot to Google. :)
If it's not the near future, we might move to post-quantum algorithms (or quantum cryptography, a field almost entirely unrelated to classical cryptography which involves carefully moving entangled pairs of particles around) before quantum computers become any good at cryptanalyzing realistically-sized encryption algorithms.
It is also extremely annoying that programs like gpg do not support generating larger RSA key sizes such as 15360 bits, which would require many more qubits to break (and since RSA-4096 and RSA-2048 are much more commonly used, adversaries might never build a quantum computer capable of breaking the larger keys, or might build far fewer of them).
That's quite a bummer. I finally managed to convince someone to use gpg and now I learn that this just makes sure your emails will be read/analyzed by anyone who has an interest in it in the future.
So it isn't necessarily about what I am communicating or to whom I talk, but the fact that it seems impossible for society as a whole to hide the details of its functioning.
I viewed pgp/gpg as a tool that could counter this. Convincing people to use it would've been an uphill battle but there would have been a slim chance of succeeding to a useful degree.
All the headlines will be: "Heisenberg uncertainty principle violated; science, technology and life as you know it will change".
This has nothing to do with 'engineering challenges'. You are dealing with cold, hard physics limits in our current understanding of the standard model; the barrier is simply there.
* If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure.
* If the source code is not available for review, the software is not secure
* If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure
* Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.
* On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.
* If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.
I could go on, and I'm FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the "secure" option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:
* The dev team seems like honest and capable people
* Someone I trust or some famous person said this software is secure
* They have a home page full of buzzwords and crypto jargon
* They threw some code up on github
* I heard they are secure in half a dozen tweets and media channels
To me, security is not a binary property but rather a sliding scale. WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth. No hacker has demonstrated that WhatsApp are lying and the Wikileaks dump suggests the CIA has been unable to intercept messages in transit. Given this information I would rate WhatsApp at least 'reasonably secure'.
I don't put much stock in the various "WhatsApp backdoor" allegations, but I'm curious as to why they'd have a financial incentive to provide privacy.
Most of their userbase likely still doesn't care about security and they do belong to Facebook - so if anything, they'd have a financial incentive not to use effective crypto.
If hackers could get access to WhatsApp and dump all messages to Wikileaks it would make the company look very bad and a significant number of users would switch to something else. If security is not that important to users, why pretend to add end-to-end encryption at all?
Teen Vogue just suggested people should use WhatsApp instead of Snapchat because it does end-to-end crypto. I don't think it's true any more that the general public doesn't care about security, if it was ever true.
Facebook cares mostly about penetration for Whatsapp, to ensure that no other messaging app takes over.
Free product advertising worth $N targeted to the more influential product adopters, who will then amplify said advertisements.
That's my guess, anyway.
If you want a completely free software smartphone experience, it is simply not possible at the moment. Even Replicant still hasn't cracked the baseband puzzle (and is still struggling with the firmware for a couple of phones).
So no, Android is definitely proprietary -- even if some parts are not.
If you want a completely 'free' (as in GPL) cell phone experience, you can set up an OpenBTS transmitter and transmit in the 900 MHz range, which is commons property. To stay legal in the US, your antenna has to put out less than a watt, but the setup allows you to even use off-the-shelf phones and trunk into normal phone lines via standard POTS software. Your device would have to be something à la http://alumni.media.mit.edu/~mellis/cellphone/ (just a janky setup, but a proof-of-concept -- you can patch together components from DigiKey pretty easily these days; if you want free silicon, I think the closest you're going to get is https://en.wikipedia.org/wiki/OsmocomBB or maybe some soft cores, but if you're actually going to take that soft core to tape-out, you're probably going to be running 6 figures just for masks...)
Initially looking to reuse old phones with the Calypso chipsets, the project is now working on producing their own. Design files are completed; funding for the dev boards is about 66% complete.
Mailing list is fairly active too.
While (AFAIK) there isn't a regulation stopping someone from selling radios that have completely free software basebands, you can bet that the manufacturer will be prosecuted if users suddenly start outputting radio waves that don't follow regulations (suing users is harder than suing a manufacturer). As a result, there's a disincentive for manufacturers to ever sell free software radios (because by definition they would have to allow modification).
Even if the manufacturer provides the code, it can preinstall additional closed-source programs -- for example, the Facebook app or some closed-source "telemetry" app. My Chinese noname phone contained an app that was trying to send my phone number and other identifiers to China as part of a "sales report" (exact URL was http://bigdata.adfuture.cn/reboot/salesCountInterface.do ). And one can only guess how much data Facebook collects.
What the end user gets is a phone with a binary blob inside.
I think there should be a strict requirement banning collecting any data without consent from the user. No "anonymous" "analytics" and telemetry, no crash reporting, no advertising IDs, no checkboxes checked by default. Only a legal solution can address the problem of mass surveillance by software companies. Every byte your device sends to the network can end up in the hands of hackers from developing countries or the NSA.
- Signal code: https://github.com/whispersystems/
Telegram has had known flaws, which have been discussed in part here:
- Telegram protocol defeated. Authors are going to modify crypto-algorithm https://news.ycombinator.com/item?id=6948742
- A Crypto Challenge For The Telegram Developers https://news.ycombinator.com/item?id=6936539
- Telegram (initial discussion) https://news.ycombinator.com/item?id=6913456
In fact, Facebook Messenger's implementation of Signal has very questionable security right out of the box, because if one party "reports" an encrypted conversation, the whole thing is decrypted and sent to Facebook support staff.
The Signal Protocol provides end-to-end encryption so you don't have to trust the intermediate parties/servers involved in relaying the message (e.g. you don't have to trust Facebook's servers), and to protect against the other person reporting and revealing your conversation to someone else, the Signal Protocol provides message repudiation, which effectively gives the sender plausible deniability because the receiving party cannot prove to a third party that a message came from you.
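A sketch of why shared-key authentication gives that deniability. This shows only the core idea; the actual Signal protocol derives per-message keys via the double ratchet:

```python
import hashlib
import hmac
import secrets

# Sender and receiver share the same MAC key, so the receiver can
# verify a message came from the sender -- but cannot prove it to a
# third party, because the receiver could have forged the same tag.
shared_key = secrets.token_bytes(32)   # known to BOTH parties

msg = b"meet me at the usual place"
tag = hmac.new(shared_key, msg, hashlib.sha256).digest()   # sender

# Receiver verifies authenticity (constant-time comparison):
assert hmac.compare_digest(
    tag, hmac.new(shared_key, msg, hashlib.sha256).digest())

# But the receiver holds the same key, so this "proof" is worthless to
# anyone else -- the receiver can mint a valid tag for any message:
forged = b"I never said that"
forged_tag = hmac.new(shared_key, forged, hashlib.sha256).digest()
# A forged tag is indistinguishable from a genuine sender tag, which
# is exactly what gives the sender plausible deniability.
```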
Let's not make the perfect the enemy of the good.
There's few of the big corps I trust as little as Google.
If you were really worried about what a particular binary would do, trusting that the binary matched the source and studying runtime behavior would both be a waste of time compared to fully analyzing the binary in question.
If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.
It would be great to find such a countermeasure, and be able to trigger it reliably, or assert the behavior on a permanent basis. Considering that particular weakness of such countermeasures though, wouldn't the safest [for the attacker] default countermeasure likely be to simply crash the device?
A user that knows about malicious code (which you would have to in order to trigger it to go silent) in a binary just shouldn't use the binary at all though.
The broader point is more important: compiled software isn't a black box, treating it like a black box is not the only or best way to analyze it.
But this sort of pessimism isn't really useful. The attitude that "anything is insecure if there is any closed source software anywhere in the stack" means that it's impossible to advance security, because it's almost impossible to avoid binaries (i.e. firmware).
Apple, for example, has done a few things that are laudable in this field -- e.g. risking a public court fight with the FBI to keep the iPhone secure. If we say that such actions are meaningless because they ship binaries, they have no incentive to do such things. Just rolling over and giving the US gov big-pipe-access to everything like Yahoo did becomes the better business proposition.
Similarly, what do you answer when a friend who works at the EPA asks you how to securely contact a journalist? If it starts with ordering a custom open-firmware mainboard from somewhere in China, your advice will be ignored.
You can't just insert yourself in the message stream since the client and server use pinned, mutual certificate authentication. So you have to start from first-principles and step through decompiled code.
I'm not sure what you mean here. It's easy to identify where the key comes from and whether the ciphertext is what you'd expect it to be in that case.
> they aren't sending other data over unannounced side-channels.
It's not straightforward to determine that even if you do have the source - you could imagine an implementation that deliberately leaks information through timing details without that being obvious from the code. At some point you have to trust that authors aren't doing something awful.
> So you have to start from first-principles and step through decompiled code.
Well no, because the first thing you can do there is just disable certificate pinning. But really, the difficulty of stepping through decompiled code is vastly overrated.
As for using the correct key, dismantle the signal message envelope until you get your blob of encrypted message. Then see if the same blob appears on the target device. Multiple keys? I imagine either correlating message size and network traffic (encrypting stuff twice could well show up), or going at it with a debugger.
Which is really the answer to all of these questions instead of any network shenanigans. You root your phone and attach a debugger, then step through what signal is doing.
Not a security researcher, never reverse engineered anything for security reasons in my life.
Facebook distributes your 2MB pic to many people, does it technically require more than 2MB of your upload bandwidth? No. You only need to upload it once to their server.
I agree that it's weakened but I think it's still meaningful. If someone is choosing between WhatsApp and Allo the former is more likely to be properly encrypted.
After all, even though we can't verify it, WhatsApp has strong incentives to implement it properly, and OWS has strong incentives to only endorse WhatsApp's use of their protocol if they are convinced that it's done properly.
I really think "completely meaningless" is dangerously misleading hyperbole.
Open source is required not just for apps, but also for:
- operating systems
Then we can start to talk about privacy.
Unless you're a security expert with plenty of time to comb through someone else's code, you're still relying on others to be truthful and competent. Even then you're relying on layers upon layers of software and hardware. Far too much for an individual to verify.
A peer reviewed distributed trust net is much more trustworthy.
As much as the law might try to pretend, companies are not people. Uh, except in the sense that they are comprised of people. So, they literally are people, people combined with capital.
Keep in mind that reviewers of free software are also often employees and may have some agenda beyond pure altruism, even if the software isn't copyright of the employer. Open source is big business these days.
Well, at least it could be considered tainted by self-interest if you're paranoid. My point was that commercial interest does not prevent quality/security.
There's an entire industry dedicated to reverse engineering software and studying its security properties. We call it the security industry. (Not every gig is white-box!)
Note this is an AP article.
Ashcroft does X, it's evil and given intense scrutiny. Holder does X, it's mostly given a pass by the msm.
Bush does X, it's evil. Obama does X (eg regime change in Syria; what, no million person protests?), it's mostly given a pass by the msm.
That's how the media has functioned for decades. They'll get extremely loud during Trump's Presidency about domestic spying abuses, after eight years of giving the Obama Admin a sizable pass. The same will hold true about the egregious abuses directed at the press under Obama, when Trump does the same thing it'll be the end of the world.
I'm curious, because I haven't heard of these before. Examples?
Hopefully one of the messages that emerges is that encryption is not scary and another good one would be that privacy is not deceitful.
The ability for the state to coerce you isn't going away though.
Nope, that was about setting precedent using a case that is very hard to argue against morally, so that they can erode privacy and protections on a wider scale.
Ironically enough, he's promoting them while saying "Americans don't have absolute privacy."
Yes, we know. That's why we're trying to use encryption more...But thanks for reminding us, James.