What the CIA WikiLeaks Dump Tells Us: Encryption Works (nytimes.com)
663 points by kungfudoi on March 11, 2017 | 259 comments



I don't see any mention of quantum computers in here, so I thought I'd mention:

the NSA themselves are concerned that quantum computing will be a great threat to encryption in the near future.

Keep in mind that the NSA and god knows who else are storing encrypted communications to break them later.

Quantum computing will defeat RSA, DH, ECC, and other asymmetric crypto, but it will only weaken symmetric crypto (e.g. AES) by a factor of two in effective key size.

So according to my Internet research: if your symmetric crypto is twice as secure (in key size) as it needs to be, it is future-proof.
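
To spell out the arithmetic behind that rule of thumb (Grover's algorithm searches 2^k keys in roughly 2^(k/2) steps, so it halves the effective key length), a trivial sketch:

    # Grover's algorithm finds a k-bit key in ~2^(k/2) quantum steps
    # instead of ~2^k classical steps, halving the effective key length.
    def effective_quantum_bits(key_bits: int) -> int:
        return key_bits // 2

    print(effective_quantum_bits(128))  # 64  -> borderline
    print(effective_quantum_bits(256))  # 128 -> still comfortably infeasible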

Also (and please correct me if I'm wrong), I believe the triple-encryption cascade Serpent(Twofish(AES)) available in VeraCrypt (a TrueCrypt fork) even protects against weaknesses that may be discovered in any one of these ciphers: an attacker would have to defeat all three.


Symmetric encryption not being broken doesn't really help you if the encryption key has been exchanged using a (presumably quantum-breakable) form of asymmetric encryption. Most encryption in the wild works this way.
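
To illustrate: a minimal sketch of the usual hybrid pattern (here RSA-OAEP wrapping an AES-GCM session key, using the Python "cryptography" package; the details are my choice for illustration, not a claim about any particular protocol). Break the asymmetric wrap, and the symmetric strength no longer matters:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # fresh symmetric session key, wrapped under the recipient's public key
    session_key = AESGCM.generate_key(bit_length=256)
    wrapped_key = recipient.public_key().encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # bulk data is encrypted symmetrically
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

    # a quantum attacker who breaks RSA recovers session_key from wrapped_key,
    # at which point AES's strength is irrelevant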


True. Note that some "post-quantum" key exchange schemes already exist (based on lattice cryptography, for which there are no known poly-time quantum attacks), e.g. https://eprint.iacr.org/2015/1092 . But I haven't heard of it being used anywhere.


Google has been experimenting with it in Chrome https://security.googleblog.com/2016/07/experimenting-with-p... though it is just an experiment in Canary.


The sad truth is, until we've spent a lot more time analysing and attacking those algorithms, they aren't as secure as what we've got.


There's an extremely simple method, which is to onion multiple algorithms. If you want to add the promised security features of a new algorithm that hasn't been battle-tested, it's well worth the effort.


Your proposed use seems intended to add randomness. However, Shor's algorithm (for example) doesn't attack the randomness of a message but factors keys.

How would you propose to securely exchange the keys for the several algorithms? If there is enough meta or implied information for you to know what algorithms are used, an attacker would also know what algorithms are in use.

The obstacle isn't lack of randomness or an excess of decipherability but not using a quantum resistant algorithm.


I know next to nothing about this subject, but this suggestion sounds like the kinds of things that amateurs suggest but experts scoff at. So my question to any actual experts: is this legit? Note that breaking A+B is not necessarily as hard as breaking both A and B, but it certainly seems likely to be true most of the time.


I'm far from an expert, but I believe the big worry with layering systems like this is you double the amount of key management and implementation attack surface (which is often the weakest part of a cryptosystem), and traditionally it's not for much benefit.

Ignoring quantum computing, doubling up algorithms doesn't really protect you against unknown unknowns all that much as there is a good chance that a massive flaw in one of them could apply to the others (since they are all somewhat based on the same "problem").

When you start layering lattice based crypto with traditional crypto the payoff might change and make it more viable.


I've never heard of such a technique; what does onioning do?


I don't think there is an exact term (and if there is, I'm all ears), so the closest one I found pertains to onion routing[1], except instead of each layer being a different key, each layer is a different algorithm and a different key. I.e. if you wanted to use algorithm X and algorithm Y, you could encrypt your plaintext with ciphertext_x = X(x_key, plaintext), and then ciphertext = Y(y_key, ciphertext_x). If X is terrible, Y still protects your data. If Y is terrible, X still protects your data. Note that the order in which X and Y are applied doesn't matter.

[1] https://en.wikipedia.org/wiki/Onion_routing
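
For what it's worth, a minimal sketch of that layering in Python (AES-GCM and ChaCha20-Poly1305 from the "cryptography" package standing in for X and Y, each with an independent key):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    x_key = AESGCM.generate_key(bit_length=256)
    y_key = ChaCha20Poly1305.generate_key()

    def layered_encrypt(plaintext: bytes) -> bytes:
        nonce_x = os.urandom(12)
        inner = nonce_x + AESGCM(x_key).encrypt(nonce_x, plaintext, None)       # X
        nonce_y = os.urandom(12)
        return nonce_y + ChaCha20Poly1305(y_key).encrypt(nonce_y, inner, None)  # Y

    def layered_decrypt(blob: bytes) -> bytes:
        inner = ChaCha20Poly1305(y_key).decrypt(blob[:12], blob[12:], None)
        return AESGCM(x_key).decrypt(inner[:12], inner[12:], None)

    assert layered_decrypt(layered_encrypt(b"hello")) == b"hello"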


Algorithm composition? (Similar to function composition[1], since algorithms are just functions that take an input and return an output.)

[1] https://en.wikipedia.org/wiki/Function_composition


Use of a new encryption algorithm on the ciphertext produced by the old algorithm.


Sounds good for symmetric crypto, but not for public key exchanges, which is what quantum computing attacks are so far all about.

There could be double-tree PKI, where nodes in the tree are represented by two key pairs in different kinds of key spaces, with the different signatures terminating in two ephemeral key pairs, which are then used for key derivation, and the subsequent secret perhaps concatenated and boosted into a higher key space? The complexity would be mind boggling. And, weaknesses in one half of the tree could translate into a security reduction on the final symmetric key, so it would have to be way over powered to be effective.

Not a simple problem to solve.


Why wouldn't that work? Exchange all n keys with their respective methods in parallel?


Lattice-based schemes are roughly as old as ECC. They also enjoy something called worst-case to average-case reductions for certain parameter ranges, which gives us confidence in their strength.


Google's Canary test also does regular ECDH alongside, requiring it to be broken as well.


TLS implements PFS (perfect forward secrecy) by generating an ephemeral asymmetric key pair for the session, usually through DH. In other words, breaking the session key pair will only reveal the plaintext of the current session for the current client. No previous communication is broken, so the NSA would have to traverse the history for each client.
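
Roughly what an ephemeral handshake looks like, sketched with X25519 from the Python "cryptography" package (the key pairs exist only for the session, so recording the traffic doesn't help later, assuming DH itself isn't broken):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # fresh (ephemeral) key pairs, generated per session and discarded after
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()

    # both sides compute the same shared secret from the other's public key
    shared = client_priv.exchange(server_priv.public_key())
    assert shared == server_priv.exchange(client_priv.public_key())

    # the actual session key is derived from the shared secret
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"session").derive(shared)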


And for a target of interest, given quantum computing, it would not really provide any real protections. I could see how recalculating the key would be well worth it, if you really want the data.


Agreed, it won't help much in that case.


This case is a targeted breach of confidentiality not unlike a phone wire tap, still not allowing bulk surveillance. Not bad altogether.


That's understandable, since the point of asymmetric encryption is communication. But people can use symmetric encryption when it's about their own data security (i.e. they encrypt it, they decrypt it).

If you're using encryption to keep your data safe (e.g. local files, full disk encryption), then you don't actually need asymmetric encryption.


Agreed. However, the kind of mass surveillance people are worried about isn't really concerned with personal data at rest. That data already requires targeted surveillance to get at.

Additionally... unless you generated all the bytes you are encrypting yourself without transmitting them over a network at any time... e.g. video/pictures you took yourself and then stored to your own encrypted disk, your data could be compromised by quantum-breakable encrypted communication protocols at the time of transmission. I would wager that most data worth surveilling goes over the network at some point using TLS.


There is still tremendous value in symmetric encryption for off-site storage, especially cloud-based storage. This allows storage to be off-loaded wherever without needing the storage location to be trusted. All that needs to be stored locally is the key.

Done right, this means quantum won't get you access to dropbox-like stuff. (Dropbox itself probably doesn't work like that, it'll store the symmetric keys server side somehow.)


Dropbox de-duplicates your files with other users who have the same ones, and is capable of serving them all to you through a web site.

There's no encryption there at all, except TLS in transit.


Found that hard to believe, so I googled it. Not positive this is a trustworthy source, but it sounds like Dropbox does use encryption for files at rest and in motion: https://www.virtru.com/blog/dropbox-encryption/


Dropbox is known to have integration with PhotoDNA, a program that matches file signatures against a database of known illicit images. This would be impossible if the encryption used were irreversible, and as far as I know Dropbox never claimed otherwise.


Couldn't Dropbox hash the images on the client, and upload the encrypted images as well as the hash? No need to upload an unencrypted file to do matching.


Depends on what kind of hashing they do: a scheme like yours using a cryptographic hash would be defeated by just randomly changing a single bit in the image (e.g. appending some garbage bytes at the end) or re-encoding the JPEG. Of course, it could still catch non-techy people.
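
A quick illustration of why an exact cryptographic hash is brittle for this (the filename is hypothetical); perceptual schemes like PhotoDNA hash image features instead, precisely to survive such changes:

    import hashlib

    img = open("photo.jpg", "rb").read()            # hypothetical image file
    h1 = hashlib.sha256(img).hexdigest()
    h2 = hashlib.sha256(img + b"\x00").hexdigest()  # one appended garbage byte

    print(h1 == h2)  # False: exact hashes only match byte-identical files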


I'm not claiming it is end-to-end, nor did I expect it to be, but perhaps some people would assume that. I thought you were claiming they don't use encryption at all.


It's effectively nothing when Dropbox themselves have access to all your files whenever they want.


And whoever hacks dropbox, yours or all of them.


They do, but Dropbox controls the key and their services can decrypt customer data, so the value of the encryption is limited at best.


You shouldn't be trusting your commodity cloud storage provider for that stuff anyway. Use your own client side encryption program to encrypt whatever you need to. Arq, borg backup, zip files with a password, sparse encrypted dmg directories, etc.


Even for your own data, asymmetric encryption can be useful.

E.g. suppose you want to encrypt your logs: with asymmetric encryption you can store the public key on all your servers, encrypt your logs as you generate them, and forget the plaintext as soon as possible. You keep your private key hidden away on a secure machine, and only use it when you actually want to look at your logs (which doesn't happen all that often).
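
A minimal sketch of that pattern using libsodium sealed boxes via PyNaCl (the library choice is mine, just for illustration):

    from nacl.public import PrivateKey, SealedBox

    # done once, offline; the private key never touches the log servers
    recipient_key = PrivateKey.generate()
    public_key = recipient_key.public_key

    # on each server: encrypt log lines with the public key, drop the plaintext
    ciphertext = SealedBox(public_key).encrypt(b"2017-03-11 12:00 login ok")

    # later, on the secure machine that holds the private key
    plaintext = SealedBox(recipient_key).decrypt(ciphertext)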


One thing you could do is to exchange the keys in a nonstandard way. Phone or whatever. Obviously still breakable but at least not by a standard dragnet.


I wouldn't use the phone for exchanging plaintext keys. You're forgetting about ECHELON, wiretaps, and the dragnets where LEA are ignoring (or purposefully misinterpreting) the law.

It's really cheap to store ALL phone calls these days, and just "playback" when you get the legal search warrant.


It's only about avoiding automatic dragnet, so making a person listen to the conversation and figure out the key counts as success.


Automatic voice transcription has been a technology for more than two decades.


And it doesn't count as surveillance until someone looks at it...


Exchange many keys.


If you're exchanging keys in person, I suggest looking into the one-time pad (OTP), currently the only known mathematically unbreakable form of encryption, and so simple it has been used since at least World War I.
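
For the curious, the whole scheme fits in a few lines (a sketch; "secrets" stands in for a true hardware RNG, and each pad segment must be used exactly once and then destroyed):

    import secrets

    def otp(message: bytes, pad: bytes) -> bytes:
        assert len(pad) >= len(message)   # pad at least as long as the message
        return bytes(m ^ p for m, p in zip(message, pad))

    pad = secrets.token_bytes(64)         # exchanged in person beforehand
    ciphertext = otp(b"meet at noon", pad)
    assert otp(ciphertext, pad) == b"meet at noon"  # XOR is its own inverse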


Mathematically unbreakable if your pad is truly random


Sure, but if your RNG is compromised, there's no point talking about encryption in the first place.


That's not really the case. OTP requires a large amount of perfectly unbiased randomness, implicitly from a hardware randomness source (otherwise what you have is more accurately a stream cipher).

Contrast with deriving a key using cryptographic primitives, which can accept low-quality randomness (as long as there is sufficient entropy), that can be easily and transparently collected.
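
For instance, a sketch of that derivation step with HKDF from the Python "cryptography" package (the entropy sources here are stand-ins):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # stand-ins for imperfect, biased samples (interrupt timings, sensor noise)
    timing_samples = os.urandom(64)
    sensor_noise = os.urandom(64)

    # the KDF compresses the pooled input into a uniform-looking 256-bit key,
    # provided the pool really does contain enough entropy in total
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"derived key").derive(timing_samples + sensor_noise)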


Yes, in theory OTP requires perfectly uniform uncorrelated random input. However, in practice, you can use a randomness extractor… at which point, yes, we can't prove that someone with unbounded computational power wouldn't be able to crack it. But all known computational power is bounded.
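
The textbook example is von Neumann's extractor for debiasing independent coin flips; a sketch:

    def von_neumann_extract(bits):
        # pairs: 01 -> 0, 10 -> 1, 00 and 11 are discarded; output is unbiased
        # as long as the input bits are independent with a constant bias
        out = []
        for a, b in zip(bits[::2], bits[1::2]):
            if a != b:
                out.append(a)
        return out

    print(von_neumann_extract([1, 0, 1, 1, 0, 0, 0, 1]))  # [1, 0]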


Right. So now you have to figure out how to get a secure RNG into people's hands, so they can create pads. How does that work? Genuinely curious whether there's a reasonable solution.


Devices that produce a trickle of truly random numbers can be produced for a few bucks. They're included in modern CPUs, for example.

But that was never the problem. The problem is, now what? To use this OTP you need to securely deliver pads to everybody you'll ever send a message to. So, OTP is practical for a handful of secret agents who'll receive messages of a few dozen words per year from a single controller, and useless for most of us in the real world.

_This_ problem is why we have public key cryptography.


Snarkily: a one-time pad reduces the problem of exchanging secret messages to the problem of exchanging secret keys of the same length.

The only thing that OTP buys you is that you can exchange the pads at your convenience any time before.


You only need to meet once.

With a few GB (a couple of bucks in a supermarket will buy an 8 GB USB stick) you can communicate in text about 16 thousand books' worth of words.

In other words, to deplete the pad you would have to write sixteen thousand books.

I think that's pretty convenient, as far as literally unbreakable encryption goes!


Oh, definitely. The convenience of being able to secure your communication in advance is great.

It's just that key-sharing is basically the most complicated and vulnerable part of modern cryptography.


I saw that strongSwan already supports two post-quantum key exchange algorithms (NTRU and NewHope) for IPsec IKEv2. Good.

https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2Ci...


If IPsec is a complicated, committee-designed technology that the NSA interfered with, and moreover one not recommended by information security researchers, isn't a detail such as support for post-quantum algorithms irrelevant?

http://www.mail-archive.com/cryptography@metzdowd.com/msg123...

https://www.schneier.com/academic/paperfiles/paper-ipsec.pdf


From Schneier's IPsec paper:

Conclusions

We are of two minds about IPsec. On the one hand, IPsec is far better than any IP security protocol that has come before: Microsoft PPTP, L2TP, etc.

On the other hand, we do not believe that it will ever result in a secure operational system.

It is far too complex, and the complexity has led to a large number of ambiguities, contradictions, inefficiencies, and weaknesses.

It has been very hard work to perform any kind of security analysis; we do not feel that we fully understand the system, let alone have fully analyzed it.


Yes. https://en.m.wikipedia.org/wiki/Post-quantum_cryptography

Doubled key size does not mean twice as hard to break. Algorithm weaknesses are a thing.

This Wikipedia page is pretty accurate, though. It describes the world we think will exist after legitimate quantum computing.


There is some evidence that quantum computers will suffer signal-to-noise problems to the point where you can't use them to break real-world crypto. It's something of an open question at this point.


Could you elaborate? I'm not sure I follow. It's been a while since my CS Theory classes.


Basically, you can do 64-bit calculations on a 16-bit traditional computer; it just takes more steps and some RAM. However, to do 64-bit quantum calculations you need a 64-qubit quantum computer. Now, we may be able to build a 64-qubit QC, but even that is useless for real-world encryption. Further, while it does not take more time to run a 4096-qubit QC than a 64-qubit one, it does become exponentially harder to build.

Sort of like trying to measure a building to the nearest inch is harder than nearest foot, and every added digit of accuracy is even harder.


Good analogy; and quite accurate.

It's not just the number of bits that affects signal to noise ratio, though... It's also the complexity of the calculation; more gate operations and longer storage in memory leads to more quantum decoherence. And AFAIK, each type of calculation must be implemented in hardware, because reversible quantum gates must be used. This means you would need a specialized chip for each algorithm. I haven't confirmed this understanding with someone that specializes in QC, but I don't see a way around it, other than to combine many algos onto a single chip, sharing as many gates as possible, but degrading S/N further.


Re: need custom hardware for each algorithm

Something tells me, for an intelligence agency, building a quantum computer only for the purpose of breaking cryptographic keys would be a worthwhile investment.

An investment they have already made:

The effort to build “a cryptologically useful quantum computer” -- a machine exponentially faster than classical computers -- is part of a $79.7 million research program called “Penetrating Hard Targets.”

Article:

https://www.washingtonpost.com/world/national-security/nsa-s...

Leaked document:

http://apps.washingtonpost.com/g/page/world/a-description-of...


Yes, that's one of the big questions in QC. If people manage to build good error correction for QC some day, scaling will be less of an issue.

Scott Aaronson's blog has some good background information for the layman.


Current quantum computers lack the number of qubits required to implement practical quantum integer factorization (e.g. Shor's algorithm) at the scale required to break encryption.

For example, the NIST standard for the controversial dual elliptic curve deterministic random bit generator involves two numbers, p and q, which are about 100 digits long. To date, the largest integer factored on a quantum computer is reported to be around 200,000...
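
To make the gap concrete, here is a toy version of the classical reduction at the heart of Shor's algorithm, factoring N = 15; the only step a quantum computer accelerates is finding the period r:

    from math import gcd

    N, a = 15, 7
    # period finding: brute force here; the quantum speedup replaces this loop
    r = next(r for r in range(1, N) if pow(a, r, N) == 1)   # r = 4

    # classical post-processing: factors come from gcd(a^(r/2) +/- 1, N)
    p = gcd(pow(a, r // 2) - 1, N)   # 3
    q = gcd(pow(a, r // 2) + 1, N)   # 5
    assert p * q == N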


200,000 what? Digits? Bits?


The largest number was 200,000, i.e. 6 digits, or about 21 bits.


20 bits already covers 2^20 = 1,048,576 > 1,000,000, so 200,000 fits in fewer than 21 bits (about 18).


> So according to my Internet research: if your symmetric crypto is twice as secure (key size) as needs be, it is future proof.

New attacks are discovered from time to time. It is unlikely someone will try to brute-force your encrypted data; it is much more likely that some vulnerability will be discovered in AES or in the way you generate keys for it.


Considering DH is often used along with AES, I'd assume many captured encrypted communication streams will likely be broken when this becomes a reality. Anyone care to shed some light on this? As far as ECDH goes, (afaik) most ECC implementations are vulnerable to quantum computing attacks. Are there any decent post-quantum key exchange algorithms / protocols?

EDIT: nevermind, i've now read through the thread more and have found answers. :)


It's more of a problem if you have key exchange (like RSA); [1] says that AES is supposedly secure against quantum computers with 'sufficiently sized keys'. For key exchange they are trying to adapt elliptic curves.

[1] https://en.wikipedia.org/wiki/Post-quantum_cryptography .


Elliptic curves are broken in a quantum setting.


There is an approach (called Supersingular isogeny Diffie-Hellman or SIDH) using certain types of mappings between elliptic curves being researched that _may_ be secure in a quantum setting. (edit to add:) I think this may be what the grandparent poster is referring to by "trying to adapt elliptic curves".


> the NSA themselves are concerned that quantum computing will be a great threat to encryption in the near future.

Pardon my ignorance, but isn't this an inevitability, not just a possibility?


No. For two reasons, a general reason and a specific reason.

Firstly, quantum computing at scale _might_ be possible in our universe, but it might not. One of the spookiest things that might still be true would be a thing called finite non-local hidden state. In this scenario the whole universe has some sort of hidden state, a bit like the seed value of a Minecraft world. Quantum computing in a universe with finite non-local hidden state just weirdly "doesn't work" when you scale it up, because it's basically using the universe's hidden state as magic working storage, and that runs out. Simulation nuts would tell you finite non-local hidden state makes it pretty clear we're in a simulation, but then they would say that...

Secondly it says "near future". We have every reason to believe there will be serious obstacles to scaling up quantum computing. Somebody else gave the example of Babbage's engine. Just because we can conceive of fusion power generation doesn't mean it's going to happen next week.

Or at all. At the start of the 20th century some mathematicians thought all of mathematics could be formulated as a handful of assumptions plus a huge chain of inference. It was just a matter of writing it out precisely. Surely an inevitability. This project was begun by mathematicians and philosophers, and it was going pretty well (proving that 1+1=2, for example) up until this chap Gödel came along and straight up proved it impossible with his Incompleteness Theorem: if you can do 1+1=2, then Gödel shows how you can write out effectively the equivalent of "This statement is false" and blow up your whole system.


Our current understanding of Quantum Computing implies Quantum Computing is possible.

So the nice thing about QC research is that we either get shiny new computers to play with, or we learn something new about Quantum Mechanics.


Yeah, my response neglected "near future": I don't believe our current encryption methods are "near" broken. To that end, I was obviously wrong.

But I appreciate you reading between the lines and following up with a great response. I have a lot to Google. :)


So... how come they are selling them?


It's a possibility that it'll be a great threat in the near future. Remember that Babbage's Analytical Engine was designed on paper in 1837, and nothing similar was constructed for over a century.

If it's not the near future, we might move to post-quantum algorithms (or quantum cryptography, a field almost entirely unrelated to classical cryptography which involves carefully moving entangled pairs of particles around) before quantum computers become any good at cryptanalyzing realistically-sized encryption algorithms.


I would argue no... there's nothing inevitable about technology. Doubly so for quantum computing. Others will likely disagree.


Maybe it's a lesson that security itself is a flawed approach and we need a fundamentally different system


And the NSA apparently doesn't use asymmetric crypto for anything, for this reason and perhaps others.


Their most secure devices use FIREFLY, a variant of the Photuris protocol.


> Quantum computing will defeat RSA, DH, ECC, asymmetric crypto, but it will only weaken symmetric crypto (eg. AES) by a factor of two.

That's quite a bummer. I finally managed to convince someone to use gpg and now I learn that this just makes sure your emails will be read/analyzed by anyone who has an interest in it in the future.


"anyone who has an interest in it in the future" is a very vague term. I'm confident lots of things will happen in crypto centuries after I die, but that's not in my threat model.


Neither do I care about what happens in a couple of centuries. My main worry is that in a somewhat foreseeable future data of electronic communications can be used to construct a model of how a society of interest works to a degree useful enough to attack/control it.

So it isn't necessarily about what I am communicating or to whom I talk but the fact that it seems impossible for society as a whole the hide the details of its functioning.

I viewed pgp/gpg as a tool that could counter this. Convincing people to use it would've been an uphill battle but there would have been a slim chance of succeeding to a useful degree.


You seem to be so sure quantum computing will actually be done.


It's already been done, just not on very many qubits. Scaling up to more qubits is a tractable engineering challenge, and there's plenty of money on the table, so we have every reason to expect that practical quantum computers will emerge in the near future.


If, and that is a big if, quantum computing were here, you wouldn't be reading about qubits.

All the headlines would be: "Heisenberg uncertainty principle violated; science, technology and life as you know it will change".

This has nothing to do with 'engineering challenges'. You are dealing with cold hard physics limiting our understanding of the Standard Model, simply because the barrier is there.


I find it weird that hash-based signatures are not commonly used, since most hash-based signature schemes (if not all) are provably safe as long as the underlying hash function is safe; they are also quantum-safe and very trivial to implement (see the sketch at the end of this comment).

It is also extremely annoying that programs like gpg do not support generating large RSA key sizes such as 15360 bits, which would require many more qubits to break (and since RSA-4096 and RSA-2048 are much more commonly used, adversaries might never build a quantum computer capable of breaking the larger keys, or might build far fewer of them).
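
On the first point, a sketch of how simple hash-based signatures can be: a Lamport one-time signature (each key pair may sign exactly one message; real schemes like Merkle trees or XMSS build many-time keys on top of this idea):

    import hashlib, secrets

    H = lambda data: hashlib.sha256(data).digest()

    # key generation: 256 pairs of random secrets; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]

    def bits(msg):
        d = H(msg)
        return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(msg):    # reveal one secret per bit of the message digest
        return [sk[i][b] for i, b in enumerate(bits(msg))]

    def verify(msg, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

    sig = sign(b"hello")
    assert verify(b"hello", sig) and not verify(b"hullo", sig)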


A signature is a trap-door; encryption is reversible. They are two entirely different things, so you can't use a signature hash for encryption.


You are correct, but we're talking about asymmetric key crypto in general, not just encryption. Presumably the GP is referring to quantum-resistant hash-based signature schemes such as https://en.wikipedia.org/wiki/Merkle_signature_scheme


I am not claiming that you can though.


Then you probably meant to start a new thread. The one you are replying to is about encryption exclusively.


Nope, the poster I replied to talked about asymmetric crypto in general, which includes signatures.


I think that was in reference to public key cryptography, not in reference to hashing, but I'll leave it to the OP to clarify, that's at least how I interpreted it.


The article mentions WhatsApp multiple times as a service that successfully made the transition to end-to-end encryption, but it always seemed to me that this claim is rather meaningless when we don't have the possibility of auditing their source code.


It seems that most people are completely in the dark when it comes to security, including myself, but there are some principles that should be unwavering that regularly get ignored again with every new iteration of "secure" software:

* If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure

* If the source code is not available for review, the software is not secure

* If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure

* Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.

* On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.

* If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.

I could go on, and I'm FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the "secure" option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:

* The dev team seems like honest and capable people

* Someone I trust or some famous person said this software is secure

* They have a home page full of buzzwords and crypto jargon

* They threw some code up on github

* I heard they are secure in half a dozen tweets and media channels


I think you are being too strict in your definition of 'secure'. 99.99% of devices run Android, iOS, or Windows, which are closed source and therefore not 'secure'.

To me, security is not a binary property but rather a sliding scale. WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth. No hacker has demonstrated that WhatsApp are lying and the Wikileaks dump suggests the CIA has been unable to intercept messages in transit. Given this information I would rate WhatsApp at least 'reasonably secure'.


> WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth.

I'm not giving much credence to the various "whatsapp backdoor" allegations, but I'm curious as to why they'd have a financial incentive to provide privacy.

Most of their userbase likely still doesn't care about security and they do belong to Facebook - so if anything, they'd have a financial incentive not to use effective crypto.


I guess I would point to the Yahoo security breach as an example of how poor security can have an adverse financial effect on a company's valuation and its CEO's remuneration. The Sony hack also comes to mind as something which damaged its reputation and had an indirect effect on its financial position.

If hackers could get access to WhatsApp and dump all messages to Wikileaks it would make the company look very bad and a significant number of users would switch to something else. If security is not that important to users, why pretend to add end-to-end encryption at all?


And even if Facebook / WhatsApp's customers don't care about security, their hired geeks do; and being seen as caring about having top-notch security might help with attracting and retaining talent.


> Most of their userbase likely still doesn't care about security

Teen Vogue just suggested people should use WhatsApp instead of Snapchat because it does end-to-end crypto. I don't think it's true any more that the general public doesn't care about security, if it was ever true.

http://www.teenvogue.com/story/how-to-keep-messages-secure


Based on the behavior after these leaks, there's moderate evidence to suggest that users care somewhat about privacy if it can be accomplished without any sacrifices in existing ease-of-use.

Facebook cares mostly about penetration for Whatsapp, to ensure that no other messaging app takes over.


HN, CNET, Motherboard, Slashdot, Ars, TC, NYT tech etc.

Free product advertising worth $N targeted to the more influential product adopters, who will then amplify said advertisements.

That's my guess, anyway.


That's the point. Android and iOS are not secure. Now the question is always: secure against what? Against an average attacker, they are secure enough. Against the CIA, they are not. The CIA can just threaten several people inside Google or Apple, get a back door, and you can't check whether they did.


https://source.android.com/ The source code for Android is open under the Apache 2.0 license. Of course, iOS and Windows are closed source.


You're ignoring that most drivers for android phones are proprietary, the baseband is entirely proprietary (required by law), the Google services are proprietary (which many apps use) and most apps are proprietary (including Google's replacements for the AOSP apps).

If you want a completely free software smartphone experience, it is simply not possible at the moment. Even Replicant[1] still hasn't cracked the baseband puzzle (and is still struggling with the firmware for a couple of phones).

So no, Android is definitely proprietary -- even if some parts are not.

[1]: http://www.replicant.us


Basebands fall under exactly which entity's jurisdiction such that they can regulate a baseband to be 'entirely proprietary'? I mean, BB + superhet mixer => IF => carrier wave envelope containing your data. How do you even regulate a concept of physics? If you're paying the proper fees as a subscriber to $provider_foo you could even design your own receiver off the public standards documents. (It used to be a popular project for 4th-year undergrads to do on FPGAs for the CEs who wanted to get closer to the silicon, but MOSIS project space was reserved for only the EEs.)

If you want a completely 'free' (as in GPL) cell phone experience, you can setup a OpenBTS transmitter and transmit at the 900mhz range which is commons property. To stay legal in the US, your antenna has to put out less than a watt, but the setup allows you to even use off-the-shelf phones and trunk into normal phone lines via standard POTS software. Your device would have to be something a-la http://alumni.media.mit.edu/~mellis/cellphone/ (just a janky setup, but just a proof-of-concept -- you can patch together components from DigiKey pretty easily these days; if you want free-silicon, I think the closest you're going to get is https://en.wikipedia.org/wiki/OsmocomBB or maybe some soft cores, but if you're actually going to take that soft core to tape-out, you're probably going to be running 6 figures just for masks...)


On the hardware side, there is a project "Free Calypso" to produce a completely libre (software, firmware, baseband, & hardware) "dumbphone" using the Calypso chipset.

Initially looking to reuse old phones with the Calypso chipsets, the project is now working on producing their own. Design files are completed; funding for the dev boards is about 66% complete.

https://www.freecalypso.org/fcdev3b.html

Mailing list is fairly active too.


What is the regulation that mandates a proprietary baseband?


The FCC has requirements for manufacturers to make sure that their radios output to-spec EMR. In addition to this, they've been working toward trying to stop people from being able to arbitrarily modify their radios.[1]

While (AFAIK) there isn't a regulation stopping someone from selling radios that have completely free software basebands, you can bet that the manufacturer will be prosecuted if users suddenly start outputting radio waves that don't follow regulations (suing users is harder than suing a manufacturer). As a result, there's a disincentive for manufacturers to ever sell free software radios (because by definition they would have to allow modification).

[1] https://www.infoq.com/news/2015/07/FCC-Blocks-Open-Source


Google has moved a lot of the platform code to their closed source parts. So for the purposes of security your typical Android phone is just as closed source as Windows. These days, unless you're running a classic Linux distro or BSD the OS code isn't auditable. And even if you were running Ubuntu on the phone the baseband is completely closed and almost always has memory access with no MMU so you should never trust a phone with anything important.


This is only the base code. The manufacturer modifies this code when building a ROM and can add anything. It should provide the modified sources, but many Chinese vendors do not.

Even if the manufacturer provides the code, it can preinstall additional closed-source programs, for example the Facebook app or some "telemetry" app. My Chinese no-name phone contained an app that was trying to send my phone number and other identifiers to China as part of a "sales report" (the exact URL was http://bigdata.adfuture.cn /reboot/salesCountInterface.do ). And one can only guess how much data Facebook collects.

What the end user gets is a phone with a binary blob inside.

I think there should be a strict requirement banning collecting any data without consent from the user: no "anonymous" "analytics" and telemetry, no crash reporting, no advertising IDs, no checkboxes checked by default. There can only be a legal solution to the problem of mass surveillance by software companies. Every byte your device sends to the network can end up in the hands of hackers from developing countries or the NSA.


The modifications installed by your phone company, etc. are not open source. The baseband chip's firmware is not open sourced. I've even heard of DMA being allowed over baseband as part of the Lawful Intercept Protocol.


Well my keyboard app is closed source, and I even imported the binary from the US to my country. Naturally I consider my phone compromised.


Large parts of iOS are available at opensource.apple.com, including the kernel, Objective-C runtime, and CoreFoundation. LLVM, Clang, and Swift are also open source.


Right. Security threats are never eliminated, only mitigated, relative to the cost of mitigation.


Based on my (admittedly limited) understanding of the human condition, it seems like it would be more accurate to say "WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be _lying_."


The Signal protocol (https://en.wikipedia.org/wiki/Signal_Protocol) has been vetted, and the code is online available to be audited:

- Signal code: https://github.com/whispersystems/

Telegram has had known flaws, which have been discussed in part here:

- Telegram protocol defeated. Authors are going to modify crypto-algorithm https://news.ycombinator.com/item?id=6948742

- A Crypto Challenge For The Telegram Developers https://news.ycombinator.com/item?id=6936539

- Telegram (initial discussion) https://news.ycombinator.com/item?id=6913456

https://hn.algolia.com/?query=telegram&sort=byPopularity&pre...


How much of that Telegram info applies to anything in the last 2 years or so?


They still haven't open sourced the server afaik. The thing that is a problem with Telegram is that they market on their security, but it's not secure by default, or on group chats. Moreover, it stores chat histories, and if you add someone to a group chat they get access to the historical data. A lot of folks use it without thinking about this, or understanding the implications.


The protocol is secure, but we have no idea if the implementations are secure (except Signal itself), because we can't audit them.

In fact, Facebook Messenger's implementation of Signal has very questionable security right out of the box, because if one party "reports" an encrypted conversation, the whole thing is decrypted and sent to facebook support staff.


So someone reporting a message to Facebook would be the equivalent of that person (either Alice or Bob) reporting and sending the content of the other person's encrypted conversation to a third party.

The Signal Protocol provides end-to-end encryption so you don't have to trust the intermediate parties/servers involved in relaying the message (e.g. you don't have to trust Facebook's servers), and to protect against the other person reporting and revealing your conversation to someone else, the Signal Protocol provides message repudiation [1], which effectively gives the sender plausible deniability because the receiving party cannot prove to a third party that a message came from you.

[1] https://en.wikipedia.org/wiki/Signal_Protocol#Properties


Yes, my concern is that this functionality is baked into the client and is at a high risk of being executable remotely.


It's not just about the protocol though, it's the whole stack, and the OSes that it runs on are frequently not secure. Also, the Signal app on Google Play requires your phone number and a Twilio API call to function. No thanks.


That's something we can't do anything about though.


We can't do anything about the phone number requirement?


The phone number is not required as per the Signal protocol -- it's an implementation detail -- another token could be used. At this time, phone number verification is used in the initial authentication flow to protect against someone else spoofing your phone number and pretending to be you.

See https://github.com/WhisperSystems/Signal-Android/issues/1181


Actually, a magic link sent to your email would be MUCH better. Still a central authority, but Gmail >>> any telecom.


It also doesn't have to be a central authority, since "anyone" (meaning: anyone who can afford to operate a mailserver, which is actually a surprisingly-high number) can be such an authority for one's own mail.


Practically nobody runs their own mail server these days. Email is extremely centralized.


one single actor < a few large actors you can choose from < lots of actors to choose from < medium size organizations can and do often run their own < individuals can ran their own (Email Is Here) < individuals commonly run their own

Let's not make the perfect the enemy of the good.


I still run my own for personal email (postfix + dovecot). Runs off a Linode VM atm and has no problem getting through to Gmail, Hotmail etc. users.


"Most people don't know rocket science" "Actually I know some rocket science!"


Just because practically nobody does doesn't mean practically nobody can, which was my point.


That's true, but not a given.


I used to run my own but I can't find anyone to relay the mail anymore.


> Also central authority but Gmail >>> any telecom

There are few of the big corps I trust as little as Google.


Source code isn't required to study what software may do.

If you were really worried about what a particular binary would do, trusting that the binary matched the source and studying runtime behavior would both be a waste of time compared to fully analyzing the binary in question.

If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.


>If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.

It would be great to find such a countermeasure, and be able to trigger it reliably, or assert the behavior on a permanent basis. Considering that particular weakness of such countermeasures though, wouldn't the safest [for the attacker] default countermeasure likely be to simply crash the device?


Could be.

A user that knows about malicious code (which you would have to in order to trigger it to go silent) in a binary just shouldn't use the binary at all though.

The broader point is more important: compiled software isn't a black box, treating it like a black box is not the only or best way to analyze it.


Security is not a binary. "Secure" is not something that can be evaluated without context, ie, a threat model. Security is something that comes with tradeoffs. If you require a 100% certainty that no adversary, no matter how well resourced, can obtain electronic communications from you when conducting active surveillance, your only defense is to stop using computers. Most of us don't live with that sort of a constraint, and thus can evaluate things that increase our relative security given a threat model of mostly passive surveillance by state actors and active malicious attacks from private parties that mostly just want our credit card numbers.


This post is grey, and I'm not quite sure why. It's a bit on the "pessimistic" side, but... that philosophy is actually spot on IMO when it comes to security. So why downvote this? I'm honestly a bit new to this community but to me this sceptic perspective as it pertains to software security is ... well, actually it isn't even enough. Is this a weakness w/HN where even justified pessimism is eschewed?


Oh, HN is plenty pessimistic...

But this sort of pessimism isn't really useful. The attitude that "anything is insecure if there is any closed-source software anywhere in the stack" means that it's impossible to advance security, because it's almost impossible to avoid binaries (i.e. firmware).

Apple, for example, has done a few things that are laudable in this field, e.g. risking a public court fight with the FBI to keep the iPhone secure. If we say that such actions are meaningless because they ship binaries, they have no incentive to do such things. Just rolling over and giving the US gov big-pipe access to everything like Yahoo did becomes the better business proposition.

Similarly, what do you answer when a friend who works at the EPA asks you how to securely contact a journalist? If it starts with ordering a custom open-firmware mainboard from somewhere in China, your advice will be ignored.


Practical security is all about risk management. And the first step is understanding what your risks are - not assuming or pretending they don't exist. Depending on the nature of the secrets your friend wants to share and who they are trying to hide from, advising them to avoid phones altogether might not be a bad idea. And falsely assuring them something is secure when that can't be confirmed could cause somebody a world of hurt.


It's not hard to demonstrate that apps are performing end-to-end encryption even if you don't have access to the source code. Reverse engineering this stuff is really pretty straightforward.


It's not just that they're performing encryption, but also assurance that (1) they're using the keys they declare and (2) they aren't sending other data over unannounced side-channels.

You can't just insert yourself in the message stream since the client and server use pinned, mutual certificate authentication. So you have to start from first-principles and step through decompiled code.


> they're using the keys they declare

I'm not sure what you mean here. It's easy to identify where the key comes from and whether the ciphertext is what you'd expect it to be in that case.

> they aren't sending other data over unannounced side-channels.

It's not straightforward to determine that even if you do have the source - you could imagine an implementation that deliberately leaks information through timing details without that being obvious from the code. At some point you have to trust that authors aren't doing something awful.

> So you have to start from first-principles and step through decompiled code.

Well no, because the first thing you can do there is just disable certificate pinning. But really, the difficulty of stepping through decompiled code is vastly overrated.


Unannounced side channels seem like by far the easiest thing to deal with there: send a 2 MB file, observe network patterns, raise an eyebrow if 2 MB gets sent over a channel that you didn't expect.

As for using the correct key, dismantle the Signal message envelope until you get your blob of encrypted message, then see if the same blob appears on the target device. Multiple keys? I imagine either correlating message size and network traffic (encrypting stuff twice could well show up), or going at it with a debugger.

Which is really the answer to all of these questions instead of any network shenanigans: you root your phone and attach a debugger, then step through what Signal is doing.

Not a security researcher, never reverse engineered anything for security reasons in my life.


What if it only does it at a much later point in time, or slowly via adding data to other comms channels? What if it only does it for small payloads by padding packet sizes to 1k? There are so many ways to get around this, unless we have open source and reproducible builds.


Right, which is why I said those blackbox methods were pretty rubbish. You step through it with a debugger.


The point isn't to nab every thought-to-be-encrypted conversation on Signal. The point is to evade detection while compromising high-value message streams. Stepping through with a debugger is never going to execute the feature-flagged "pwned" mode.


>Unannounced side channels seems like by far the easiest thing to deal with there; send a 2mb file, observe network patterns, raise an eyebrow if 2mb gets sent over a channel that you didn't expect.

Facebook distributes your 2 MB pic to many people; does it technically require more than 2 MB of your upload bandwidth? No. You only need to upload it once to their server.


This is actually an area where web apps have some advantage. You can inspect the network traffic using developer tools. You have cross-domain rules that restrict traffic. And the encryption is performed by the browser rather than in the app's black box.


You can do whatever you want with your system if you have complete control. So even pinned certificates are not a big deal: you can read the messages unencrypted, or you can remove the certificate checking from the code, for example.


It's much harder to demonstrate the app doesn't have a backdoor, or doesn't leak your data in some obscure way, or doesn't weaken the entropy when creating the keys.


While you can verify that the software is in fact encrypting the data and not sending anything it should not send, you can never know who has access to the encryption keys. They are created by WhatsApp, who knows how and where they store them and who can access them.


Why do you think it is completely meaningless?

I agree that it's weakened but I think it's still meaningful. If someone is choosing between WhatsApp and Allo the former is more likely to be properly encrypted.

After all, even though we can't verify it, WhatsApp has strong incentives to implement it properly, and OWS has strong incentives to only endorse WhatsApp's use of their protocol if they are convinced that it's done properly.

I really think "completely meaningless" is dangerously misleading hyperbole.


> this claim is rather meaningless when we don't have the possibility of auditing their source code

Open source is required not just for apps, but also for:

- operating systems

- drivers

- firmware

Then we can start to talk about privacy.


You forgot

- hardware


Have you tried learning assembly? It's not hard or anything.


How can you personally verify that any 3rd party service is doing what it claims?

Unless you're a security expert with plenty of time to comb through someone else's code, you're still relying on others to be truthful and competent. Even then you're relying on layers upon layers of software and hardware. Far too much for an individual to verify.


You rely on a net of diverse independent reviewers instead of a single entity with a particular interest.

A peer reviewed distributed trust net is much more trustworthy.


That single entity consists of many diverse individuals with differing ethics.

As much as the law might try to pretend, companies are not people. Uh, except in the sense that they are composed of people. So, they literally are people, people combined with capital.


People in a single entity are by definition not independent.


In that sense, no one is independent. We've all got friends and family, or at least people we know. By definition, if you've heard of someone else's software, that person had a social network by which they distributed the software to you.


Social dependency is not binary. Developers in a company are much more dependent on each other and the boss than reviewers of free software are.


Very true. Maybe I misread your previous comment as being more binary than you intended.

Keep in mind that reviewers of free software are also often employees and may have some agenda beyond pure altruism, even if the software isn't copyright of the employer. Open source is big business these days.


Maybe Open Source is big business, but not free software.


It all is. The person contributing to gcc needs to make money somehow.

Well, at least it could be considered tainted by self-interest if you're paranoid. My point was that commercial interest does not prevent quality/security.


There's no such net, nobody is signing the binaries.


Signing is not necessary. Only a comprehensible bug report or patch.


Without signing, 1000 people can say it is secure but the maintainer can still ship whatever they want.


We don't need to audit the source code. Just grab the binary, reverse engineer it, and study that.

There's an entire industry dedicated to reverse engineering software and studying its security properties. We call it the security industry. (Not every gig is white-box!)


There are no secure smartphones (devever.net)

https://news.ycombinator.com/item?id=10905643


If the CIA has to bypass it by hacking users' phones, it seems to imply the encryption itself is solid.


No, it means there are far easier ways to spy on stuff, that's all.


Hacking an individual's phone isn't "easy". It means we've made everything else harder.


Puff and bluff. Assume that they are only trying to save their own asses.


What a welcome shift in public sentiment. Mainstream media is starting to recommend end-to-end encryption, without back doors, for everybody. (Though the New York Times might represent the leading edge of the change in popular opinion.)


> (Though the New York Times might represent the leading edge of the change in popular opinion.)

Note this is an AP article.


You tend to see that sort of change every eight years or so. The fake liberals come back around to pretending to support all civil liberties again.

Ashcroft does X, it's evil and given intense scrutiny. Holder does X, it's mostly given a pass by the msm.

Bush does X, it's evil. Obama does X (eg regime change in Syria; what, no million person protests?), it's mostly given a pass by the msm.

That's how the media has functioned for decades. They'll get extremely loud during Trump's Presidency about domestic spying abuses, after eight years of giving the Obama Admin a sizable pass. The same will hold true about the egregious abuses directed at the press under Obama, when Trump does the same thing it'll be the end of the world.


> egregious abuses directed at the press under Obama

I'm curious, because I haven't heard of these before. Examples?


I remember reading a poll where European Parliament members claimed that encryption (esp. HTTPS and E2E) was the biggest obstacle to espionage. I'm glad to see that this is a widespread sentiment.


The CIA WikiLeaks dump might only tell us that breaking through other parts of the communication chain is easier than breaking the encryption.


Yes, this has always been the case: defenders put a foot of armor plating on all of the doors and windows, and attackers look for the key hidden in the fake rock outside, and simply come in the front door.


that's literally the point


Original article on AP News in case your NY Times free article count is up:

https://apnews.com/cf84bf54c2954de8baaa5fb6931a84d0/What-the...


Please consider subscribing to high quality newspapers, donating to NPR, or otherwise supporting journalists. IMHO the last few months have driven home the vital public service they provide. I recently read that Tom Hanks sends an espresso machine to the White House press corps every year, while I've been reading their articles in incognito mode.


Thanks, that's a good message. In this case, though, the original article came from AP News, so I don't think there's anything wrong with reading it there.


Another thing is important: trust. As a naive user I have no idea what's going on on my phone, hardware- or software-wise. We are essentially trusting these companies with everything. Encryption is no good if Apple and Google provide backdoors to their systems for the CIA or NSA.


Encryption in transit defeats dragnet surveillance. Forcing the NSA et al to actually break into the phones they're interested in substantially reduces the amount of information they can actually collect. They can't just tap internet backbones and read everything, like they do with plaintext communication.


But they can say "Hey, Google/Apple/Microsoft/Facebook/etc., give us access on your end for our dragnet to work after it comes in from encrypted transit. Also, this is an NSL, so neener neener."


That's not what it tells us. Encryption works to the degree it has always worked -- and cracking ciphertext has never been the weakest link -- but that's not the message we should be taking. The real message is often only evident in hindsight, years later, after it has been shaped, after the effects have percolated through the system and the effect on behavior becomes evident. It's non-linear system dynamics. The cause and effect are rarely obvious.


However, it is at least getting easier, to a degree, to opt in to communicating in private over the Internet. HTTPS is easier to deploy than it ever has been, and the Signal protocol seems to be holding up to scrutiny.

Hopefully one of the messages that emerges is that encryption is not scary and another good one would be that privacy is not deceitful.

The ability for the state to coerce you isn't going away though.


So the same thing the Snowden leak told us years ago. The fact that governments can't break state-of-the-art encryption itself shouldn't come as a surprise nowadays.


"Encryption has grown so strong that even the FBI had to seek Apple's help last year in cracking the locked iPhone used by one of the San Bernardino attackers. "

Nope, that was about setting precedent using a case that is very hard to argue against morally, so that they can erode privacy and protections on a wider scale.


It also tells us that Wikileaks is (still) important


Or is that what they want us to think


Using WikiLeaks to leak falsified documents to make us feel secure would be so evil.


Crypto Won't Save You Either, Peter Gutmann: https://www.youtube.com/watch?v=_ahcUuNO4so


It's not just software but hardware. There have been persistent controversies about encryption, random number generators, standards, organized infiltration and things like Intel ME and basebands in phones.

It's simply not possible for individuals or groups to vet this against nation-state adversaries on an ongoing basis. I think it's high time technologists accept this instead of trying to lull themselves and others into a false sense of security.

There are multiple layers of social trust in action which are broken because security services are now brazen and face no consequences.

There is no 'hacking' your way out of this. The solution is to try to restore the social trust by first understanding why it's suddenly OK to run mass surveillance operations in a 'free democratic country' and refusing to accept it. And then try to restore some of that trust by making sure there are consequences, proper oversight, and due process.


It's possible defeating encryption is the responsibility of another top secret department whose work remains unleaked.


Breaking PGP-like encryption would require a huge mathematical breakthrough in our understanding of prime numbers and how to find them. It would also require unknown math geniuses to work for said secret department, which is unlikely: most of the best mathematicians are already known, tend to work at universities or in public or private research, and publish their work there. Not saying this is impossible, but it's highly unlikely, bordering on crazy conspiracy territory.

Also, the fact that most governments tend to hack at the pre-encryption level and use social engineering to hack devices on encrypted networks kind of confirms they do not have the capability to break encryption.


The NSA is the single largest employer of mathematicians in the US. The Snowden leaks did not include any of the data from the departments actually involved in attacking cryptography. All we really know is that they don't have any breaks for their low-level mass surveillance tools.

Likewise with the CIA. From the 1% of the documents leaked so far, there's no evidence that they have the ability to crack modern encryption. But the documents leaked seem to come from a contractor (or were shared with a contractor) so there are likely internal-use-only tools with greater capability than shown in this leak.

That's not to say that they definitely have ways to break modern crypto, just that we can't prove that they don't given the material publicly known so far.


It's fairly difficult to do anything useful with ultra-secret magic decryption tools. You can passively observe, but taking any action, even subtle indirect action, means you're eventually going to alert a target to your capabilities.

Anyway, these are extraordinarily difficult problems which may not have good solutions outside of quantum computing. You can throw all the smart people and computing power in the world at a problem and still come up with almost nothing.


I'm glad someone finally pointed this out. Thank you and +1.


Does it? Maybe it's a huge false flag campaign.

You should always operate under the assumption that "they" can see everything they want to see on your internet connected device if they deem you important enough.

For example, what's with that one news story about government agencies being unable to break TrueCrypt? How did that get out? Sounds like a huge bullshit campaign to me, aimed at creating trust in TrueCrypt! (Yes, thank you very much, I know about VeraCrypt.)


Cryptanalytic capabilities of academia are on par with, or ahead of, the government's. In the past it might have been the case that governments were ahead of the curve, but academic crypto has progressed immensely in the past couple of decades.


Does anyone here have an air-gapped computer setup?

I'm thinking of doing something with raspberry pi.

I'm stuck at the part where it communicates (for my purposes, small amounts of ascii) with a non airgapped computer without using USB or networking.

I'm thinking about giving both machines a little speaker and microphone and using high frequency pulses to transfer the text.

Why, you may be wondering?

1. Airgapped system is impervious to penetration

2. Can be used for literally unbreakable communications.


That doesn't make sense. Once you connect it to the rest of the network, it's no longer "air gapped".

The phrase "air gapped" doesn't signify something special about air, it refers specifically towards breaking the connection (which is typically electrical) between that system and the rest of the network. Perhaps it is not the best term, and "completely isolated" would be better.

If you set up an IR LAN, or use sound, or whatever, then the system is no longer air gapped and you have created a potential vector for information leakage and potential penetration. Sure, probably nobody is going to bother, if the implementation is unique and nothing you have on that system is of particular value, but there have been a number of high-profile compromises of "air gapped" systems and networks (e.g. Iranian nuclear production facilities), that show it can be done even without an intentional connection if the desire is really there.

There are scenarios where partially isolated systems can offer a real benefit, though. I have periodically seen ideas for logging systems that use a 100BT (not Gigabit) Ethernet connection with the Tx pair cut, so that traffic can only ever go INTO the system and never back out again. The system sits on the far side of this one-way hardware gate, listening and logging, and is extremely difficult (although not impossible) to compromise because of the lack of feedback. Note if you want to do this, you need to use old 10 or 100BT network cards that don't have GigE capability, because I believe GigE uses all the pairs in the Ethernet cable in unpredictable ways; you don't have the old Tx pairs / Rx pairs / shield pairs like you used to be able to count on (and selectively cut). I think you'd need to make sure the cards didn't support auto MDIX as well.


> I have periodically seen ideas for logging systems that use a 100BT (not Gigabit) Ethernet connection with the Tx pair cut

That would only work with unidirectional protocols. Scratch TCP, and even UDP only works if the sender has a static ARP entry for the receiver, since with no return path it can't resolve the MAC address of the other side.

If you're going to do this you essentially have a one-way high speed serial link without the ability to error-correct.


Checksums and redundant data would mostly solve that. Look at what we do for deep-space transmissions, for example, where latency makes them basically one-way. I just feel like there are too many uncontrolled variables to make it worthwhile.


You can use something called a data diode. These are used in high security networks to provide secure one way communication (insecure->secure). The idea is that if there _is_ communication it's not possible to exclude compromise completely. However if you do get compromised no information can escape a network with one way traffic, so an attacker might only destroy your information.

Funnily enough, someone else thought to use a raspberry pi for this purpose too: https://www.raspberrypi.org/forums/viewtopic.php?t=58957&p=5...
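For what it's worth, the software side of such a one-way link is simple. Here is a minimal sketch in Python, assuming the hardware (cut Tx pair or optical diode) already enforces the one-way flow; the address, port, chunk size, and repeat count are made-up illustrations, and as noted upthread the sender would also need a static ARP entry since no replies can come back:

    # One-way ("data diode") UDP transfer: no ACKs are possible, so the
    # sender compensates with sequence numbers, a CRC, and blind repeats.
    import socket
    import struct
    import zlib

    DIODE_ADDR = ("192.168.7.2", 9000)  # hypothetical receiver behind the diode
    REPEATS = 3                          # redundancy instead of retransmit-on-NACK

    def send_oneway(payload: bytes, chunk_size: int = 1024) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        chunks = [payload[i:i + chunk_size]
                  for i in range(0, len(payload), chunk_size)]
        for seq, chunk in enumerate(chunks):
            datagram = struct.pack("!II", seq, zlib.crc32(chunk)) + chunk
            for _ in range(REPEATS):
                sock.sendto(datagram, DIODE_ADDR)

    def recv_oneway(port: int = 9000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        received = {}
        while True:  # a real receiver would reassemble and stop on an end marker
            data, _ = sock.recvfrom(65535)
            seq, crc = struct.unpack("!II", data[:8])
            chunk = data[8:]
            if zlib.crc32(chunk) == crc:   # silently drop corrupted datagrams
                received[seq] = chunk      # duplicates overwrite harmlessly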


Check out Google Tone: https://chrome.google.com/webstore/detail/google-tone/nnckeh...

"Broadcast any URL to computers within earshot. Google Tone turns on your computer's microphone (while the extension is on) and uses your computer's speakers to exchange URLs with nearby computers connected to the Internet. You can use Google Tone to send the URL for any web page, including news stories, pictures, documents, blog posts, products, YouTube videos, recipes—even search results. Any computer within earshot (including over a phone or Hangout) that also has the Google Tone extension installed and turned ON can receive a Google Tone notification."


If it's airgapped, how do you use it for communication?


I'm thinking about giving both machines a little speaker and microphone and using high frequency pulses to transfer the text. (modem)

I don't know enough about computer security to understand whether a specially crafted piece of morse code audio (transferred through actual sound waves) could be used as an exploit, but I'm leaning towards "implausible".
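If it helps, the encoding half of such an audio modem is only a few lines. A toy sketch using numpy, where each bit becomes a short burst at one of two frequencies; all parameters are arbitrary choices, and playback, capture, and decoding (e.g. an FFT or Goertzel filter per bit slot) are left to the sound hardware and further code:

    # Toy audio-FSK encoder: ASCII -> tones. Not a standard, just an
    # illustration of "high frequency pulses" carrying bits.
    import numpy as np

    RATE = 44100          # samples per second
    BIT_MS = 50           # duration of one bit
    F0, F1 = 4000, 6000   # tone for a 0 bit, tone for a 1 bit

    def encode_fsk(text: str) -> np.ndarray:
        samples_per_bit = int(RATE * BIT_MS / 1000)
        t = np.arange(samples_per_bit) / RATE
        bursts = []
        for byte in text.encode("ascii"):
            for i in range(8):                       # most significant bit first
                freq = F1 if (byte >> (7 - i)) & 1 else F0
                bursts.append(np.sin(2 * np.pi * freq * t))
        return np.concatenate(bursts)                # float samples in [-1, 1]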


I think that what imaginenore is suggesting is that if you are communicating between your public network-connected machine and your air gapped machine, there is no longer an air gap (barring some very specific use cases). The technique you're describing is actually an attack used to defeat security of machines not on untrusted networks[0].

[0] https://en.wikipedia.org/wiki/Air_gap_malware

EDIT: This is specifically the method I was thinking of: http://www.jocm.us/index.php?m=content&c=index&a=show&catid=...


Ahh.. remember BadBIOS? :) It did need USB flash drives to spread though. (See BadUSB.. and weep!)


I do remember BadBIOS, though looking it up now it seems it might have been a hoax(?) or at least vaporware. Personally, when I think of expertly hopping air gaps, Stuxnet always comes to mind.


Have you considered light instead of sound, a simple optical link either through air or fiber?

And it can be exploited in the same way as on any computer: somewhere in the path from reading the input, through processing the message, to sending the response there is a bug, and a specifically crafted sequence of inputs causes unintended things to happen, possibly allowing an attacker to execute arbitrary code.

However, if you specifically build a very simple interface and application, then you can keep the attack surface small as compared to your average computer. There will be no programmable NIC, no hardware driver, no TCP/IP stack, and probably only a single application processing messages and using the interface.

But this of course also depends heavily on the implementation. Controlling the GPIO pins of a Raspberry Pi with a Python application built on top of a Python library, the Python runtime, a GPIO driver, and Linux provides quite a bit of attack surface. If you use a microcontroller and a handful of lines of assembly to poll input and drive output pins, then the attack surface becomes really small.
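To make that contrast concrete, here is roughly what the heavyweight end of the spectrum looks like: a few lines of Python bit-banging one pin through the common RPi.GPIO library (pin number and bit rate are arbitrary picks). The code itself is trivial; the attack surface is the runtime, the library, the driver, and the kernel underneath it:

    # Bit-banging a single output pin from Python on a Raspberry Pi.
    import time
    import RPi.GPIO as GPIO

    TX_PIN = 17           # hypothetical BCM pin wired to the other machine
    BIT_SECONDS = 0.01    # 100 bits/second: slow, but simple and auditable

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TX_PIN, GPIO.OUT)

    def send_byte(byte: int) -> None:
        for i in range(8):                       # most significant bit first
            GPIO.output(TX_PIN, (byte >> (7 - i)) & 1)
            time.sleep(BIT_SECONDS)
        GPIO.output(TX_PIN, 0)                   # idle low between bytes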


Yes indeed! A few years ago I read about a project using infrared "lasers" for a LAN. (Can't find it now since the concept has become quite popular.)

I only wish I was that badass... I can barely do C, let alone asm. I imagine attempting those now would only make the system less secure! haha. I'm just getting started with Arduino. I think I'll keep it simple for now :)

I do plan to learn them (c/asm) some day, then I will be able to make the system even more secure. Thanks for the tips!


Then my notebook is 'airgapped': it connects to the internet via wifi :). More seriously, if it connects through unpopular means (such as a protocol over sound), that is just security through obscurity.


If andai is just connecting a full-function laptop to the internet with some ethernet-over-audio bridge, then you're right.

But don't you think he has a point if he's talking about a simple, single-purpose device like an arduino - say - that can't connect to the internet, but allows keyboard entry of plaintext, then encrypts it, sends ciphertext to a PC via audio?

In which case I think there are some advantages to using audio; if the device used wifi / ethernet / USB then you would have to trust a lot of code to be exploit-free:

- ethernet driver code
- wifi chip firmware
- kernel network code
- USB drivers, etc.

Whereas if he's written his own simple bytes-to-audio converter in a few hundred lines of code that can be audited, then I can be more easily convinced that it's not possible for an adversary to remotely install a keylogger or extract keys from my little encryption device.


Airgaps are not just the physical separation of systems, but also the auditing and filtering of what goes onto the airgapped machine.

One can get many of the properties of an air-gapped system by compartmentalizing one's activities: only playing media in the media-vm, only surfing the web in the web-vm, and only doing banking and high-security stuff in the security-vm. Many of these systems wouldn't need persistent storage. And each one has different requirements: for the VM doing banking you could remove the root certs for everything except the sites you care about. Each VM could be on its own VPN, and for the security-vm you could choose a network exit closest to the desired endpoint.

Copying media files from a guest to the host would compromise the air-gapped machine. The host should not do any general-purpose computing, nor should there be an account with special administrator privileges.

Look at https://www.qubes-os.org/


Airgapped means it has no communication.

But yes, in practice it is often useful to move data to and from it. In a "perfect" airgap this would probably mean you typing and reading the screen. That isn't the most convenient, however, and the more things that automatically access the machine, the less security you will have.

If you are going to connect it, I wouldn't bother making your own system; just use something standard (like ethernet). But then you have a firewalled system rather than an airgapped one.


Yes, I also considered using text input, and then something like converting encrypted bytes into English words for ease of input.

But I've noticed that not all my friends are very good at typing, so an automated solution is more user friendly... and a security solution that actually gets used is more secure than one that doesn't :)
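That bytes-to-words step is easy to sketch, in the spirit of the PGP word list: each ciphertext byte indexes a fixed 256-word list. The list below is truncated to four words purely for illustration; a real one needs 256 distinct, easy-to-type words:

    # Encode ciphertext bytes as words for manual entry, and back.
    WORDS = ["aardvark", "absurd", "accrue", "acme"]  # ...252 more needed
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def bytes_to_words(data: bytes) -> str:
        return " ".join(WORDS[b] for b in data)

    def words_to_bytes(text: str) -> bytes:
        return bytes(INDEX[w] for w in text.split())  # KeyError flags a typo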


There are plenty of (proximity dependent) attacks that could leverage the monitor or keyboard you're using to passively collect information from the system.

The only completely secure airgapped system is the one that's never powered on ;) (and kept physically safe).


Peer-to-peer IrLAN [1] is less prone to interference than sound, and being directional, it is harder to attack if the air gap is small. Nowadays it's probably obscure enough to make creating an exploit prohibitively expensive, even more so if you write your own drivers. With some electronics knowledge you could probably build your own simple transceivers too.


How does an obscure transport help you if it's still just a LAN connection?


In the scenario I am describing it would only ever be used to transmit ASCII for a cryptosystem. I do not require networking or internet or file transfer, just a low bitrate modem for text input and output.


Yes, I am considering precisely these kinds of solutions! There was an IR LAN project a while back (ten years?) but I can't remember the name (and the concept has skyrocketed in popularity making Googling difficult). Excellent suggestion :)


Honestly a floppy drive that you only connect to the raspi to transfer stuff is the only secure-ish way I can think of transferring data to your non-networked computer. Floppies are nice and dumb.

If you want a persistent, networked connection then just use a serial cable, but then... you're not airgapped anymore.


Not to suggest this is what's happening in this particular case, but if the CIA (or any other intelligence agency) did figure out a way to break some particular encryption protocol, wouldn't it be in their best interest to create fake internal documentation claiming they couldn't break that form of encryption, and "leak" that to Wikileaks?


Considering that Wikileaks allegedly has the new malware the CIA has been using, I think something like that would be part of the slew of documents in Vault 7. Of course, it might be confidential enough for only a select few to know, but I'm just saying the leak looks to me like it's revealing pretty hush-hush stuff.


It only tells you that encryption works against the tools included in the leak. It doesn't seem credible to me that the US government's tooling in this area is pretty much the same as what is generally available, given the billions invested in cyber stuff.

There are many reasons beyond self interest (like the viability of online commerce) that would lead an organization like the CIA to compartmentalize more advanced/strategic methods.


>It doesn't seem credible to me that the US government's tooling in this area is pretty much the same as what is generally available, given the billions invested in cyber stuff.

You cannot throw money at mathematics and expect it to change. Some crypto techniques could very well be secure against the hardware available now. And it might also be true that quantum computers are not anywhere close to being useful.


The only encryption I truly trust is one-time pad. Discrete log may be NP-intermediate (if P!=NP, which is open), and we know from the Snowden disclosures that NSA was working with U Maryland on a quantum computer which will be a reality at some point (Shor's algorithm). With 'collect it all,' today's ciphertext is tomorrow's plaintext. Always be skeptical.
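For anyone who hasn't seen it spelled out: the entire mechanism is one XOR against a truly random key as long as the message, used exactly once. A minimal sketch, with os.urandom standing in for the true randomness (e.g. a hardware RNG) that the information-theoretic guarantee actually requires:

    # One-time pad: ciphertext = plaintext XOR key. The same function
    # decrypts, since XORing the keystream in twice cancels out.
    import os

    def otp_keygen(length: int) -> bytes:
        return os.urandom(length)   # stand-in; a true RNG is required in principle

    def otp_xor(data: bytes, key: bytes) -> bytes:
        assert len(key) >= len(data), "pad must be at least as long as the message"
        return bytes(d ^ k for d, k in zip(data, key))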


Your downvotes courtesy of NSA's botnet ;)

Do you use OTP? I've done some little experiments.

I'm surprised at the lack of interest in the only form of encryption that is literally unbreakable, in this age of surveillance paranoia.


I have not. I had discussed a theoretical len(CT) == len(key) system with a friend for about 30 mins as a thought experiment, but we immediately poked a number of holes in it -- not the least of which being that we couldn't say anything about the security of the system on which it was deployed. Other questions: what to do with the key material files (their remnants would no doubt be left intact in NAND by opaque eMMC and SD controller implementations - and if not, some signal processing on the charge of the cells themselves combined with the regularity of whatever language was being used would give it up anyway - encrypting the key material might solve this to a degree). Also: where to get quality key material in the first place, and how to exchange it (NFC was discussed). I'll certainly take Claude Shannon's word on the security :)


1. Security of the system: I've been thinking about this lately. Below in this thread I mentioned a setup involving an airgapped non-Intel (e.g. RPi) computer that communicates ASCII in morse code. This machine would hold the keys and encrypt/decrypt.

2. What to do with the files? You'd need to keep them as long as you need to use them. In my scenario they would be on an SD card in the Raspberry Pi. Afterwards... there are many creative ways to destroy them :) I think an advantage of SD cards over HDDs is that they are small enough to reasonably melt.

3. Key exchange: exchange the key on physical media, in person. Ensure the key does not come into contact with a networked computer (ask your friend nicely) and keep it away from any untrusted USB devices.

4. Quality key material: Hardware random number generator (also done on an airgapped etc machine).

I think I've covered everything. The only thing the crypto people on IRC could really complain about (it seems they really don't like OTP?) was integrity: an attacker could modify the message if they guessed parts of it (see the sketch at the end of this comment).

I'm still figuring out how that would work (they assumed it would be used for a standard protocol with something like "From: andai@andai.tv" at the start of every message).

Now I'm just trying to make it so that anyone could set this up, which is turning out to be the trickiest step.

For the simplest form of encryption there sure aren't a lot of implementations out there...
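That integrity weakness is easy to demonstrate: XOR is malleable, so an attacker who correctly guesses a substring and its position can rewrite it without ever learning the key, which is why some authenticity check (a MAC) is needed on top of the pad. A self-contained sketch; the message, offset, and strings are invented for illustration:

    # OTP malleability: rewriting a guessed plaintext substring by
    # XORing the difference into the ciphertext; the key is never needed.
    import os

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    plaintext = b"From: andai.tv hello"
    key = os.urandom(len(plaintext))         # the pad
    ct = bytearray(xor(plaintext, key))

    guess, forged = b"andai.tv", b"evil.org"
    offset = 6                               # attacker's guessed position
    for i in range(len(guess)):
        ct[offset + i] ^= guess[i] ^ forged[i]

    print(xor(bytes(ct), key))               # b'From: evil.org hello'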


In theory, a reasonable hash _inside_ the OTP stream would prevent tampering. (A plain CRC would not: CRC is linear, so an attacker flipping known plaintext bits can fix up the checksum with the same XOR trick.)

TEMPEST and DPA are other things I didn't consider in our thought experiment, but if I really wanted to be thorough, I would have. (I suspect there's very little signal for either in the OTP scheme).

I think the key exchange (sneakernet) is what makes the OTP approach unwieldy. If the source of randomness is good, and keys are not reused, in theory, it's the highest quality system out there.


I am thinking about a system for transmitting keys through the mail.

A microSD card is small enough to conceal inside of something else. I'm thinking of some kind of packaging where you could easily tell if it has been opened, and it would be impossible to re-seal perfectly.

It does not matter if the key is intercepted, as long as the recipient knows this, and does not use the key.

--

As far as TEMPEST goes, I think at the point the adversary is physically near you, you've got bigger things than encryption to worry about.

You could wrap the raspberry pi's case in aluminium foil. I'm not sure if the usb power cable leaks any signals: wrap it too for good measure ;)


If you trust a one-time pad---which no one actually uses---you can trust stream ciphers, which are actually practical.


That doesn't follow at all...


Sure it does. A stream cipher is essentially the practical version of a one-time pad. Neither system is vulnerable to quantum attacks. For both, the weak link is in keeping the key secure. If you can keep a one-time pad secure, you can keep a 256-bit key secure.
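To make the analogy concrete, here is a sketch using ChaCha20 from the third-party Python `cryptography` package: a 256-bit key and a nonce are expanded into a keystream that is XORed with the plaintext, exactly the shape of a one-time pad with a generated pad:

    # Stream cipher as a "practical OTP": short key -> long keystream.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    key = os.urandom(32)      # the "pad" shrinks to 256 bits to protect
    nonce = os.urandom(16)    # must never repeat for the same key

    def chacha20_xor(data: bytes) -> bytes:
        cipher = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
        return cipher.encryptor().update(data)

    ct = chacha20_xor(b"attack at dawn")
    pt = chacha20_xor(ct)     # same operation decrypts: XOR twice cancels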


By all means, continue using RC4, A5, etc.


Shouldn't the headline be that, if you have a safe system, encryption can work?

Because as long as encryption that has been broken (or a system that has been compromised) is used, what you are basically doing is handing Mr. V a receipt showing that you have made a transaction...


The CIA Cyber Security division is a joke compared to the NSA and Mossad.


And the campaign against encryption that Comey talked about last fall has begun. If the United States wants to maintain economic dominance over the global IT sector, we must defend our data.


If it didn't you wouldn't see Comey all over the place trying to promote his encryption backdoors.

Ironically enough, he's promoting them while saying "Americans don't have absolute privacy."

Yes, we know. That's why we're trying to use encryption more...But thanks for reminding us, James.


Conclusion: governments must always push for centralization, monopolies, and weak clients, and against free software, in order to keep control. Thus free software projects cannot benefit from any long-term relationship with even a democratic state entity.

