The NSA itself is concerned that quantum computing will be a serious threat to encryption in the near future.
Keep in mind that the NSA and god knows who else are storing encrypted communications to break them later.
Quantum computing will defeat RSA, DH, ECC, and asymmetric crypto in general, but it will only weaken symmetric crypto (e.g. AES) by a factor of two: Grover's algorithm effectively halves the key length.
So, according to my Internet research: if your symmetric key is twice as long as it needs to be, it is future-proof.
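To make the "factor of two" concrete, here's the standard back-of-the-envelope arithmetic (a sketch of the usual claim, not a precise cost model):

```python
# Grover's algorithm searches a k-bit keyspace in about 2^(k/2) operations,
# so a symmetric key's effective strength is roughly halved under a quantum
# adversary. Classical brute force still costs ~2^k.

def effective_symmetric_bits(key_bits: int, quantum: bool) -> int:
    """Effective security level of a symmetric key, in bits."""
    return key_bits // 2 if quantum else key_bits

for k in (128, 256):
    print(f"AES-{k}: classical {effective_symmetric_bits(k, False)} bits, "
          f"quantum ~{effective_symmetric_bits(k, True)} bits")
```

Which is why AES-256 is usually cited as the "future-proof" choice: even halved, 128 bits remains out of brute-force reach.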
Also (and please correct me if I'm wrong) I believe the triple encryption Serpent(Twofish(AES)) available in VeraCrypt (a TrueCrypt fork) even protects against weaknesses that may be discovered in any one of these ciphers: an attacker would have to defeat all three.
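To illustrate the cascade structure (not the real ciphers: each layer below is a toy SHA-256-based stream cipher standing in for AES/Twofish/Serpent), here's a sketch showing how, with independent keys, every layer must be stripped to recover the plaintext:

```python
# Toy cascade encryption sketch. Each "cipher" is a SHA-256 keystream in
# counter mode, NOT a real block cipher -- the point is only the structure:
# ct = Layer3_k3(Layer2_k2(Layer1_k1(pt))), with independent keys per layer.
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (its own inverse)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def cascade_encrypt(keys, plaintext: bytes) -> bytes:
    ct = plaintext
    for k in keys:                 # apply innermost layer first
        ct = stream_xor(k, ct)
    return ct

def cascade_decrypt(keys, ciphertext: bytes) -> bytes:
    pt = ciphertext
    for k in reversed(keys):       # peel layers in reverse order
        pt = stream_xor(k, pt)
    return pt
```

Note that real block ciphers are not involutions, so decryption genuinely has to peel the layers in reverse order; the XOR streams here just happen to commute. The independent-keys part matters: reusing one key across layers would collapse the security argument.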
How would you propose to securely exchange the keys for the several algorithms? If there is enough meta or implied information for you to know what algorithms are used, an attacker would also know what algorithms are in use.
The obstacle isn't a lack of randomness or an excess of decipherability, but the lack of a quantum-resistant algorithm.
Ignoring quantum computing, doubling up algorithms doesn't really protect you against unknown unknowns all that much, as there is a good chance that a massive flaw in one of them could apply to the others (since they are all somewhat based on the same "problem").
When you start layering lattice based crypto with traditional crypto the payoff might change and make it more viable.
There could be a double-tree PKI, where nodes in the tree are represented by two key pairs in different kinds of key spaces, with the different signatures terminating in two ephemeral key pairs, which are then used for key derivation, and the resulting secrets perhaps concatenated and boosted into a higher key space. The complexity would be mind-boggling. And weaknesses in one half of the tree could translate into a security reduction on the final symmetric key, so it would have to be way overpowered to be effective.
Not a simple problem to solve.
If you're using encryption to keep your data safe (e.g. local files, full disk encryption), then you don't actually need asymmetric encryption.
Additionally: unless you generated all the bytes you are encrypting yourself and never transmitted them over a network (e.g. video/pictures you took yourself and then stored to your own encrypted disk), your data could be compromised at transmission time by quantum-breakable encrypted communication protocols. I would wager that most data worth surveilling goes over the network at some point using TLS.
Done right, this means quantum computing won't get an attacker access to Dropbox-like stuff. (Dropbox itself probably doesn't work like that; it'll store the symmetric keys server-side somehow.)
There's no encryption there at all, except TLS in transit.
E.g. suppose you want to encrypt your logs: with asymmetric encryption you can store the public key on all your servers, encrypt your logs as you generate them, and discard the plaintext as soon as possible. You keep your private key hidden away on a secure machine, and only use it when you actually want to look at your logs (which doesn't happen all that often).
It's really cheap to store ALL phone calls these days, and just "playback" when you get the legal search warrant.
Contrast with deriving a key using cryptographic primitives, which can accept low-quality randomness (as long as there is sufficient total entropy) that can be easily and transparently collected.
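A sketch of that idea using only the standard library (the timer samples below are stand-ins for whatever low-quality entropy sources you would actually collect; don't use timer bits alone for real keys):

```python
# Condense a pool of biased, low-quality samples into a uniform 256-bit key
# with a standard KDF. The KDF tolerates biased input as long as the pool's
# total entropy is sufficient.
import hashlib
import time

pool = b""
for _ in range(1000):
    # stand-in sample: low byte of a high-resolution timer (biased on purpose)
    pool += (time.perf_counter_ns() & 0xFF).to_bytes(1, "big")

# PBKDF2-HMAC-SHA256 condenses the pool into a fixed-size uniform key.
key = hashlib.pbkdf2_hmac("sha256", pool, b"example-salt", 100_000)
print(len(key), "byte key")
```

The salt and iteration count here are illustrative; the point is only that the derivation step, not the raw samples, is what produces a usable key.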
But that was never the problem. The problem is, now what? To use this OTP you need to securely deliver pads to everybody you'll ever send a message to. So, OTP is practical for a handful of secret agents who'll receive messages of a few dozen words per year from a single controller, and useless for most of us in the real world.
_This_ problem is why we have public key cryptography.
The only thing that OTP buys you is that you can exchange the pads at your convenience any time before.
With a few GB (a couple of bucks in a supermarket will buy an 8 GB USB stick) you can communicate in text about sixteen thousand books' worth of words.
In other words, to deplete the pad you would have to write sixteen thousand books.
I think that's pretty convenient, as far as literally unbreakable encryption goes!
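The encryption itself really is a few lines; everything hard lives in generating, delivering and destroying the pad. A minimal sketch:

```python
# One-time pad: XOR with a truly random pad of the same length, used once.
import secrets

message = b"meet at dawn"
pad = secrets.token_bytes(len(message))            # must be truly random, never reused
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))
assert recovered == message

# The thread's pad-budget arithmetic: an 8 GB stick at roughly 500 KB of
# plain text per book covers about 16,000 books' worth of messages.
print(8_000_000_000 // 500_000)
```

The 500 KB-per-book figure is an assumption for the arithmetic; the exact number doesn't matter, only the order of magnitude.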
It's just that key-sharing is basically the most complicated and vulnerable part of modern cryptography.
We are of two minds about IPsec. On the one hand, IPsec is far better than any IP security protocol that has come before: Microsoft PPTP, L2TP, etc.
On the other hand, we do not believe that it will ever result in a secure operational system.
It is far too complex, and the complexity has led to a large number of ambiguities, contradictions, inefficiencies, and weaknesses.
It has been very hard work to perform any kind of security analysis; we do not feel that we fully understand the system, let alone have fully analyzed it.
Doubled key size does not mean twice as hard to break. Algorithm weaknesses are a thing.
This WP is pretty accurate though. It describes the world we think will exist after legitimate quantum computing.
Sort of like trying to measure a building to the nearest inch is harder than nearest foot, and every added digit of accuracy is even harder.
It's not just the number of bits that affects signal to noise ratio, though... It's also the complexity of the calculation; more gate operations and longer storage in memory leads to more quantum decoherence. And AFAIK, each type of calculation must be implemented in hardware, because reversible quantum gates must be used. This means you would need a specialized chip for each algorithm. I haven't confirmed this understanding with someone that specializes in QC, but I don't see a way around it, other than to combine many algos onto a single chip, sharing as many gates as possible, but degrading S/N further.
Something tells me, for an intelligence agency, building a quantum computer only for the purpose of breaking cryptographic keys would be a worthwhile investment.
An investment they have already made:
The effort to build "a cryptologically useful quantum computer" -- a machine exponentially faster than classical computers -- is part of a $79.7 million research program called "Penetrating Hard Targets."
Scott Aaronson's blog has some good background information for the layman.
For example, the NIST standard for the controversial dual elliptic curve deterministic random bit generator (Dual_EC_DRBG) involves two numbers, P and Q, which are about 100 digits long. To date, the largest integer factored on a quantum computer is reported to be around 200,000...
New attacks are discovered from time to time. It is unlikely someone will try to bruteforce your encrypted data. Much more likely that some vulnerability will be discovered in AES or the way you generate keys for it.
EDIT: nevermind, i've now read through the thread more and have found answers. :)
https://en.wikipedia.org/wiki/Post-quantum_cryptography
pardon my ignorance, but isn't this an inevitability, not just a possibility?
Firstly, quantum computing at scale _might_ be possible in our universe, but it might not. One of the spookiest things that might still be true is something called finite non-local hidden state. In this scenario the whole universe has some sort of hidden state, a bit like the seed value of a Minecraft world. Quantum computing in a universe with finite non-local hidden state just weirdly "doesn't work" when you scale it up, because it's basically using the universe's hidden state as magic working storage, and that runs out. Simulation nuts would tell you finite non-local hidden state makes it pretty clear we're in a simulation, but then they would say that...
Secondly it says "near future". We have every reason to believe there will be serious obstacles to scaling up quantum computing. Somebody else gave the example of Babbage's engine. Just because we can conceive of fusion power generation doesn't mean it's going to happen next week.
Or at all. At the start of the 20th century some mathematicians thought all of mathematics could be formulated as a handful of assumptions plus a huge chain of inference. It was just a matter of writing it out precisely. Surely an inevitability. The project was begun by mathematicians and philosophers, and it was going pretty well (proving that 1+1=2, for example) until this chap Gödel came along and straight up proved it impossible with his incompleteness theorems: if your system can do 1+1=2, then Gödel shows how to write out effectively the equivalent of "This statement is unprovable" and blow up the whole system.
So the nice thing about QC research is that we either get shiny new computers to play with, or we learn something new about Quantum Mechanics.
but i appreciate you reading between the lines and following up with a great response- i have a lot to Google. :)
If it's not the near future, we might move to post-quantum algorithms (or quantum cryptography, a field almost entirely unrelated to classical cryptography which involves carefully moving entangled pairs of particles around) before quantum computers become any good at cryptanalyzing realistically-sized encryption algorithms.
It is also extremely annoying that programs like GPG do not support generating large RSA key sizes such as 15360 bits, which would require many more qubits to break (and since RSA-4096 and RSA-2048 are much more commonly used, a quantum computer capable of breaking larger keys might never be built, or only in much smaller numbers).
That's quite a bummer. I finally managed to convince someone to use gpg and now I learn that this just makes sure your emails will be read/analyzed by anyone who has an interest in it in the future.
So it isn't necessarily about what I am communicating or to whom I talk, but the fact that it seems impossible for society as a whole to hide the details of its functioning.
I viewed pgp/gpg as a tool that could counter this. Convincing people to use it would've been an uphill battle but there would have been a slim chance of succeeding to a useful degree.
All the headlines will be: "Heisenberg uncertainty principle violated; science, technology and life as you know it will change".
This has nothing to do with 'engineering challenges'. You are dealing with cold, hard physics; the barrier is there because of the limits of our understanding of the Standard Model.
* If there is a weak layer anywhere in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure
* If the source code is not available for review, the software is not secure
* If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure
* Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.
* On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.
* If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.
I could go on, and I'm FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the "secure" option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:
* The dev team seems like honest and capable people
* Someone I trust or some famous person said this software is secure
* They have a home page full of buzzwords and crypto jargon
* They threw some code up on github
* I heard they are secure in half a dozen tweets and media channels
To me, security is not a binary property but rather a sliding scale. WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth. No hacker has demonstrated that WhatsApp are lying and the Wikileaks dump suggests the CIA has been unable to intercept messages in transit. Given this information I would rate WhatsApp at least 'reasonably secure'.
I'm not giving much credence to the various "WhatsApp backdoor" allegations, but I'm curious as to why they'd have a financial incentive to provide privacy.
Most of their userbase likely still doesn't care about security and they do belong to Facebook - so if anything, they'd have a financial incentive not to use effective crypto.
If hackers could get access to WhatsApp and dump all messages to Wikileaks it would make the company look very bad and a significant number of users would switch to something else. If security is not that important to users, why pretend to add end-to-end encryption at all?
Teen Vogue just suggested people should use WhatsApp instead of Snapchat because it does end-to-end crypto. I don't think it's true any more that the general public doesn't care about security, if it was ever true.
Facebook cares mostly about penetration for Whatsapp, to ensure that no other messaging app takes over.
Free product advertising worth $N targeted to the more influential product adopters, who will then amplify said advertisements.
That's my guess, anyway.
If you want a completely free software smartphone experience, it is simply not possible at the moment. Even Replicant still hasn't cracked the baseband puzzle (and is still struggling with the firmware for a couple of phones).
So no, Android is definitely proprietary -- even if some parts are not.
If you want a completely 'free' (as in GPL) cell phone experience, you can set up an OpenBTS transmitter and transmit in the 900 MHz range, which is commons property. To stay legal in the US, your antenna has to put out less than a watt, but the setup allows you to use off-the-shelf phones and trunk into normal phone lines via standard POTS software. Your device would have to be something a la http://alumni.media.mit.edu/~mellis/cellphone/ (a janky setup, but a proof-of-concept -- you can patch together components from DigiKey pretty easily these days; if you want free silicon, I think the closest you're going to get is https://en.wikipedia.org/wiki/OsmocomBB or maybe some soft cores, but if you're actually going to take that soft core to tape-out, you're probably going to be running six figures just for masks...)
Initially looking to reuse old phones with Calypso chipsets, the project is now working on producing its own. Design files are complete; funding for the dev boards is about 66% there.
Mailing list is fairly active too.
While (AFAIK) there isn't a regulation stopping someone from selling radios that have completely free software basebands, you can bet that the manufacturer will be prosecuted if users suddenly start outputting radio waves that don't follow regulations (suing users is harder than suing a manufacturer). As a result, there's a disincentive for manufacturers to ever sell free software radios (because by definition they would have to allow modification).
Even if the manufacturer provides the code, it can preinstall additional closed-source programs, for example the Facebook app or some "telemetry" app. My Chinese noname phone contained an app that was trying to send my phone number and other identifiers to China as part of a "sales report" (the exact URL was http://bigdata.adfuture.cn/reboot/salesCountInterface.do ). And one can only guess how much data Facebook collects.
What the end user gets is a phone with a binary blob inside.
I think there should be a strict requirement banning the collection of any data without consent from the user. No "anonymous" "analytics" and telemetry, no crash reporting, no advertising IDs, no checkboxes checked by default. There can only be a legal solution to the problem of mass surveillance by software companies. Every byte your device sends to the network can end up in the hands of hackers from developing countries or the NSA.
- Signal code: https://github.com/whispersystems/
Telegram has had known flaws, which have been discussed in part here:
- Telegram protocol defeated. Authors are going to modify crypto-algorithm https://news.ycombinator.com/item?id=6948742
- A Crypto Challenge For The Telegram Developers https://news.ycombinator.com/item?id=6936539
- Telegram (initial discussion) https://news.ycombinator.com/item?id=6913456
In fact, Facebook Messenger's implementation of Signal has very questionable security right out of the box, because if one party "reports" an encrypted conversation, the whole thing is decrypted and sent to Facebook support staff.
The Signal Protocol provides end-to-end encryption so you don't have to trust the intermediate parties/servers involved in relaying the message (e.g. you don't have to trust Facebook's servers). To protect against the other person reporting and revealing your conversation to someone else, the Signal Protocol provides message repudiation, which effectively gives the sender plausible deniability: the receiving party cannot prove to a third party that a message came from you.
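The deniability property falls out of using shared-key MACs rather than signatures. A minimal sketch of the core idea (this is the principle, not Signal's actual protocol code):

```python
# Why a shared-key MAC is deniable: both ends hold the same key, so the
# receiver can produce exactly the tag the sender would have produced. A
# third party shown (message, tag) cannot tell which party generated it,
# unlike a digital signature, which only the holder of the private key
# could have made.
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)   # established earlier via key exchange

def tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

msg = b"this message is authenticated, but not signed"
senders_tag   = tag(shared_key, msg)   # Alice authenticates the message
receivers_tag = tag(shared_key, msg)   # Bob can forge the identical tag
assert hmac.compare_digest(senders_tag, receivers_tag)
```

So the tag convinces Bob the message came from Alice (only they hold the key), while proving nothing to anyone else, since Bob could have fabricated the whole transcript.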
Let's not make the perfect the enemy of the good.
There are few big corps I trust as little as Google.
If you were really worried about what a particular binary would do, trusting that the binary matched the source and studying runtime behavior would both be a waste of time compared to fully analyzing the binary in question.
If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.
It would be great to find such a countermeasure, and be able to trigger it reliably, or assert the behavior on a permanent basis. Considering that particular weakness of such countermeasures though, wouldn't the safest [for the attacker] default countermeasure likely be to simply crash the device?
A user that knows about malicious code (which you would have to in order to trigger it to go silent) in a binary just shouldn't use the binary at all though.
The broader point is more important: compiled software isn't a black box, treating it like a black box is not the only or best way to analyze it.
But this sort of pessimism isn't really useful. The attitude that "anything is insecure if there is any closed-source software anywhere in the stack" means that it's impossible to advance security, because it's almost impossible to avoid binaries (i.e. firmware).
Apple, for example, has done a few things that are laudable in this field, e.g. risking a public court fight with the FBI to keep the iPhone secure. If we say that such actions are meaningless because they ship binaries, they have no incentive to do such things. Just rolling over and giving the US government big-pipe access to everything, like Yahoo did, becomes the better business proposition.
Similarly, what do you answer when a friend who works at the EPA asks you how to securely contact a journalist? If it starts with ordering a custom open-firmware mainboard from somewhere in China, your advice will be ignored.
You can't just insert yourself in the message stream since the client and server use pinned, mutual certificate authentication. So you have to start from first-principles and step through decompiled code.
I'm not sure what you mean here. It's easy to identify where the key comes from and whether the ciphertext is what you'd expect it to be in that case.
> they aren't sending other data over unannounced side-channels.
It's not straightforward to determine that even if you do have the source - you could imagine an implementation that deliberately leaks information through timing details without that being obvious from the code. At some point you have to trust that authors aren't doing something awful.
> So you have to start from first-principles and step through decompiled code.
Well no, because the first thing you can do there is just disable certificate pinning. But really, the difficulty of stepping through decompiled code is vastly overrated.
As for using the correct key, dismantle the signal message envelope until you get your blob of encrypted message. Then see if the same blob appears on the target device. Multiple keys? I imagine either correlating message size and network traffic (encrypting stuff twice could well show up), or going at it with a debugger.
Which is really the answer to all of these questions instead of any network shenanigans. You root your phone and attach a debugger, then step through what signal is doing.
Not a security researcher, never reverse engineered anything for security reasons in my life.
Facebook distributes your 2MB pic to many people, does it technically require more than 2MB of your upload bandwidth? No. You only need to upload it once to their server.
I agree that it's weakened but I think it's still meaningful. If someone is choosing between WhatsApp and Allo the former is more likely to be properly encrypted.
After all, even though we can't verify it WhatsApp has strong incentives to implement it properly, and OWS has strong incentives to only endorse WhatsApps use of their protocol if they are convinced that it's done properly.
I really think "completely meaningless" is dangerously misleading hyperbole.
Open source is required not just for apps, but also for:
- operating systems
Then we can start to talk about privacy.
Unless you're a security expert with plenty of time to comb through someone else's code, you're still relying on others to be truthful and competent. Even then you're relying on layers upon layers of software and hardware. Far too much for an individual to verify.
A peer reviewed distributed trust net is much more trustworthy.
As much as the law might try to pretend, companies are not people. Uh, except in the sense that they are comprised of people. So, they literally are people: people combined with capital.
Keep in mind that reviewers of free software are also often employees and may have some agenda beyond pure altruism, even if the software isn't copyright of the employer. Open source is big business these days.
Well, at least it could be considered tainted by self-interest if you're paranoid. My point was that commercial interest does not prevent quality/security.
There's an entire industry dedicated to reverse engineering software and studying its security properties. We call it the security industry. (Not every gig is white-box!)
Note this is an AP article.
Ashcroft does X, it's evil and given intense scrutiny. Holder does X, it's mostly given a pass by the msm.
Bush does X, it's evil. Obama does X (eg regime change in Syria; what, no million person protests?), it's mostly given a pass by the msm.
That's how the media has functioned for decades. They'll get extremely loud during Trump's Presidency about domestic spying abuses, after eight years of giving the Obama Admin a sizable pass. The same will hold true about the egregious abuses directed at the press under Obama, when Trump does the same thing it'll be the end of the world.
I'm curious, because I haven't heard of these before. Examples?
Hopefully one of the messages that emerges is that encryption is not scary and another good one would be that privacy is not deceitful.
The ability for the state to coerce you isn't going away though.
Nope, that was about setting precedent using a case that is very hard to argue against morally, so that they can erode privacy and protections on a wider scale.
Ironically enough, he's promoting them while saying "Americans don't have absolute privacy."
Yes, we know. That's why we're trying to use encryption more...But thanks for reminding us, James.
It's simply not possible for individuals or groups to vet this against nation-state adversaries on an ongoing basis. I think it's high time technologists accepted this instead of trying to lull themselves and others into a false sense of security.
There are multiple layers of social trust in action which are broken because security services are now brazen and face no consequences.
There is no 'hacking' your way out of this. The solution is to try to restore the social trust: first by understanding why it's suddenly OK to run mass surveillance operations in a 'free democratic country' and refusing to accept it, and then by making sure there are consequences, proper oversight and due process.
Also, the fact that most governments tend to hack at the pre-encryption level and use social engineering to compromise devices on encrypted networks suggests they do not have the capability to break encryption.
Likewise with the CIA. From the 1% of the documents leaked so far, there's no evidence that they have the ability to crack modern encryption. But the documents leaked seem to come from a contractor (or were shared with a contractor) so there are likely internal-use-only tools with greater capability than shown in this leak.
That's not to say that they definitely have ways to break modern crypto, just that we can't prove that they don't given the material publicly known so far.
Anyway, these are extraordinarily difficult problems which may not have good solutions outside of quantum computing. You can throw all the smart people and computing power in the world at a problem and still come up with almost nothing.
You should always operate under the assumption that "they" can see everything they want to see on your internet connected device if they deem you important enough.
For example, what's with that one news story about government agencies being unable to break TrueCrypt. How did that get out? Sounds like a huge bullshit campaign to me, aimed at creating trust in TrueCrypt! (Yes thank you very much I know about VeraCrypt)
There are many reasons beyond self interest (like the viability of online commerce) that would lead an organization like the CIA to compartmentalize more advanced/strategic methods.
You cannot throw money at mathematics and expect it to change. Some crypto techniques could very well be secure with the hardware available now. And it might also be true that quantum computers are nowhere close to being useful.
I'm thinking of doing something with raspberry pi.
I'm stuck at the part where it communicates (for my purposes, small amounts of ascii) with a non airgapped computer without using USB or networking.
I'm thinking about giving both machines a little speaker and microphone and using high frequency pulses to transfer the text.
Why, you may be wondering?
1. Airgapped system is impervious to penetration
2. Can be used for literally unbreakable communications.
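The high-frequency-pulse idea can be sketched as simple binary FSK: map each bit to one of two tones and concatenate the samples. This is just the raw encoder; the frequencies, bit rate, framing and error correction are all assumptions you'd tune for real hardware:

```python
# Binary FSK encoder sketch: each bit of each ASCII byte becomes a short
# burst of one of two tones. Real use needs a preamble for sync, framing,
# and error correction -- this only produces the raw waveform samples.
import math

RATE = 44100          # samples per second
BIT_TIME = 0.01       # 10 ms per bit -> 100 bits/s
F0, F1 = 4000, 6000   # assumed tone frequencies for 0 and 1

def encode(text: str) -> list[float]:
    samples = []
    for byte in text.encode("ascii"):
        for i in range(8):                      # MSB first
            bit = (byte >> (7 - i)) & 1
            freq = F1 if bit else F0
            n = int(RATE * BIT_TIME)            # 441 samples per bit
            samples += [math.sin(2 * math.pi * freq * t / RATE)
                        for t in range(n)]
    return samples

wave = encode("hi")   # 2 bytes * 8 bits * 441 samples = 7056 samples
```

Decoding on the other side would be the matching step: slice the recording into bit windows and compare energy at F0 versus F1 (e.g. with a Goertzel filter).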
The phrase "air gapped" doesn't signify something special about air; it refers specifically to breaking the connection (which is typically electrical) between that system and the rest of the network. Perhaps it is not the best term, and "completely isolated" would be better.
If you set up an IR LAN, or use sound, or whatever, then the system is no longer air gapped and you have created a potential vector for information leakage and potential penetration. Sure, probably nobody is going to bother, if the implementation is unique and nothing you have on that system is of particular value, but there have been a number of high-profile compromises of "air gapped" systems and networks (e.g. Iranian nuclear production facilities), that show it can be done even without an intentional connection if the desire is really there.
There are scenarios where partially isolated systems can offer a real benefit, though. I have periodically seen ideas for logging systems that use a 100BT (not Gigabit) Ethernet connection with the Tx pair cut, so that traffic can only ever go INTO the system and never back out again. The system sits on the far side of this one-way hardware gate, listening and logging, and is extremely difficult (although not impossible) to compromise because of the lack of feedback. Note if you want to do this, you need to use old 10 or 100BT network cards that don't have GigE capability, because I believe GigE uses all the pairs in the Ethernet cable in unpredictable ways; you don't have the old Tx pairs / Rx pairs / shield pairs like you used to be able to count on (and selectively cut). I think you'd need to make sure the cards didn't support auto MDIX as well.
That would only work with unidirectional protocols. Scratch TCP, and probably even UDP, since the sender would need to learn the MAC address of the other side (ARP requires a reply).
If you're going to do this you essentially have a one-way high speed serial link without the ability to error-correct.
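Without a return path there are no ACKs or retransmits, so each frame has to carry its own integrity check and the receiver simply drops anything corrupt. A minimal framing sketch (real deployments would add sequence numbers and forward error correction):

```python
# Self-contained frame for a receive-only link: length header + payload +
# CRC-32 trailer. The receiver verifies and either accepts the payload or
# silently discards the frame -- there is no way to ask for a resend.
import struct
import zlib

def pack_frame(payload: bytes) -> bytes:
    header = struct.pack(">I", len(payload))
    crc = struct.pack(">I", zlib.crc32(header + payload))
    return header + payload + crc

def unpack_frame(frame: bytes):
    """Return the payload, or None if the frame fails its checks."""
    header, payload, crc = frame[:4], frame[4:-4], frame[-4:]
    if struct.unpack(">I", crc)[0] != zlib.crc32(header + payload):
        return None
    if struct.unpack(">I", header)[0] != len(payload):
        return None
    return payload

frame = pack_frame(b"log line 42")
assert unpack_frame(frame) == b"log line 42"
# Flip one payload byte: the CRC no longer matches and the frame is dropped.
bad = frame[:5] + bytes([frame[5] ^ 0xFF]) + frame[6:]
assert unpack_frame(bad) is None
```

CRC-32 only detects corruption; it doesn't authenticate, so for anything adversarial you'd replace it with a MAC over the frame.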
Funnily enough, someone else thought to use a raspberry pi for this purpose too: https://www.raspberrypi.org/forums/viewtopic.php?t=58957&p=5...
I don't know enough about computer security to understand whether a specially crafted piece of morse code audio (transferred through actual sound waves) could be used as an exploit, but I'm leaning towards "implausible".
EDIT: This is specifically the method I was thinking of: http://www.jocm.us/index.php?m=content&c=index&a=show&catid=...
And it can be exploited the same way it happens on any computer: somewhere in the path from reading the input, through processing the message, to sending the response there is a bug, and a specifically crafted sequence of inputs causes unintended things to happen, possibly allowing an attacker to execute arbitrary code.
However, if you specifically build a very simple interface and application, then you can keep the attack surface small as compared to your average computer. There will be no programmable NIC, no hardware driver, no TCP/IP stack, and probably only a single application processing messages and using the interface.
But this of course also depends heavily on the implementation. Controlling the GPIO pins of a Raspberry Pi with a Python application built on top of a Python library, Python itself, a GPIO driver and Linux provides quite a bit of attack surface. If you use a microcontroller and a handful of lines of assembly to poll input and drive output pins, then the attack surface becomes really small.
I only wish I was that badass... I can barely do C, let alone asm. I imagine attempting those now would only make the system less secure! haha. I'm just getting started with Arduino. I think I'll keep it simple for now :)
I do plan to learn them (c/asm) some day, then I will be able to make the system even more secure. Thanks for the tips!
But don't you think he has a point if he's talking about a simple, single-purpose device, say an Arduino, that can't connect to the internet, but allows keyboard entry of plaintext, then encrypts it and sends the ciphertext to a PC via audio?
In which case I think there are some advantages to using audio; if the device used wifi / ethernet / USB then you have to trust a lot of code to be exploit-free:
- ethernet driver code
- wifi chip firmware
- kernel network code
- USB drivers
Whereas if he's written his own simple bytes-to-audio converter in a few hundred lines of code that can be audited, then I can be more easily convinced that it's not possible for an adversary to remotely install a keylogger or extract keys from my little encryption device.
One can get many of the properties of an air-gapped system by compartmentalizing one's activities: only playing media in the media-vm, only surfing the web in the web-vm, and only doing banking and high-security stuff in the security-vm. Many of these systems wouldn't need persistent storage. And each one has different requirements; for the VM doing banking, one could remove the root certs for everything except the sites you care about. Each VM could be on its own VPN, and for the security-vm one could choose a network exit closest to the desired endpoint.
Copying media files from a guest to the host would compromise the air-gapped machine. The host should not do any general-purpose computing, nor should there be an account with special administrator privileges.
Look at https://www.qubes-os.org/
But yes, in practice it is often useful to move data to and from it. In a "perfect" airgap this would probably mean you typing and reading the screen. However, this isn't the most convenient, and the more things automatically accessing the machine, the less security you will have.
If you are going to connect it, I wouldn't bother making your own system and would just use something standard (like ethernet), but then you have a firewalled system rather than an airgapped one.
But I've noticed that not all my friends are very good at typing, so an automated solution is more user friendly... and a security solution that actually gets used is more secure than one that doesn't :)
The only completely secure airgapped system is the one that's never powered on ;) (and kept physically safe).
If you want a persistent, networked connection then just use a serial cable, but then... you're not airgapped anymore.
"Broadcast any URL to computers within earshot.
Google Tone turns on your computer's microphone (while the extension is on) and uses your computer's speakers to exchange URLs with nearby computers connected to the Internet. You can use Google Tone to send the URL for any web page, including news stories, pictures, documents, blog posts, products, YouTube videos, recipes—even search results. Any computer within earshot (including over a phone or Hangout) that also has the Google Tone extension installed and turned ON can receive a Google Tone notification."
Do you use OTP? I've done some little experiments.
I'm surprised at the lack of interest in the only form of encryption that is literally unbreakable, in this age of surveillance paranoia.
2. What to do with the files? You'd need to keep them as long as you need to use them. In my scenario they would be on an SD card in the Raspberry Pi. Afterwards, there are many creative ways to destroy them :) I think an advantage of SD cards over HDDs is that they are small enough to reasonably melt.
3. Key exchange: exchange the key on physical media, in person. Ensure the key does not come into contact with a networked computer (ask your friend nicely) and keep it away from any untrusted USB devices.
4. Quality key material: Hardware random number generator (also done on an airgapped etc machine).
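The encryption step itself is just a byte-wise XOR, which is why OTP is so easy to audit. A minimal sketch, with `os.urandom` standing in for the hardware RNG in item 4 (this is an illustration, not the poster's implementation):

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the matching key byte.
    The key must be truly random, at least as long as the
    message, and never reused, or all OTP guarantees vanish."""
    if len(key) < len(plaintext):
        raise ValueError("key must be at least as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

# os.urandom stands in for the hardware RNG mentioned above
key = os.urandom(32)
msg = b"meet at the usual place"
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```

In practice the hard part is everything around this function: generating, distributing, tracking, and destroying the key material, exactly as the numbered list above describes.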
I think I've covered everything. The only thing that crypto people on IRC could really complain about (it seems they really don't like OTP?) was integrity: an attacker could modify the message if they guessed parts of it.
I'm still figuring out how that would work (they assumed it would be used for a standard protocol with something like "From: email@example.com" at the start of every message).
Now I'm just trying to make it so that anyone could set this up, which is turning out to be the trickiest step.
For the simplest form of encryption there sure aren't a lot of implementations out there...
TEMPEST and DPA are other things I didn't consider in our thought experiment, but if I really wanted to be thorough, I would have. (I suspect there's very little signal for either in the OTP scheme).
I think the key exchange (sneakernet) is what makes the OTP approach unwieldy. If the source of randomness is good, and keys are not reused, in theory, it's the highest quality system out there.
A microSD card is small enough to conceal inside of something else. I'm thinking of some kind of packaging where you could easily tell if it has been opened, and it would be impossible to re-seal perfectly.
It does not matter if the key is intercepted, as long as the recipient knows this, and does not use the key.
As far as TEMPEST goes, I think at the point the adversary is physically near you, you've got bigger things than encryption to worry about.
You could wrap the raspberry pi's case in aluminium foil. I'm not sure if the usb power cable leaks any signals: wrap it too for good measure ;)
Because as long as encryption that has been broken (or systems that have been compromised) is used, what you are basically doing is giving Mr. V a receipt that you have made a transaction...