This guy is all about how much of a threat the expansion of Chinese hardware is but doesn't say a word about the same being true for American hardware. Chinese networking hardware may contain backdoors, American hardware has been confirmed to contain backdoors over and over again.
Re: the iPhone: when any country has the golden keys, every country has the golden keys. How hard is it to get that through your head? The moment the US demands golden keys, China is going to demand its own set of golden keys, followed by Saudi Arabia and every other foreign adversary. These keys are going to be used the same way the US is going to use them (to spy on allies and enemies) as well as for seeking out minorities and problematic citizens.
As a foreigner, if something doesn't pass the US Senate of all bodies because of ethics or surveillance problems, that's really saying something. These are the people that brought us laws like the PATRIOT Act and the CLOUD Act, systematically expanding the reach of US law enforcement across borders. The EU, supposedly an ally, had to negotiate a hasty bandage of a treaty, likely invalid or soon to be overturned, just to make it legal to store personal data on US company servers. Is this what the US wants to become? A risk to its own allies?
Forcing weak encryption is not defence, it's offence. It's hard to believe that people like these aren't jealous of the spying network China has set up.
Backdoors have been revealed a few times in US products, but US companies have learned their lesson and appear to be pushing back. 9/11 is almost a decade in the past and the Snowden revelations embarrassed several companies. Things like warrant canaries are pretty common and companies like Apple have been publicly fighting government attempts to access Apple devices in court.
One thing the US has going for it is that, unlike in some countries, companies are more scared of their customers than of the government.
Every three or four months Cisco or Juniper get caught having introduced a new backdoor. Hardcoded admin passwords, full system access backdoors following a specific port knocking sequence, you name it. Sure, some companies have started pushing back but the companies that produce critical network infrastructure can't be trusted.
A warrant canary is utterly useless as a defense.
Any secret legal order to alter IT systems (the specific threat model it is most often suggested for) can logically also include an order to maintain a fake warrant canary.
Part of the theory of a warrant canary is that compelled speech (and in particular a compelled lie) may be easier to challenge than suppressed speech. While that isn't definitive, there's some jurisprudence to back that theory. If you have a warrant canary, you should be prepared to challenge any such order in court and use that as the defense.
An important detail in the US jurisdiction, certainly.
On a practical basis, I cannot evaluate the jurisprudence involved, and I would assume the number of people who credibly can is very small, especially in the context of "secret courts for national security reasons".
A useful test would be if any of those few had demonstrated a personal risk using this as a defense and succeeded.
The rest of us can only guess at the risk based on the reputation of the entities involved.
How so? Censorship is explicitly forbidden by the US constitution, and even so it happens (in this case, "in the name of national security" or whatever).
The current interpretation says there are certain reasonable restrictions on the 1st for the public safety. Compelled speech isn't accepted as constitutional.
Remember, the constitution only really, effectively, means whatever the current Supreme Court says it means. And really, I don't think anyone wants the 100% literal 'shall make no law' interpretation of free speech; that would throw out any kind of labeling laws for starters: companies would have no government compulsion to accurately label drugs or food products, for example.
Is ordering silence and secretly seizing control of the publication technology (i.e. the website), then maintaining a false warrant canary, a way around compelled speech?
If so, then regular live press conferences/video appearances would be the only practical implementation method.
If they say nothing and exit, then the canary is dead.
It could be; I'm not a lawyer. That seizure would be something you could fight in court too, on two fronts: the seizure of property itself, and the question of what separates the government taking over your means of communication and pretending to be you from directly compelled speech.
You threaten that and they say “go pound sand in the courts”. And even if you “win” in the courts you still lose because “the process is the punishment”.
(unless I'm missing the point and you are referring to some specific fall-out from that event that had particular significance at a point ten years after?)
It would be baseless if anyone had a single story of any Chinese company trying to go against the wishes of the Chinese government and not being either immediately arrested and "disappeared" or having the company nationalized by the government.
His whole point is to not weaken encryption but to strengthen it, despite the social costs that will likely follow. Have you read the conclusion of the article?
That is technically incorrect. As I understand it, the iPhones are generic (global), but the iCloud service is what the Chinese authorities have full snooping rights to.
Apple doesn't cut a custom firmware with special/additional golden keys or anything of that nature, as I understand it, for anyone.
The iPhone itself is safe in this case, assuming you don't use iCloud in China.
But most people use iCloud, right? Apple forbids you from using competing full phone cloud backup services. The built in backup software continuously sends iCloud copies of most user data on the phone.
This is not correct. Apple's page explaining their iCloud encryption states that they use end-to-end encryption only for a few specific types of data, and only if you enable two factor authentication. iMessage in particular explicitly sends its encryption keys to iCloud (so that iCloud can unlock your messages if you lose your device, but of course this means iCloud can unlock your messages for any other reason without your knowledge). https://support.apple.com/en-us/HT202303
Well, China issues aside, that should be the case in every country -- such services should be local, governed by local laws, and not giving a free-pass to foreign countries/governments (foreign as to the users of the service) to enforce their laws/surveillance.
E.g. I would like the EU iCloud to be hosted in EU.
> that should be the case in every country -- such services should be local, governed by local laws
So a cloud service that's available worldwide should store and process data locally in every country (or perhaps even every sub-jurisdiction of every country) just so that that jurisdiction can serve warrants to it and others cannot? Or worse, people in multiple countries should have to use different services and hope those services interoperate with each other, just so that they can "shop local"?
The Internet does not and should not work that way. If we're going to go to the trouble of building interoperable, federated services, it should be to put them in the control of individual users, not in the control of governments.
>So a cloud service that's available worldwide should store and process data locally in every country (or perhaps even every sub-jurisdiction of every country) just so that that jurisdiction can serve warrants to it and others cannot?
Simply put, yes.
You are maybe concerned with the technical issues / problems to the service provider.
I'm more concerned with the decentralization, surveillance, and data sovereignty.
>Or worse, people in multiple countries should have to use different services and hope those services interoperate with each other, just so that they can "shop local"?
Yes. In fact, this decentralized nature, and resilience, was an early vision of the internet itself, and not just some hippie dream, even in its army-research origins... And of course all the way to ideas such as XMPP, Diaspora, and so on.
Nobody dreamed a Facebook silo somewhere gathering all the world's data...
>The Internet does not and should not work that way.
That it does not, it's obvious. That it should not, less so.
(And of course, if one's country is the one doing the data-gathering/policing of data for the rest of the world, it's "naturally" all A-OK to them that it is so.)
>If we're going to go to the trouble of building interoperable, federated services, it should be to put them in the control of individual users, not in the control of governments.
Notice how I didn't propose putting them "in control of governments".
They already are in control of at least one government (the one of the country of Facebook, Google, MS, Apple, Twitter, etc).
So what I proposed is already decentralized: putting each user's data under democratic control in the places where they themselves are (and vote, have rights, etc.), as opposed to a central place where they don't vote, have no rights or recourse as foreigners, and are "fair game" to the whims of both the service-origin government and the service company.
If we can go all the way to control by individual users, even better. But stopping the control of a single foreign government is already a good first step.
(Exceptions could be made for non-democratic countries -- no reason to give control of a service's local data to a dictatorship).
> You are maybe concerned with the technical issues / problems to the service provider.
Not just that (though I certainly don't consider it reasonable to expect a service to have thousands of servers in thousands of jurisdictions and deal with thousands of legal systems; frankly, I want services to expose themselves to as few jurisdictions as possible).
I'm concerned about the usefulness of the service to its users. As a user of a service, I will not accept partitioned and walled-off services where I cannot interact with people elsewhere in the world. That's my choice, and the choice of people and projects I collaborate with, and I choose to use services that allow me to collaborate with those people and projects.
> And of course all the way to ideas such as XMPP, Diaspora, and so on.
I did specifically say that:
> If we're going to go to the trouble of building interoperable, federated services, it should be to put them in the control of individual users, not in the control of governments.
If you have the capability of interoperability and federation, then where you host your data should have nothing to do with jurisdiction, and everything to do with who wants to store and control the data.
Well, I don't propose or expect companies to volunteer doing this.
My point is that the EU (for one) should mandate this, and if service providers like FB etc. don't like it, they can skip the 500m market -- and just be careful not to let the door hit them on their way out...
>Not just that (though I certainly don't consider it reasonable to expect a service to have thousands of servers in thousands of jurisdictions and deal with thousands of legal systems; frankly, I want services to expose themselves to as few jurisdictions as possible).
As long as it's your jurisdiction? (assuming you're in the US, since it says on your HN profile that you work for Intel).
Or that's a happy accident (for you) that is not really relevant to your point, but others should be fine with?
I'd prefer the services I use to be under the control of my country's laws and my democratic vote -- not under what some third country dictates and controls.
Besides, hyperbole much?
There aren't "thousands of jurisdictions and legal systems". At worst there are 150 or so, one per country. And some could band together and accept a single country as the host and set their common rules (like the EU could do for EU member states).
Major services already have tons of CDN servers globally, even in small countries.
And if there was a mandate, there could easily be an infrastructure and common services to deploy to span the globe (e.g. turn-key Amazon provision for sharding your data into multiple data centers per jurisdiction).
It doesn't even need to be all players, could be mandated on some size and above -- and surely Google, Facebook, Apple, etc scale.
No, not at all. I expect it to be the jurisdiction of whoever runs the service. That jurisdiction will necessarily have control over the authors of the service; there's no getting around that. (The authors can try to build the service with themselves as a threat model, which few services do, and even then that may not work.) Unless you want to mandate that people can't use services from outside their country (and enforce that with a country-wide firewall blocking access to the real Internet), then you're never going to get around that.
Also, you seem to be treating "store and use data locally" as a thing that protects the citizens of a country, rather than a thing that threatens the citizens of a country. Many countries want data stored locally so that they can seize it, and want services hosted locally so that they can block those services or make them consistent with the country's propaganda.
Also, you're assuming that data is nicely partitioned by user. For many useful services, it isn't. Just for the simplest case, consider collaboratively-edited works by multiple users.
> There aren't "thousands of jurisdictions and legal systems".
Tell that to states and equivalent sub-jurisdictions within countries. Tell that to many large cities and their local regulations. Thousands is if anything an underestimate.
> there could easily be an infrastructure and common services to deploy to span the globe
That sounds like a great way to introduce security holes and a vastly expanded threat model.
Also, to comment on something you edited into a previous comment:
> (Exceptions could be made for non-democratic countries -- no reason to give control of a service's local data to a dictatorship).
Who gets to decide that? Obviously not the countries themselves. That just leaves the people building the service and the people deciding which services to use; those are the same parties who already get to decide that today.
This is fundamentally a fight about the autonomy of the human mind, and what the government can command you to do with your own brain.
If you and a cohort were the last survivors of a dying tribe, with your own special language that no one understood, and you criminally conspired with them via written word, the government could not compel you to translate your messages just because they cannot understand them. That's the 5th Amendment in action, as you'd be creating testimony against yourself. The prosecution will have to pin you on some other charge, maybe a crime that you actually physically committed.
It's the same exact thing with encryption. It's a language that the government cannot understand, and cannot compel you to translate.
Now, to be fair, there is some wishy-washiness here, because the government desires not only compelled decryption but also to force companies to wiretap their own users. The latter is not strictly a 5th Amendment violation - but it's the principle of the matter.
One day we'll have the ability to interface directly with our minds, and we need to draw the "stay the fuck out" line pretty damned clearly in the sand before that happens.
What you say is close but not quite accurate. What you describe is what used to happen before mobile phones: people called each other on pay phones and used coded messages. You had to infiltrate these organizations to catch them. The same solution works today; it is just not as manpower- and time-efficient as backdoors.
The reason this is a fight now is that you can target the chain. To use your analogy: it is more like I know someone in that tribe, I send them a message in English, and they translate it and pass it along. On the other side is someone in the tribe who translates it back into English for the recipient. The message is never broken in its encoded form, only on the on/off ramps. Pressure can be applied to Apple or WhatsApp to provide access to the data pre/post encoding. This is what the Australian law demands (along with the inability to disclose the backdoor). The point remains the same: it is only as strong as its weakest link.
How can we know that WhatsApp hasn't been patched with a back door? What about the operating system? What about the secure enclave chip itself? The only hope we have here is that companies will say no and/or whistleblowers will speak out.
Well, you know, the 5th Amendment only works in the USA. In most other parts of the world, speech is regulated. I'm not sure whether that is a good thing or a bad thing. I guess it depends on the regulations.
> If you and a cohort were the last survivors of a dying tribe, with your own special language that no one understood, and you criminally conspired with them via written word, the government could not compel you to translate your messages just because they cannot understand them. That's the 5th Amendment in action, as you'd be creating testimony against yourself.
Ignoring the fact that it seems, uh, unwise to ask a defendant to provide an unverifiable translation of self-incriminating evidence (which is different from most encryption scenarios)...
I imagine that if this scenario were actually common and law enforcement was complaining about it, there would be a political debate, much like this one, except including "should we reinterpret or amend the 5th amendment to allow a court to order them to translate it", and a lot of people would agree with them.
Now of course many criminals in the past have written down coded notes that have been seized as evidence, but they're often somewhat interpretable or inferences can be drawn from them even if the defendant doesn't cooperate, which is not the same as your hypothetical, or something that's been AES or RSA encrypted.
I do take your point about how terrible things are going to get once we have brain-computer interfaces, which is a major reason why I think we shouldn't build them.
That's the problem with these rights: they're always up for interpretation. And the interpretation can be anything a handful of politically appointed people want.
That is also not a solvable problem. Someone's got to do the interpreting, and if your perfect legal system is so perfect as to not permit the occasional reinterpretation as circumstances and opinions change, it'll just collapse in on itself and be replaced by one that does.
By "cannot compel you to translate," you mean "must not compel" or "should not compel," because an entity that controls 11 aircraft carriers can compel you to translate whatever the hell they want you to.
What do 11 carrier strike groups have to do with the 5th Amendment? The former exist to project power beyond the borders of the United States. The latter is to protect you from an abusive government domestically.
Also to take it further, technically, the US Constitution does not apply beyond our borders (like any other US law, absent a bilateral treaty).
I fail to see the connection you are trying to make.
I'm confused at the assumption that you can prevent serious organized criminals from having access to strong cryptography simply by backdooring common communications apps.
The genie is out of the bottle: Powerful criminal enterprises will have no difficulty hiring people to build overlay tools that they can run inside their backdoored comms that will provide adequate (at least, if not effectively unbreakable) cryptography.
Even if we ignore the considerable First Amendment barriers and imagine an effort to outright outlaw strong crypto, steganography has become very strong. And even if it's not quite sufficient, finding out your comms were being monitored via a crypto-use charge is way better than having the attacker learn all your info.
In light of this, I find it difficult to see the government's position as anything other than seeking to widen the infrastructure for wholesale surveillance of the general public, with claims of terrorist groups as a dishonest cover for the demands.
The old adage "If guns are outlawed, then only outlaws will have guns"-- has serious limits owing to the nature of guns as physical objects. The sentiment applies a million fold for cryptography, which is fundamentally a collection of ideas with a zero marginal cost of reproduction or execution, mass-less, volume-less, capable of being distributed around the world in a fraction of a second and implausible to silence even with great leap forward mass murder.
Actions to block access to encryption would severely degrade the security and privacy of the general public; on this point I agree with the author. But would it prevent pedophiles and terrorists from having access to encryption? Not likely.
Would pedophiles and terrorists occasionally screw up and use insecure means... sure, but they already do that today.
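To make the steganography point concrete, here is a toy sketch (my own illustration, not anything from the article): hiding one secret bit per cover byte in the least significant bit. Real steganographic schemes are far more careful about statistical detectability, but the basic principle fits in a screenful of Python.

```python
def embed(cover: bytearray, secret: bytes) -> bytearray:
    """Store each bit of `secret` in the LSB of successive cover bytes."""
    out = bytearray(cover)
    for i, byte in enumerate(secret):
        for bit in range(8):
            pos = i * 8 + bit
            # Clear the low bit, then set it to the secret bit.
            out[pos] = (out[pos] & 0xFE) | ((byte >> bit) & 1)
    return out

def extract(cover: bytes, length: int) -> bytes:
    """Recover `length` bytes previously hidden with embed()."""
    secret = bytearray(length)
    for i in range(length):
        for bit in range(8):
            secret[i] |= (cover[i * 8 + bit] & 1) << bit
    return bytes(secret)

cover = bytearray(range(256))   # stand-in for image/audio data
stego = embed(cover, b"hidden")
assert extract(stego, 6) == b"hidden"
```

Flipping only the low bit of image or audio samples leaves the cover data visually and audibly unchanged, which is exactly why outlawing crypto doesn't stop hidden communication.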
The NIBOR (Nothing Is Beyond Our Reach) proposal addresses the issue of cryptography inside backdoored devices by installing a virtual machine underneath.
I still want an answer from these people. How was crime solved before end-to-end encrypted mobile apps existed? Maybe it makes it easier but maybe it doesn't. These devices connect to WiFi and cell networks and have GPS. It seems like coded messages at pay phones were even more unbreakable.
> How was crime solved before end-to-end encrypted mobile apps existed?
One meaningful investigative tool was (and still is) the wiretap, where access to the physical channel gives unencrypted access to the contents. Phone encryption is possible, but was generally used rarely.
It sounds to me as if investigators want to preserve some equivalent of this tool, although its existence in the first place was more of a technical limitation than a legal one.
"Going dark" is a good thing... it limits the state's ability to abuse its enormous power (which is already routinely abused, as any student of FBI history or current events should know).
However, "going dark" is not what's happening. Only recently has government had this much power to examine, catalog, and track the masses. Instead of debating the ethics of encryption (and trying to outlaw math), we should be debating how best to curtail and audit government powers to prevent their abuse.
The problem was that the Internet, in its romantic period, "went light" first. "Going dark" is just a return to balance, once a critical mass of knowledge was reached about the scale of the "went light" event.
I don't think the threats from people "going dark" justify weakening encryption. The potential for abuse is too real and there are already a lot of instances when governmental actors have acted in bad faith.
I would consider the text manipulative, with its references to a terrorist bombing. State actors suppressing dissent aren't mentioned at all.
It is not a black and white issue, but in most parts of the world, people would benefit from strengthening encryption even further. And I don't see that changing anytime soon.
I was thinking about this subject today. If the US has laws that prohibit legitimate encryption and other countries do not... Is it fair to suggest that we are making ourselves easy targets to foreign nations who wish to practice privacy violations?
I don’t think about encryption as a method to protect myself from the US government (only) but from any entity that is local or foreign. I think everyone should have the right to privacy. To me this is more important than the right to guns. And no not because I want to have illegal content, I just want to keep my information secure from people who don’t have a right to my privacy.
What am I missing? Should we give up our rights because there are people who break the law? Aren’t they going to continue to break the law anyways? Including using encryption.
This is like CAPTCHAs. Punishing the majority for the crimes of a very tiny minority.
Is this really the only way to catch people who do illegal things or is this just the laziest way to do it?
Not only that, foreign businesses are less likely to trust anything stored with or touched by US companies. And that's not even getting into regulated industries where it might become illegal to do business.
> But going dark is broader than encryption; it involves the decreasing ability of the government to conduct effective lawful surveillance for many technical reasons, including but not limited to the widespread adoption of encryption technology.
The 'going dark' boogeyman - how predictable. Only in the mind of a spy agency shill does the rapidly growing number of surveillance cameras, facial recognition, flying surveillance drones with high-resolution cameras over cities, and interception of all communication metadata (if not the content), equate to decreasing ability. They won't be happy until we are stripped of even the last tiny scrap of privacy.
Have you read the whole (admittedly rather long) article? He has a pretty nuanced view of it imo, especially at the end and explicitly refers to the "golden age of surveillance"
He says "some argue that society is in a golden age of surveillance" exactly once, but repeatedly presents 'going dark' as fact. That the rest of his view is nuanced doesn't make that aspect any less of a lie.
>But going dark is broader than encryption; it involves the decreasing ability of the government to conduct effective lawful surveillance
That wasn't an ability of government in 1900 or 1800, so why should we worry that this new-fangled ability has been decreasing lately?
Who said it should be an ability of government, much less an eternal one, rather than a temporary, accidental capability due to a few technical developments that allowed it for a short span of time?
While I agree with you, I can also appreciate that intelligence/law enforcement agencies are not keen to go back to the alternatives. HUMINT is a nasty (in the moral sense), dangerous, and expensive enterprise. Would you rather train for a few years, then live a fake life with the constant threat of gruesome death, lying to everyone you know - possibly even your fake wife and kids or rather just tap a few phones/emails?
It's no wonder that most agencies have pretty much entirely dropped their expertise on this as soon as the alternative showed up. I don't think they are geared up to get back into that game.
>HUMINT is a nasty (in the moral sense), dangerous, and expensive enterprise.
Which they use all the time still, so that's not a real difference. It's not xor with digital surveillance.
Besides what you describe (double life, death, etc) would be a problem for the spy, not the general population. I could not care less about the spies.
But I wasn't talking about spying so much as about government/police surveillance. If they have to deploy humans to spy on some drug dealer or suspected murderer or whatever, that's fine. And the fact that they're humans makes it more costly to mass-deploy (and thus serves as a natural check and balance).
Of course I'd rather have my privacy even if it means thousands of CIA agents get tortured and killed. They're the ones who signed up for that job, a job that is likely unnecessary. And even the necessary spying would in no way justify that I and everyone else give up our privacy. Hundreds of thousands of Americans have already died so we could have that privacy in the first place. A few thousand more dying to keep it is a miniscule price to pay at this point, a negligible price. So are a few crimes being committed that could potentially have been stopped by back doors.
Because this article is an explanation to the anti-encryption crowd, the author takes his time to lay out their arguments as a token of understanding their position. Still, I want to bitch about some of them.
> In my work at the FBI, I encountered directly how encryption makes it harder for law enforcement to detect, prevent and solve certain types of crime in specific instances.
Encrypted data is a known unknown. This doesn't work as an argument without speculating on the scope of unknown unknowns.
> I’m confident that this problem can be addressed from a technical perspective. In most cases, it’s just software, and software can be rewritten.
That works both ways. (Source code for effective encryption is less text than this blog piece.)
> the United States has not experienced a terrorist or other attack of sufficient magnitude where encryption clearly played a key role in preventing law enforcement from thwarting it
And I can't think of any realistic scenario in which it would.
The window of cause-and-effect is extremely small. The attack needs to hinge on the tools that the US has authority over. i.e. Why would Osama use WhatsApp on an iPhone in the first place?
> inherently vulnerable network of networks
This is the wrong way around. It's a 'network of inherently vulnerable networks'.
Which is the safest option. ( As the author later notes but doesn't reflect on )
Despite the length of this article, I see no plan for the obvious rebuttal regarding terrorists: if you make WhatsApp or some other app not end-to-end encrypted, the terrorists will make their own app. It's not difficult to generate your own symmetric key and use openssl to encrypt and decrypt information sent with that key. Sure, you can force Apple to reveal the contents of the hard drive, but what's the difference whether Apple is the one generating the ciphertext or the terrorists' own app is? The difference is that the consumers' protection has backdoors, but the terrorists' protection does not.
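To put a number on how little effort this takes: here is a sketch of a pre-shared-key scheme in plain Python (my own illustration; the hand-rolled SHA-256 keystream is for demonstration only, since in practice a rogue app would call a vetted AES implementation, e.g. via openssl, as mentioned above).

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR `data` against a SHA-256-in-counter-mode keystream.

    Symmetric: the same call encrypts and decrypts.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        # Derive the next keystream block from key, per-message nonce, counter.
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

key = b"pre-shared secret exchanged offline"
ct = keystream_xor(key, b"msg-0001", b"the backdoored app never sees plaintext")
assert keystream_xor(key, b"msg-0001", ct) == b"the backdoored app never sees plaintext"
```

If the plaintext is encrypted like this before it ever reaches a backdoored messaging app, the backdoor only ever sees ciphertext, which is the whole point.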
Apple could, for example, switch out the version of an app on my phone with a backdoored version of the same app. The backdoored version could upload a copy of every message I send to law enforcement. It wouldn’t even be difficult - just a keylogger would do the trick, or the OS could periodically take screenshots of the app while it is open and send them somewhere nefarious. Those latter techniques would also work with web apps. If this sort of thing was done selectively, I doubt anyone would notice.
If the government passes laws requiring backdoors in WhatsApp, terrorists can use another app. If the government passes laws requiring Apple and Google to add backdoors to their phones, it will be almost impossible to avoid government surveillance. (And if the approach of the Australian government is anything to go by, we won’t even be able to tell this has happened. Instead they’ll pass a law demanding backdoors on request, then legally forbid Apple et al from making any of the details public.)
This is one of the reasons I buy phones from Apple. Unlike Google, Samsung, Huawei, etc they have at least made their opposition to government meddling very clear. And as this article points out in detail, they have put their money where their mouth is. If some of the sales price of my iPhone pays for Apple lawyers to fight for my digital rights, that is a tax I pay willingly.
Ironically, using a corporate store for app deployment could become less safe than downloading something through your browser at that point.
I give Apple the benefit of the doubt and appreciate their stance, but with a legislative basis they too would have to comply.
I also think that professional criminals wouldn't use their phones in the first place if it is tied to their ID in any way. Sure, you might catch some of the riff-raff, but I think strong encryption is more beneficial for any form of security in the broader picture.
I don't quite buy the "switch the app" argument, as code signing easily fixes that. Intercepting at a higher level, such as a keylogger or screenshotter, would work, but then you'd apply the same argument: the terrorists would just go one level higher. In fact they could go all the way to open-source processors running their own code doing all the same things. Yes, it would delay them a few years to get there, but once there, there would be little the US could do to stop it. At the network level, we simply don't sign hardware, so without some major refactoring of the core system, there's little to do there. Still, at every level of this back-and-forth, consumers have to give up more and more privacy to allow the government to step in and get the same level of information about terrorists they had before, until the terrorists eventually win (or we completely lose privacy and trillions of dollars in infrastructure). So why play the game?
> In fact they could go all the way to open source processors running their own code doing all the same things. Yes, it would delay them a few years to get there, but once there there would be little the US could do to stop it.
Well, as long as you can't easily build a private fab capable of producing such processors, good luck with that. There are not that many fabs out there.
FPGAs aren't any better, since you can't even make most of them work without large quantities of proprietary blobs.
If your goal is to secure the entire internet via SSL, or to get people to use PGP-signed emails in the mass market, then the tremendous technical and cultural hurdles in place make a 'truly secure' implementation that gets widely accepted a near impossibility.
But if your goal is to secure the communication between trained people in a 'terrorist cell' or other small group, then that is pretty easy. It's really as hard as you want it to be.
For example, any decent programmer could write a program that utilizes a 'one-time pad' of random data to encrypt communication. A USB flash drive holding the program and a store of randomly generated data is all that would need to be exchanged ahead of time. The biggest challenge involved is making absolutely sure that every section of the one-time pad is used only once and is destroyed afterwards. You don't need to know anything about encryption, math, or protocol details to make something like that work.
And if correctly done it'll be impossible to crack.
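A minimal sketch of the scheme described above (names and the 64-byte pad size are illustrative): the pad comes from a cryptographic RNG, is XORed with the message, and must never be reused.

```python
import os

def make_pad(nbytes: int) -> bytes:
    # The pad must come from a cryptographically strong RNG and live only
    # on the exchanged drive; every byte is used at most once.
    return os.urandom(nbytes)

def otp_xor(message: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    if len(pad) < len(message):
        raise ValueError("pad exhausted -- never reuse pad bytes")
    return bytes(m ^ k for m, k in zip(message, pad))

pad = make_pad(64)
ciphertext = otp_xor(b"meet at dawn", pad)
assert otp_xor(ciphertext, pad) == b"meet at dawn"
```

After a section of the pad is consumed it has to be destroyed; reusing even part of it is what breaks the unconditional security.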
And we are dealing with threats and adversaries that are much more sophisticated than that. Even small terrorist groups are more often than not state-funded one way or another. Foreign threats are sophisticated enough to create their own encryption. Domestic threats and cartels are sophisticated enough to hire competent programmers to do the work for them. Pedophiles are not idiots either; many of them are talented technical people who will have no problem avoiding government backdoors in commercial software and hardware products.
The threat Americans face via encryption isn't that encryption is too strong. It's that Americans don't use it enough and don't use it properly.
I find the idea that Americans are under threat due to a lack of backdoors a fallacious one. Legislating that backdoors be put in place only increases threats.
The only thing that laws like that would accomplish is to make it illegal for Americans to be secure.
Take away the legal ability to have secure software and the only people who will have secure software are criminals. Criminalizing good software is never going to be productive.
USB stick? The entertainment industry provides an excellent source for the distribution of one-time pads: just agree on some particular stream/CD/DVD/etc. ("number 27 on this week's top 40") and use the LSBs in some agreed order.
There are recurring patterns in a video or audio stream. More often than not, these coincide with recurring patterns in the message, revealing information. For the sake of an example, take some pages of a novel as the one-time pad. There will be statistical variations in the ciphertext (not all symbols occur with equal frequency), where the combination occurring most often will correspond to an "e" in the plain text and an "e" in the one-time pad, because both occur most often in the English language. You can calculate the probabilities for each combination and thus deduce the plain text, provided it is long enough. For a one-time pad to be secure, it must be truly random (or at least cryptographically pseudo-random).
As a way out you can agree on a seed for a secure pseudo-random algorithm, but that's commonly called a password or pre-shared secret. In fact, it's more secure to just use the string 'The Nth 4096 bits from this week's mod(N,100) top video on YouTube' as a pre-shared secret.
Maybe, but I wouldn't want to rely on it. I don't know anything about real world music encodings but for the analog signal you have zero-crossings at fairly regular intervals.
All public encryption algorithms have backdoors in their implementation and sometimes (as with Elliptic Curve standards from NIST adopted in the browser) in their spec.
You might get lucky if you have a cryptographer design you a custom algorithm, but that's mostly security through obscurity, and if a state actor really wanted to defeat it they may just kidnap the cryptographer at gunpoint and have them reveal how.
Cryptography as a weapon against state actors is NO LESS BRAINLESS than the right to bear arms to protect against the US gov. Just completely useless, if not brain dead.
Although you're right that nothing but OTPs gives you unconditional guarantees, and that cryptography is still a black art (with lots of unrealistic conditional proofs), good cryptography can be completely open and will be no less secure if it is published, so there is really no need to kidnap the cryptographer.
There is also good reason to assume that if e.g. you make your own Feistel cipher out of existing cryptographic primitives without caring too much about performance, then it will be secure enough against state actors.
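As a sketch of the kind of construction meant here (illustrative only, not a vetted design): a toy Feistel network whose round function is built from an existing primitive, HMAC-SHA256. Decryption is just the encryption rounds run in reverse.

```python
import hashlib
import hmac

def _round_fn(key: bytes, half: bytes, rnd: int) -> bytes:
    # Round function from an off-the-shelf primitive: HMAC-SHA256,
    # keyed per round, truncated to the 16-byte half-block size.
    return hmac.new(key + bytes([rnd]), half, hashlib.sha256).digest()[:16]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes, key: bytes, rounds) -> bytes:
    # 32-byte block split into 16-byte halves; swapping the halves on output
    # lets the same routine decrypt when the rounds are fed in reverse order.
    L, R = block[:16], block[16:]
    for r in rounds:
        L, R = R, _xor(L, _round_fn(key, R, r))
    return R + L

key = b"pre-shared secret key material!!"
plaintext = bytes(range(32))
ct = feistel(plaintext, key, range(8))
assert feistel(ct, key, reversed(range(8))) == plaintext
```

Whether an ad-hoc design like this actually resists a state-level cryptanalyst is exactly the claim a cryptographer would want to vet; the point is only that the building blocks are public and cheap to assemble.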
Nowadays side channel attacks seem to be the rule, and there is no way to secure the endpoints without developing the whole technology in-house - which is essentially impossible even for organized crime. So the whole discussion is essentially moot, the FBI can buy 0-day exploits on the black market or develop their own like everyone else.
>All public encryption algorithms have backdoors in their implementation and sometimes (as with Elliptic Curve standards from NIST adopted in the browser) in their spec
Encryption is not trivial to implement right, but it is also not impossible to defend against reasonable threat models. You make claims without giving any proof.
The whole point of a one time pad is that it is never used again and destroyed after use. Even the sick fucks in the CIA don't think they can get a key sequence from you with torture or threats after it has already been stomped and put in a microwave.
Whilst I strongly disagree with some of his viewpoints, we do agree on at least one major point.
> Many would disagree strongly with the attorney general’s assessment that an acceptable technical solution to law enforcement’s problem—one that appropriately balances all of the equities at issue—actually exists.
> But, for the reasons discussed above, public safety officials should also become among the strongest supporters of widely available strong encryption.
Encryption _should_ continue to be used, and used more effectively across all communications. The net benefits of encryption outweigh the risks that these intelligence agencies complain about.
I may think of it as absolutely necessary, and Baker may think of it as a necessary evil, but if we two from very different worlds can both agree it is necessary, then it should be fully embraced.
> The problem is that there is no law that clearly empowers governmental actors to obtain court orders to compel third parties (such as equipment manufacturers and service providers) to configure their systems to allow the government to obtain the plain text
That kinda sums up the article, but the US Gov always finds ways to compel anyone to do anything. Raw power. The takeaway is to be your own equipment manufacturer and service provider, as in cheap mini x86 boxes running open-source daemons, over proxychains and VPNs. Rotate equipment, hacked wifi APs, and identities monthly, use Signal on iOS, and hide in the crowd, because if you're on the watch list, the game is over. Cyberpunk at its best.
The flaw in these overzealous mind-policemen's arguments is their claim that there will be no other way to tell what someone is doing without reading their texts and phone data live.
A beat cop knows there are many "tells" about what someone is doing. Similarly, one has to commit several crimes leading up to a terrorist act. Catching those precursor crimes beforehand is an effective way of stopping the final act.
And who is going to be investigating all the people with keywords in their texts? There are not enough cops in the US to do that.
If we use the net, there's a government file with a rating on how dangerous we are.
The depressing thing is people with his views have been active in politics for decades (and not all in the political circles that currently run the government) - they're just finally getting their time in the sun, gathering together and doing whatever they want.
Even if some of this gets unraveled and the worst actors get kicked out, people like him will still be active in local and federal government trying to roll back civil freedoms. It's rough. Hopefully the events of the past few years will act as a wake-up call for people who were previously willing to ignore what was already going on.
If we make encryption illegal, only the criminals will have encryption. The criminals can just use real encryption inside of "backdoored" tech. USG can't change math.
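The "can't change math" point in a nutshell: a toy Diffie-Hellman exchange lets two parties derive a shared secret over any channel, backdoored or not. (Parameters here are deliberately tiny for readability and are NOT secure; real deployments use vetted groups such as the RFC 3526 MODP primes.)

```python
import secrets

# Toy parameters: the largest 64-bit prime, generator 2 -- demo only.
p = 0xFFFFFFFFFFFFFFC5
g = 2

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # public values: the only thing sent over the channel
B = pow(g, b, p)

# Each side combines its own secret with the other's public value and lands
# on the same shared key, which never crosses the (possibly backdoored) wire.
assert pow(B, a, p) == pow(A, b, p)
```

No law about apps or devices changes the arithmetic: anyone who can compute modular exponentiation can run this on top of a "backdoored" transport.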
tl;dr: First 5 pages to show he understands why law enforcement wants to weaken security, last 2 disagreeing because cybersecurity and nation state threats are the bigger risk.
I’m extremely skeptical about ANYTHING put out by this group, and Jim Baker in particular.
This guy was general counsel for the FBI at the same time they were abusing FISA warrants to secretly spy on Trump admins. You think they’ll stop at Trump? They’re just getting started.
We need to get back to our roots of being extremely careful about our intelligence agencies.
I know you’ll initially be turned off by the subject of the below article, but definitely read it. For the sake of our civil liberties, the leash of our intelligence agencies must be kept short.
This is all theater to give people a warm-fuzzy about the way things are.
Other than cock-blocking ISPs, what's the value in end-to-end encryption when one of those ends is a megacorporation that is A) super-friendly with the state security apparatus, and B) ready, willing, and able to sell you out to the highest bidder? E2E works great against basement-dwelling h4x0rs, not so much against people with actual power.
I know these companies will swear up and down how secure they are--but security of what? Who audits them? Last I checked, no one.
To be clear, E2E from your-own-server to your-own-server is wonderful. E2E from your google/apple/amazon surveillance device to google/apple/amazon servers is turf guarding.
Say I want to block such-and-such domain. With unencrypted DNS, I put a record in my own resolver, and problem solved. With per-app DNS over HTTPS, my infrastructure is out of the loop, and SV has total control.
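Concretely, with dnsmasq as the local resolver (the domain is a made-up example), one line sinkholes a name for every device that uses the resolver:

```conf
# /etc/dnsmasq.conf -- answer queries for this domain (and subdomains) with 0.0.0.0
address=/ads.example.com/0.0.0.0
```

An app that ships its own DNS-over-HTTPS client skips this resolver entirely, which is exactly the loss of control described above.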