Today many argue the state has a need to access such correspondence to prevent crime, but such a need is like the need of an addict: nothing good can come from it, and the people should not enable these institutions to satisfy an ever-growing demand for insight into their private lives. One must remember that democracy is founded on the belief that thoughts and words are not crimes and that everyone must be free to express themselves in public, but even more so in private correspondence. A society that mistrusts its own citizens to the point where all those who whisper to each other are called criminals, dealers, traitors or terrorists is rotten at its core.
And yet some still say: but if the state could read all private correspondence, it would be so much easier to catch criminals. And yes, it is true that these totalitarian methods are efficient in fighting street-level crime. However, for society as a whole, such methods enable a terror of the state that is a crime against humanity itself. They say "but the state will never abuse its power," and I say: it has done so countless times before. Do not trade away liberty and freedom for promises of safety made by those who profit from oppression.
I feel that disregarding the usefulness of surveillance is part of the problem. We should not be arguing that nothing good comes out of surveillance; that hands your opponent an easy strawman for a hollow victory. Because frankly, surveillance is a useful tool for law enforcement.
We need to argue instead that the moral costs and side-effects of public surveillance far outweigh its usefulness.
In this regard I recommend you look into the evidence on mass surveillance. There have been several reports on the mass surveillance programs operating since 2001, and in report after report, mass surveillance has been found not only ineffective at producing tips but also prone to tying up law enforcement resources that could have been spent on legitimate leads.
Here is a very well-sourced article referencing several FBI internal reports, a White House-appointed review group, reports from non-profits, and local police departments:
The actual, real point is that they're underestimating the downsides of surveillance, and that even if it did work, it would still not be worth it. That's the only argument that can hold, and the actual reason we're against it.
The problem is that we, the people, can never know what, if any, good is coming out of surveillance. Attorney General Barr admitted that in one of his speeches arguing for back doors in encryption. The government cannot reveal what is being discovered through surveillance without disclosing sources and methods that it (understandably) wants to keep concealed from adversaries. But without that information we and our elected representatives cannot exercise proper oversight. And without proper oversight any such capability will be abused.
Often that is far too much time (25 to 50 years in many cases, since those are the time frames for declassification of classified information) for such revelations to be useful for oversight, especially given the state of encryption since the advent of computers and the Internet.
Before computers and the Internet, it was possible to have a reasonable tradeoff between strength of encryption and the ability of law enforcement to conduct surveillance, because perfect encryption was impossible and imperfect encryption got more expensive the closer you wanted it to be to perfect. So people were already making a cost-benefit tradeoff (difficulty of breaking the encryption and obtaining private data vs. cost), and it was reasonable for the government to ask that the potential benefits of surveillance be included in the tradeoff, since that would just adjust the balance of the tradeoff, and the adjustment could be periodically reviewed based on data on past surveillance that was revealed by things like FOIA requests.
But now, with computers and the Internet, perfect encryption is cheaper than imperfect encryption. Perfect encryption is just a mathematical algorithm, and it's straightforward to put that algorithm in computer code and verify that the code correctly executes the algorithm. Imperfect encryption requires adding code to that perfect algorithm, which adds cost, and also adds a risk that wasn't there before: that whatever back doors are in the code will be exploited. So now we users, to enable surveillance by law enforcement, would not just be making a small, periodically reviewable adjustment to a tradeoff we have to make anyway. We would be adding a new tradeoff that we have no other incentive to make, and taking on a new oversight burden that is, if not impossible, at least extremely difficult to fulfill properly. That is simply not a bargain that free citizens of a free society should accept.
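To illustrate the claim that information-theoretically perfect encryption is just a simple algorithm, here is a minimal one-time pad sketch in Python. This is an illustration of the principle, not a practical scheme: the pad must be truly random, as long as the message, never reused, and exchanged securely out of band.

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding pad byte.
    # With a truly random, single-use pad of equal length, the
    # ciphertext reveals nothing about the plaintext (Shannon, 1949).
    assert len(pad) == len(plaintext), "pad must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR applied again with the same pad.
message = b"meet at dawn"
pad = secrets.token_bytes(len(message))  # one-time, truly random key
ciphertext = otp_encrypt(message, pad)
assert otp_encrypt(ciphertext, pad) == message
```

The whole "algorithm" is one line of XOR; the hard part of real cryptography is key management, which is exactly the part back doors would deliberately weaken.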
> embedding those checks/balances/required transparency in the surveillance processes in such a way that they cannot be circumvented by those in power.
The processes can't be transparent because, as I said, that would reveal sources and methods that should be concealed from adversaries. An application for a FISA warrant can't wait for the years it would take to allow a FOIA request to be fulfilled in the interest of transparency.
That's only part of what I mean about the goal of demanding expanded oversight. Maybe those timeframes are too long, but the point is that those time frames sometimes serve a useful purpose in slowing things down, for the safety of the parties involved or for other reasons. A goal should be to find a healthy "medium" where a "Surveillance FOIA 2.0" still allows transparency, oversight, and review without hobbling the process. FOIA was just one example of an existing transparency tool to model from; it's not the only tool, just the first that came to mind. You would hopefully expand to a larger suite of transparency/sousveillance ("watch the watchers") tools.
I'm also not claiming that we shouldn't fight surveillance attempts, simply that where surveillance seems inevitable, a foregone conclusion, or tough to fight, we also need to devote resources to fighting for increased sousveillance and transparency, because power will always abuse surveillance.
To me, breaking perfect encryption by putting backdoors in computer algorithms is precisely the kind of place where we should not think that surveillance is inevitable/a foregone conclusion, but should draw a line in the sand and say that no, we're not going to accept this, law enforcement simply needs to up its game and figure out how to operate in this new environment where anyone who wants to can use perfect encryption.
If X is the amount of utility coming out of surveillance, and you cannot know X, then you cannot argue that X = 0 or that X > Y (for any Y you want to pick, like downsides of surveillance) or that X < Y either.
Essentially it becomes impossible to rationally debate the issue on the basis of whether it is a net gain.
Which means you need to fall back on other forms of reasoning. A reasonable position is that freedoms shouldn't be sacrificed for something whose utility cannot be demonstrated. But that's an argument about what sorts of justifications are required for laws, not about how much utility the law would have.
The argument I make is that it is more cost-effective to develop a society where one does not need to commit crimes to get by in the first place. Law enforcement is reactive and can only punish crimes already committed. While we shouldn't get rid of law enforcement, because crime will always exist, let's look to societies that have low crime (and societies that have high crime) and see what we can learn from them (and improve upon). Such policy is far more advantageous for citizens.
Your idea seems predicated on people in general being benevolent towards others. That's not going to work; there's a significant, motivated cadre who want to do terrible things provided they 'win'. You don't enlist Cambridge Analytica when you think you're right; you do that sort of thing when you don't care about being right/moral/legal but only about subjugating others.
Be careful about how you frame that. While this is true of some people who engage in activities like this, there is also the "ends justify the means" group. The latter does believe what they are doing is right and moral, and that being right and moral justifies behavior that is illegal. It's easy to be cynical and assume that the latter group is just the former group deluding themselves, but there are people who genuinely think that way. Addressing them requires a different approach than addressing those who just want power and control by any means.
I think this is why we can see people gladly vote for those that they very much disagree with. I think this is why attacking someone's tribal leader makes them double down and strengthens their convictions rather than changing belief. I think the question is how to get people to realize that you have to fight fair to get others to fight you back with fairness.
> And yes, it is true that these totalitarian methods are efficient in fighting street-level crime. However, for society as a whole, such methods enable a terror of the state that is a crime against humanity itself
However I think it’s a very serious fallacy to split surveillance into a ‘useful’ component and ‘side-effects’.
They aren’t side-effects. They are the effects.
Reduced crime may be a consequence of a surveillance society.
In such a society you may discover that discussing crime statistics in a negative light would reflect badly on the party bosses and must be done with caution.
I know this is Hacker News, but not every argument requires infinite nuance, we don't need to sit down and examine the pros & cons of torture or any other clear and obvious abuse of government power. We don't need to dignify the position of "read all citizens private correspondence" with a cost/benefit analysis. This practice provides legitimacy to clearly unconscionable actions. It is permissible, even strategically valuable, to have certain positions that we are absolutist on, policies that aren't tolerated under any conditions.
Same with banning secure private communication.
Studies show torture is simply not effective. Similar to how surveilling all communication is not effective.
Once they have this ability, it will be much harder to make them give it up.
>Once they have this ability, it will be much harder to make them give it up.
There's an argument to be made that this is already happening and, in fact, has been happening for decades.
I'm of course referring to the "War on Drugs."
There's quite a bit of analysis in the literature showing that restrictions on mind-altering substances were explicitly introduced to disadvantage particular populations.
What's more, despite popular perception, the "crime" rate is at its lowest levels in more than 50 years, yet we continue to fund law "enforcement" at levels even higher than when we were at the peak of the "crime" rate during that time.
So. Since crime rates have plummeted, yet we're spending more than ever, it's likely your paraphrase is already the case right now.
More's the pity.
(But the detective who put together the residential warrant "bundled" it with a bunch of non-residential warrants, nearly burying/hiding it, when he took it to the judge to be signed. And now the SWAT team says the residential warrant execution plan was buried in that same swamp of non-residential warrant executions. It's hard to keep from wondering, as all these details come out, whether that was malice or incompetence. Was it personal? Or was it dumb luck? I can't even tell which is worse at this point, because either seems to show a lack of responsibility, and both are worsened by what will likely continue to be a lack of consequences or atonement.)
Notable examples are RICO Laws, FISA courts and the PATRIOT act. So. Yes.
Or, just as likely, "once they can catch the people they don't like, they won't need to bother catching the people that you think are criminals".
Using the invasion of Iraq as an example: there have been many years when public opinion was negative, or at least lukewarm, toward the invasion, though not violently so. Casually reviewing the polling history, this can be observed as early as 2004. But I think one can safely say that the war hasn't been at the forefront of most people's minds since the very beginning. Then again, there has been very little mention of the financial cost of the war in the media, if any. And why would there be, when the media also earns a lot of money on these "safe" wars?
The video “Troops Versus Building --- an Iraq War tale” by soldier grunt Blacktail should give you a pretty hands-on idea of the financial cost of the war, however. 
Troops Versus Building --- an Iraq War tale, Blacktail, 24 Nov 2009, https://youtu.be/2N-1E2F9pmc
The root of 99% of crime in the US is poverty. Not private communication.
Trying to solve poverty by spying on everyone’s data is like trying to cure cancer with Tylenol. Even if you temporarily prevent a symptom from occurring, you're still dying of cancer.
So much of political thought in the US is focused on the futile efforts of treating symptoms, and not curing underlying causes.
And the worst part is, if you look at statistics in the rest of the developed world, poverty in fact has a cure! Like in most things, the US is the head-in-the-sand stubborn outlier here.
Politicians in general have shown they don't have the moral probity to be trusted to direct a democracy.
We need a sort of reverse-Stasi. Everything a senior politician does should be reviewed and only closed if it is provably personal and without public interest.
Maybe our politicians need to wear bodycams.
You give too much credit to police states.
What happens in reality is that criminals with connections and a minimum of self-restraint are folded into the "dark side" of the state, while their rivals are cracked down on hard. In this way, a number of low-impact, high-revenue illegal activities are tolerated (in exchange for bribes), crime syndicates are expected to police themselves and not break whatever taboos were imposed from above, and this "dark side" of the government then puts a lid on the deviant side of society, diverting its energies into activities that do not challenge the status quo.
Does it make for a safer place to live for the common citizen? Maybe. While it may be less likely that you will be injured in an armed robbery, you will also be more likely to get your money swindled in one scheme or another... and you will have less chance of redress when that happens.
I'm not really sure what the EFF is unhappy with about this act, since their complaints don't seem to be reflected in the text.
From the act:
CYBERSECURITY PROTECTIONS DO NOT GIVE RISE TO LIABILITY.—Notwithstanding paragraph (6), a provider of an interactive computer service shall not be deemed to be in violation of section 2252 or 2252A of title 18, United States Code, for the purposes of subparagraph (A) of such paragraph (6), and shall not otherwise be subject to any charge in a criminal prosecution under State law under subparagraph (B) of such paragraph (6), or any claim in a civil action under State law under subparagraph (C) of such paragraph (6), because the provider—
“(A) utilizes full end-to-end encrypted messaging services, device encryption, or other encryption services;
“(B) does not possess the information necessary to decrypt a communication; or
“(C) fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.”.
> Sen. Leahy’s amendment prohibits holding companies liable because they use “end-to-end encryption, device encryption, or other encryption services.” But the bill still encourages state lawmakers to look for loopholes to undermine end-to-end encryption, such as demanding that messages be scanned on a local device, before they get encrypted and sent along to their recipient.
The original EARN IT Act was bad. But that bad stuff has been largely ripped out, and real protections for privacy have been added. It's not the same as it was; look up the text and compare what's been struck through with what is left.
I think in the current form it's a definite win for privacy and common sense.
- HTTPS is more secure and private than HTTP
- Signal is more secure and private than SMS/Skype/Messenger
- Tor/Browser is more secure and private than Cloud/Chrome
- FileVault, BitLocker and LUKS are more secure and private than RAW unencrypted disk data
- 2FA and Hashing is better than raw Passwords
- List goes on and on.
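On the hashing point in the list above, here is a hedged sketch of salted password hashing using only Python's standard library. The iteration count and salt size are illustrative assumptions; real deployments should use a vetted library with tuned work factors.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed rainbow tables.
    salt = secrets.token_bytes(16)
    # PBKDF2 makes each guess expensive; 200k iterations is illustrative.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The point of the list stands: a server that stores salted, stretched hashes leaks far less in a breach than one storing raw passwords.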
All you have to do is: get out of your desperate situation where "everything's controlled", because it's not true, at least not everything. We don't have the best security and privacy but we have some of it AND we need to fight to keep and expand it.
Now go on and tell me how smart you are and how you see things differently, and generalize stuff again in replies. Or you can just shut up, stand up, and contribute to improving it.
1) Yes, you can build better encryption and privacy preserving technology. That is a technology and adoption problem.
2) There is only so much you can do once the law says you can't encrypt certain kinds of things, and that if you do, the state will be after you. That is a social problem. Our elected leaders aren't serving us.
Not mutually exclusive. We have to do both. Build better tech, educate others and esp our politicians who can change the fabric of society.
I hate this sooo much! Having children really makes some people entirely unable to think. I've heard people entirely willing to outlaw any form of encryption to "protect" their children from pedophiles, despite the likelihood of getting their identity stolen, going bankrupt, and being unable to feed their children being significantly higher than a pedophile targeting their child even now. That likelihood would increase by several orders of magnitude if we suddenly weren't allowed to encrypt things like e-banking and email.
In the real world, many states freely admit that they will fight against secure, private messaging between citizens (say, because law enforcement needs a backdoor to solve crimes). And while governments can, and do, make laws to that effect, improvement will be legislated away beyond a certain point. This also produces a chilling effect on engineering: why work on a technology that will likely be outlawed if successful?
In most cases, when the government is making laws to criminalize X, trying to overpower it with better engineering just does not work. My 2c.
It was a generation of pioneers who weren't quite so timid about working on new technology and fighting absurd, archaic laws that built the foundations of modern consumer/commercial encryption. The U.S. government especially tried very hard to chill these efforts, and failed (so far at least).
What I don’t understand is why you seem to be suggesting that as an alternative to opposing legislation which would prohibit such work.
However, the second part (about the fait accompli) is a very important point.
You’d have to achieve widespread usage among the general population for this to be effective.
Treading shaky legal ground is not the same as circumvention.
This time around, if encryption is banned, they will do more than just hound Phil Zimmermann for years on end.
They’ll come after the end users, and ‘circumvention’ won’t help.
We today probably have the best privacy tools in all of recorded history. Modern encryption means that anyone on the planet can send a message to anyone else on the planet without fear of government decryption in transit (asymmetric public keys, Tor, PGP, pick your tools). Using freely available tools I can encrypt a file on a USB drive in a way that even an NSA data center running for a billion years wouldn't decrypt. Those sorts of things were not possible a hundred years ago. They weren't really possible only a generation ago.
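A rough back-of-the-envelope calculation behind the "billion years" claim. The adversary's throughput here is an assumption, chosen to be deliberately generous:

```python
# Assume an adversary testing 10**18 keys per second (far beyond any
# published capability) against a 256-bit symmetric key by brute force.
keys_per_second = 10**18
keyspace = 2**256
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace // (keys_per_second * seconds_per_year)
# Even at that rate, exhausting the keyspace takes roughly 3.7e51 years,
# dwarfing the age of the universe (~1.4e10 years).
print(f"{years_to_exhaust:.2e}")
```

On average a key falls after searching half the space, but halving a number of that magnitude changes nothing about the conclusion.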
The VPNs people will actually use are a poor trade-off that will leave most people with a false sense of security at best, and probably with significantly fewer rights over what happens to their data regardless.
Don’t disagree with you on the rest of the above.
I don't have the technical skills to improve it. Someone I inform might.
None of your list is better than nothing, if the authoritarians want your data. Except, maybe, Tor, and only if people contribute to running exit nodes.
Unless it's end-to-end, with only you knowing and controlling your keys, you are already doomed. In other words, you cannot trust any service with your keys. That includes HTTPS and Signal.
Because our priority is kids! Honestly! Seriously I mean it!
Not that that would be bad, it's just that there are norms for who-does-what in gov't, and overlap of responsibilities is usually bad.
This kind of rhetoric is so boring at this point. Trying to destroy a proposal via scope creep is intellectually dishonest and does little more than weaken your own side's position. If I were a senator on the fence about this bill and this was the best argument you could muster against it, it would be an easy yea.
Nobody anywhere is challenging the laws against child abuse. What they are challenging are laws that attempt to decrease it, or catch people doing it, through means that don't actually work and that harm other important things.
To use your example, it's like writing a law "against murder" that's written in a way that actually doesn't make murder illegal and also bans seat belts.
In fact, this "scope creep" you mention is exactly how laws work. First you make murder illegal (done), then you start looking at the common causes for murder and find ways to fix those.
I like the idea of federal legislature ceding power to state legislatures.
Additionally, it looks like encryption is offered more protections in this bill. Considering that federal laws preempt state laws, especially with regard to telecommunications, I do not see what the risks are with passing this (regarding encryption).
The bill amends Section 230(e) of the Communications Act of 1934.
> CYBERSECURITY PROTECTIONS DO NOT GIVE RISE TO LIABILITY.—Notwithstanding paragraph (6), a provider of an interactive computer service shall not be deemed to be in violation of section 2252 or 2252A of title 18, United States Code, for the purposes of subparagraph (A) of such paragraph (6), and shall not otherwise be subject to any charge in a criminal prosecution under State law under subparagraph (B) of such paragraph (6), or any claim in a civil action under State law under subparagraph (C) of such paragraph (6), because the provider—
Snowden is great, but do your own research.
Edit: I usually trust Snowden as well. I could be missing something in this bill. EFF did not provide specifics. Hopefully someone here can.
> Just a few months ago, Senator Lindsey Graham (R–SC) delivered an ominous threat to Apple, Facebook, and any other tech company that might refuse to kill encryption programs that prevent malicious hackers, law enforcement officers, and others from accessing our private communications systems: "You're going to find a way to do this or we're going to do it for you."
> Graham has authored the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019 — or EARN IT Act [...]
I looked and could not find this. That article did not offer any specifics either.
Scroll to summary, specifically "Section 230 immunity for CSAM can be earned via 1 of 2 “safe harbors”."
Lindsey Graham has repeatedly sought to weaken encryption and mandate backdoors and key escrow. In June this year, following the EARN IT Act, he also introduced the Lawful Access to Encrypted Data Act (LAED), which would mandate backdoors:
LAED is extreme and has little support. It is widely believed that the LAED was not intended to pass but is meant to help pass the EARN IT Act by making it seem like a more moderate and reasonable piece of legislation.
The EARN IT Act is really a ploy by Lindsey Graham and others to bypass Congress on an issue they cannot otherwise get passed, and to allow a small group of people who are not even security experts to develop regulations and mandates (which will probably be against encryption) under the guise of fighting child porn.
"Unpaid" here just means paid by other means: exchanging political favors, getting hired by the benefiting companies, etc.
Of course the same goes for paid commissions...
I've never heard the commerce clause explained that way. Very cool.
Maybe it has something to do with "Notwithstanding paragraph (6)"
“(6) NO EFFECT ON CHILD SEXUAL EXPLOITATION LAW.—Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit—
“(A) any claim in a civil action brought against a provider of an interactive computer service under section 2255 of title 18, United States Code, if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A of that title;
“(B) any charge in a criminal prosecution brought against a provider of an interactive computer service under State law regarding the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material, as defined in section 2256(8) of title 18, United States Code; or
“(C) any claim in a civil action brought against a provider of an interactive computer service under State law regarding the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material, as defined in section 2256(8) of title 18, United States Code.
> "(C) fails to take an action that would otherwise undermine the ability of the provider to offer full end-to-end encrypted messaging services, device encryption, or other encryption services.”
My guess is no, since even with a key escrow scheme the messages can still be encrypted end to end. It's just that there is another party that may be able to decrypt them later.
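A toy sketch of what key escrow means in practice. XOR stands in for a real cipher here (an assumption purely for illustration); the point is only that the session key gets additionally wrapped for an escrow party, who can decrypt at any later time:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; not for actual use.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"hello"
session_key = secrets.token_bytes(len(message))
ciphertext = xor(message, session_key)      # the "end-to-end" encrypted message

escrow_key = secrets.token_bytes(len(session_key))
escrowed = xor(session_key, escrow_key)     # session key wrapped for the escrow authority

# The escrow party (or anyone who obtains escrow_key) can recover the
# session key later and read the message; that is the whole concern.
recovered_key = xor(escrowed, escrow_key)
assert xor(ciphertext, recovered_key) == message
```

The message is still "encrypted end to end" between the two users, but a third key exists that unlocks every conversation, which is why escrow is usually treated as a backdoor by another name.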
I don't, because I'm sure most state legislatures are even less informed on the importance of online encryption than Congress is. Doubly so if you live in a red state.
They were easily forced to comply with wiretapping demands of the government.
The moment central platforms controlled (most of) our communication and were able to provide security to users through encryption, legislators realized both the danger and the solution. They would just need to compel these central hubs with laws, as they did back in the day. Legislation was, and is, needed to break users' encryption and give government the access to all communication that it lost with modern forms of communication and encryption.
So on one hand, it is "just" a return to a previous state of affairs. On the other hand, given for example today's methods of automatically listening in and transcribing voice to text, this would be a way more massive intrusion and control mechanism.
This is a needlessly negative outlook. Click the link, support the EFF. Educate your friends and family.
Defeatism is not a compelling philosophy.
And I see people suggest that we aren't completely screwed because we got 10,000 people to walk down a street one time. It's an insane position, and we need dramatic measures that the human race in its current form is unable to enact.
* Climate change is going to happen because people like cars and supermarkets, and nobody is asking anyone to _actually_ sacrifice anything. Much of the climate change movement is "someone else should do something about it," and those who cut themselves off by going "off-grid" just make themselves quieter and are replaced by twenty more people with growing carbon footprints.
* Online security is fucked because the vast majority of people can't handle more than a modicum of detail. It's on the technical class (i.e. the 5%-15%) to protect it by hook or by crook. It will lose any democratic battle to preserve it because people aren't interested enough to care and it's too complex.
My suggestion is that defeatism posits a better question to answer: "What do we do once we've lost?".
The answer to that question yields something useful instead of the suspension of disbelief that the human race will suddenly turn over a new leaf.
That's not to say it's not worth trying, but we should assume failure; otherwise you end up like the UK's Brexit negotiators, who were "all-in" on the "oh, they'll buckle as the time runs out" strategy and had no plan beyond "so what now?" if that didn't pan out.
The basic strategy right now seems to rest on the belief that humankind will figure out it's slowly destroying its habitat and change accordingly. I don't think we've even all managed to figure out the extent of the destruction, let alone started to change accordingly.
What habits have we changed since the 1990s to cut global carbon emissions? I'm struggling to glue a couple of things together. If I'm not mistaken, global air travel has _soared_ within that time window, and everyone is just sitting on their hands under the mistaken belief that renewables will bail us out while we all still drive cars to the supermarket.
It's hard to make progress while denying the actual nature of the situation.
Not saying that's necessarily true in either of these situations, just that it's not a waste of energy to spread depressing truths.
It certainly is, if those "truths" are not absolutes. Find a way to redirect your energies, and others', toward something positive and constructive. Spreading doom and gloom, full stop? Always a waste.
However, telling a sad or difficult truth and stopping there is still a big improvement on pretending there was no issue in the first place.
Saying negative things is just negative, full stop, as already stated. You can make this positive by suggesting a solution or asking someone qualified to suggest one and then helping them achieve it.
I dunno, I'm not sure that's an absolute. Example: your boss says the budget has been slashed and the department head will be downsizing the whole department by 50% in the coming months. This is very obviously a negative thing, but is your boss doing you a disservice by giving you a heads-up? "Oh, but he should advise me to work diligently but also prepare my resume and look into other options." Of course he should, and it would be really nice if he could also find out for sure whether your head is on the chopping block. But let me ask you this: even if your boss doesn't offer that very obvious advice or have any insight into the department head's persona-non-grata list, is knowing about this negative truth a net negative for any employee it concerns? Of course not.
Or even more relevant to HN: a very outgoing individual who loves the face-to-face environment of the office has just been told that the CEO has decided everyone will be 100% remote starting next week until 2022. Is being told this negative truth a net-negative for the social butterfly employee who now at least has a brief chance to wrap their head around the idea and mentally prepare for the change instead of being blindsided?
Sometimes simply raising awareness that a negative thing is happening isn't a bad thing.
This is very pessimistic. People can still understand the dangers (and are coming to understand them) and switch to decentralized platforms, which, by the way, are actively being developed: https://joinmastodon.org.
Facebook and Google actually do a relatively good job of protecting our data. They have to comply with specific demands from government, which some people don't like, but otherwise it's pragmatically safe.
'A million servers' here and there, without process, oversight, transparency, or 'a lot to lose', might be a whole lot worse for most people.
'Many leaks' at small companies may not have the public and regulatory impact as a 'single big leak' at G or FB.
For 'the tech literate' who know how to manage themselves, it may open up avenues of greater security, but for the net-plebes, not so much.
Think about the freedom that comes with ample food plus health and lifestyle choices. Some people are incredibly more fit and healthy than anyone else in history, but most of us are somewhat more sedentary: we eat too much and don't exercise enough.
If another user blocks you, that's on them, but you have no right to push your messages into another user's home system.
If all else fails, you can host your own. It's free speech as intended, and as someone who hosts a Mastodon instance, I can say it's a system that works pretty well; there is no noticeable hate speech in my federated timeline.
edit: I might have misunderstood the GP, but I'll let the comment stand. Banning Mastodon in the US is a non-concern since it's a French open-source project.
But even if someone really wanted to get Mastodon banned and created a server full of illegal material to give politicians the fuel they'd need, the law enforcement answer there would be to go after that host. The overall network would not be liable because it wouldn't be hosting/distributing the material.
But I think GP might have meant banned at a government level?
Already happened in some countries.
BTW, someone wrote long ago here on HN that the 2nd Amendment, which covers weapons, should nowadays cover cryptography instead. Maybe:
> A well regulated Militia, being necessary to the security of a free State, the right of the people to [use end-to-end encryption], shall not be infringed.
Companies that are willing to play dirty are the ones that get investment and also the ones that retain users. They have an inherent advantage over any platform that tries to be moral -- or, put another way, platforms that try to be moral have an inherent disadvantage. As long as privacy and a respectful user experience sit at the bottom of most users' priorities, the dirty players are the ones who will be able to build momentum, and I can't imagine what could happen to change that.
Even the fact that this bill is proposed proves that government lost some of the control that it had over private conversations.
I installed Element, but it required a username/password pair. I understand that it's a bit more secure than using phone number + email for first authentication, but it makes discovery of friends too hard. It trades too much UX for security, just like PGP.
Perhaps the creators of the app could partner with some big providers to allow the app to try creating your account on one of these providers at random (and keep trying different providers if your intended name is already taken on one that it tries).
I also agree with you about discovery of friends being hard unless users provide their phone numbers to a central server, so perhaps there should be an option for that when creating your account. This central database could be run by an independent, audited, third-party service. I'm not sure who could be trusted in that role (perhaps Let's Encrypt?), or how much it would cost, but it's an interesting thought experiment.
That's similar to what email does: it trades UX for [nothing] (I would place decentralisation there if it were not for Gmail etc.)
U.S. users are leaving Facebook by the millions, Edison Research says (marketplace.org)
1293 points by rmason on Mar 6, 2019 | 616 comments
The question isn't "What do 90% of people do?" The question is whether those of us who care about privacy have the ability to be private.
So stop being another brick in the wall: stop using the platforms just because it is convenient. For example:
If we're talking photos:
One day IPv6 will get rid of the platforms.
Previously I had “business class” internet but that is just the name of the plan. Anywhere there was cable I could get business class plans.
The one time I tried to get a business-class connection from Time Warner they refused to offer any business-class service to a non-commercial address except for their "Home Office" plan, which was basically just their top residential plan with a better SLA.
Well, you already control who you share it with, as you're the one initiating connections to sites like Facebook. It just happens people give a lot away to browse sites these days (admittedly exactly what they're giving away remains quite opaque).
Unfortunately even if we did move to one-platform-one-person, the question of data control remains as murky as ever. Suppose you are hosting a party so you send your street address to your friends so they know where to show up. Then they play a fun quiz game that tells them their Harry Potter patronus based on the street address of their friends (that means you). Suddenly some anonymous quiz maker (let's call them Oxford Synthetica) has access to your street address and at least one of your friends' info through no direct fault of your own.
It really is a case that if you give up then it will be over. Just make it a part of your life. I'm not asking you to rally every week. Just don't forget about it and positive change will come.
Perhaps - and most fall into this category.
However, you do have the ability to form a (simple, cheap) SCorp/LLC in the US jurisdiction of your choice and provision your own mail/dns/vpn/etc. by that corporate entity.
So now a corporate entity is the provider, and you are the customer, and notices/subpoenas/takedowns/requests will be seen by you and you will take action on them.
At the very least, you can self-provide your own VPN this way if you don't want to run your own mail services.
And if it gets to the point where it scares you that they'd lock you up for such behavior, then you need better government, not better privacy.
You are mistaken counselor (c) The Descendants. These three easy things improve your privacy by 10x:
- Incognito / private browsing by default. This clears cookies used by trackers and platforms. Staying logged in is bad: log in, do your thing, and close the tab. Don't use Chrome; it has its own ID.
- VPN. Hides your browsing history from your ISP and your IP address from trackers and platforms. On your smartphone too, and disable Location and Background App Refresh for most apps.
- Adblocker (uBlock Origin / AdGuard for Safari, iOS too). Prevents trackers and malware from executing.
I know where I'm watched and how not to be watched; I know how to very easily hide the odd activity. The average citizen no longer gets that for free, and it really sucks for them, but remember that this has always been about trying to improve the lot of the average, not us. We're fine because we know and can act.
So it's elliptic-curve time for messengers, but everyone who "knows" will get some European or underground US piece of software on their phone via an unofficial app store.
Freedom only works with some responsibility, but it seems to me that state level actors just try to keep up with the ad and tech industry in collecting data.
The EFF tries to blame it on Barr, but I think the wish for control is bipartisan, especially since a generation is in charge that doesn't really understand the problems of this data collection. Not that their younger compatriots seem more promising in assessing actual problems.
There might be a point where using an Asian or African social network would be preferable, but that is not something on the mind of common social media users. Sure, they'll do it if the product has appeal, like TikTok. On bad days I wish users of social media (the self-presenting kind) would be banned from all other sites.
Feinstein is always on the wrong side of copyright/patent/encryption bills. Just look up her voting record. She co-sponsored PIPA. See also https://www.wired.com/2016/04/senates-draft-encryption-bill-.... She co-sponsored the Sonny Bono Copyright Term Extension Act, too.
She's hopeless on this.
A lot of big companies have old and crusty leaders / owners, who do not understand the terms of this problem.
I wouldn't be surprised to see this bill pass and then get nerfed in a few years, once big corporations actually experience the operational cost and lobby to have it rolled back / made optional for them.
What we have to come to as a society is a willingness to accept that sometimes [very big bad] will happen, but that a lack of backdoors is more important.
Yes, there are insecurities and holes in the above, depending on the methods of implementation, but it's one level of potentially many.
And whilst it's good to know that there are technical workarounds such as this, the real work, the real progress for society, is to make these technical workarounds unnecessary by, as the EFF says, contacting your representatives and letting them know what their constituency thinks. Politics, sickeningly, is the only avenue for worthwhile change.
It seems that the world has somehow forgotten the lesson. People can only see the centralized systems and don't realize that the problem they are trying to solve is not solvable in general. Encryption is unstoppable.
Notarize your apps, buddy, and sign into your app account... ;)
You can make as many PGP identities as you want and associate them with any sort of identity you desire, so there might be a political point available with respect to anonymity as well in some situations.
If this passes you can be sure anything digital is no longer safe from alphabet agencies. Sadly the younger generation has no concern for privacy and the older generation likely has no clue their data is being siphoned. Advocates are too much of a minority to make a difference.
Yes, I'm aware of some of the issues surrounding IP copying. I won't call it theft because the word is contentious and there are those in this forum who don't think you can steal ideas.
I'm also aware of the US' own history of gathering IP to bootstrap its industrial efforts. I guess that "but you did it first" isn't a good defence in law (not really sure of that, after all IANAL) but it makes the US case weaker in my eyes.
Do you think that's related to China allowing zero domestic tech competition, but demanding that their companies be allowed to compete in the rest of the world?
The main issue with something like this though isn’t really about the rest of the world catching up to the US technologically. It’s a question of whether service providers would choose not operating in the US over following their rules. But given the size of the US market, you’d find a lot of service providers choosing to follow the rules.
Which is incidentally the same way EU nations enforce their laws on companies like Google. And how countries like France, Germany and UK would enforce the anti-encryption laws they’ve been trying for years to get on the books.
I wonder if anyone in the last decade has _ever_ walked into their rep's office, talked to them, and changed their vote on a key issue. I really doubt it.
The age of true democratic participation is over. Politicians have no need to care about the ordinary constituent to win elections.
They just need to win over the elite, and know how to deliver their scripts.
That can only be broken in one of two ways. Either people don't care enough or they grow so distrustful of it that they don't participate in it or keep it in check.
When things like that happen you get elected officials decided by less than 40% of the country, anti-maskers, and asymmetric warfare in the streets to combat "the man". It's untenable.
We should all have to log on with our identification through a blanket "know your customer" law like they use to combat money laundering in banks, and our country of origin at least should be displayed. Other than that... You want encryption? Fine, but if your ID links you to 4chan offshoots planning domestic terror attacks, I should be able to report your address to the authorities, and there should be social repercussions for bad behavior. And I need to know whether you're Russian or from the US, and that you're human, before I engage in US politics with you.
Music got worse since Napster, Sean Parker was the guy who got Zuckerberg funded, Google is evil, and John Perry Barlow is dead.
Others who know history much better than me are far better equipped to debate against your position. If you really want a challenge, I would invite you to reply to the top comment in this thread by Jon_Lowtek to help open a discussion on the deeper merits of your position. For me personally, your arguments aren't very persuasive.
I am aware of history enough to know that the Internet was designed by a bunch of people who talked on ham radio and wanted a secure place to sell drugs to each other without being caught by the fuzz. They hated Ma Bell. That's why it's designed the way it is. Then later it became the RIAA and the rest. I know all of that, and if anything it's hardened my position on this, not weakened it. We don't live in the 1960s or the 1990s anymore, and what's worse, everything people said at the time about how society would be if we went "full retard" into "fight the man" hipsterism and the subsequent deregulation of everything actually happened. Nobody laughs at Al Gore or Tipper these days, or Lars Ulrich for that matter.
Do you like musicians making a penny on a song, leaving no room for art, just lowest-common-denominator crap; the rise of the marxist/fascist camps that are hurling what was once a promising country getting over its growing pains straight into 1920s/30s Germany at lightning speed; the totally unfiltered and unmoderated filth that is 4chan; etc.? Or do you just ignore it because "some day blah blah blah a dictator might take control", even though we are the closest we have ever been to a dictatorship because of faceless social manipulation techniques where we have no clue who or where anyone we talk to is, whether they're paid, etc.?
Not everyone is an engineer out looking for how things work. It's the same error the founding fathers made with the Enlightenment: thinking everyone would become smarter if you just opened everything up. It's complete baloney; people will just use the power vacuum to seize it for themselves because they make the most noise. Nobody cared about Aaron Swartz except the hacker community. Snowden sold out to the Russians from the beginning. And you're sitting there, still in the year 2000, when all this stuff was new and the only ones who spoke were nerds and we all agreed on basically everything.
The point is that everything that this theory set out to solve, became worse. More nationalistic, more poor musicians with even bigger giant platforms controlling their art, more corrupt, more propagandized, etc. Not less. Everyone knows it. The other point is that instead of tearing stuff down and engaging in subversive activities for the same ends, maybe it's time to focus on making the systems you have better. Not turn it into some weird mix of The Matrix, Philip K. Dick, and William Gibson. They're just as bad of an instruction manual as 1984 was to be honest.
You know who benefits from VPNs, privacy, and security?
The proud boys. Russia. China. Steve Bannon. And their bots, that's who. They are a threat to civil society at present, and they're using that very privacy and security to organize to destroy the thing you're trying to "protect" by drinking Barlow's kool-aid.
- block access ala GFW, ensuring that most people will have difficulty accessing it or using it
- block access to any data you cannot decrypt or from an endpoint you cannot backdoor
- go after creators and ensure some kind of backdoor is inherent to the project
- shut down projects by exerting pressure on developers
- run some kind of propaganda campaign on the evils of using unsanctioned software (supporting terrorism etc.)
- do nothing, knowing that most users will avoid using anything that isn't one of the major web platforms
Depending on the sophistication of that solution, you could imagine some forms of tunnelling being effective against it (IP-over-X). Of course, due to the complexity, this workaround will only be used by a tiny fraction of users.
>- block access to any data you cannot decrypt or from an endpoint you cannot backdoor
steganography would be a solution to this: you can decrypt the cat pictures I'm exchanging with friends, but you may not be able to notice that those images have hidden content (which may itself also be encrypted)
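The principle can be illustrated with a toy least-significant-bit (LSB) scheme: hide each message bit in the low bit of a "pixel" byte, where a change of at most 1 per byte is visually imperceptible. This is only a sketch of the idea, not a real steganography tool (real tools work hard to resist statistical detection); the function names here are made up for illustration.

```python
# Toy LSB steganography sketch: hide message bits in the low bits of
# cover bytes. Illustrative only -- real stego must resist detection.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = []
    for byte in message:
        bits.extend((byte >> i) & 1 for i in range(8))  # LSB-first
    if len(bits) > len(out):
        raise ValueError("cover too small for message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    msg = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        msg.append(byte)
    return bytes(msg)

cover = bytearray(range(256))  # stand-in for image pixel data
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
# Each byte changes by at most 1, so the "picture" looks unchanged:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The hidden payload could itself be ciphertext, in which case even a successful extraction yields only noise to an observer without the key.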
>- do nothing, knowing that most users will avoid using anything that isn't one of the major web platforms
this seems to be a guaranteed-to-succeed solution. Probably much better than
>- run some kind of propaganda campaign on the evils of using unsanctioned software (supporting terrorism etc.)
since there is always a risk that this may backfire and encourage resistance in some groups
>shut down projects by exerting pressure on developers
The developer would probably just go to the press with that; no need to damage your own privacy-centric project. Sure, the NSA can deny they ever did it, in which case the developers can ignore the pressure, OR they openly admit it was them, and then the project can simply change countries.
>run some kind of propaganda campaign on the evils of using unsanctioned software (supporting terrorism etc.)
>do nothing, knowing that most users will avoid using anything that isn't one of the major web platforms
Those are probably the only viable solutions, but on the other hand a ban is pure marketing for the product, like banning Telegram in Russia.
Because I don't like pressure from secret services... do you?
OK with one exception...Xenia Onatopp
Plus if you really really want to know what's happening there, call the NSA. They'll send a few chaps to join the matrix dev community. Or they'll use other tools to access the machines in question. And you can do that to the 0.01% of people that actually switch after mainstream tools become insecure in a way you can't when there are billions of people using encryption by default.
Not sure they'd give me any information as a Swiss citizen :-)
>They'll send a few chaps to join the matrix dev community
That's probably more on the James Bond side of reality
>Or they'll use other tools to access the machines in question
Yes, that's probably what they will do, but that is targeted surveillance, not a backdoor or breaking encryption by law.
They're also involved in the Coreboot alternative BIOS project:
One of the key revelations from Snowden was that all that stuff that seemed far fetched or paranoid to us, wasn't just happening, it was routine and much more advanced stuff was also going on.
It's also much easier to target Matrix etc. when you don't have to target all of Facebook because you have a backdoor there. Reducing the NSA's workload by 99% makes the other 1% much easier, because they have 100 times the resources to spend on it.
You should also consider as a swiss citizen that US law is increasingly world law. If you sit in Switzerland and contribute to a project banned by US law, they may well have you extradited to spend the rest of your life in a US prison. They're doing that to Assange right now. They've done it to others as well. Locking up foreigners is quite in-style in the US at the moment... I haven't read the bill, so I don't know what the criminal sanctions are for failing to provide backdoors...
Best of luck
>You should also consider as a swiss citizen that US law is increasingly world law
That's bullshit; it was probably half true in the Cold War era, but certainly not today.
> They're doing that to Assange right now.
That's England; you know, those little lap dogs of yours.
In the Cold War it was true of the Western world, and the US's use of it was limited because the US wanted to look gentle and reasonable. Today it's true for most of the planet, and the US wants to look tough.
If you don't like the Assange case (I agree the Brits are lap dogs, but that case actually got kicked off in Sweden and was handled under EU law much more aggressively than in the UK), try the FIFA case. The US ordered Switzerland to arrest and extradite foreign nationals for trial, many of whom had never set foot in the US. Switzerland obeyed...
I don't like any of this. But it's time to abandon our innocence. If you're working on any meaningful project in encryption, the NSA is at least aware of you. The US is happy to use covert methods to undermine you if they think it's worth it (and the bar is low). If you're important enough (or a DA wants to expense some flights), they'll use overt methods. Proceed at your own risk, and don't assume that America banning encryption doesn't make it illegal for you just because you're a Swiss citizen in Switzerland.
No problem with that, from the article:
>With wire fraud, one needs a wire that originates in the US
Sometimes we even ask for it...that's normal international business:
>Proceed at your own risk and don't assume that America banning encryption doesn't make it illegal for you just because you're a Swiss citizen in Switzerland
Yeah, I'll stop here...
BTW from your first article about android:
>So, if it’s not looking to plant backdoors, what’s the NSA’s business with Android? Ironically, the agency has been working to make Android more secure.
>It is just as preposterous to think that the best way to gain access to any operating system is to publicly announce that you are contributing to the OS, and make the tainted code accessible to anyone with an interest in it.
So it was NOT Snowden, but the NSA itself.
Second Article about Coreboot:
>Myers published a paper about STM last year on how NSA’s STM implementation could work. All Coreboot code, including all the STM contributions from the NSA, are open source, so anyone could verify that there is no backdoor in there -- in theory.
Even if the project is open source and development is distributed, there is often a major entity behind it driving it on which the requirements can be enforced.
Again, I was talking about the open-source Android, not the Googlified closed version.
>for Matrix force the hosted Element.io instance to provide it
How? If they are not US citizens?
The parts that control making calls, connecting to cell towers, etc.
They are called "radio" (baseband) firmware.
This would allow law enforcement to track the spread of a known piece of content while avoiding breaking encryption. Perhaps it could be a compromise.
I'd say "how about having law enforcement do, you know, police work to catch the bad guys?"
Police can already get warrants for just about anything -- as long as they can convince a judge they have probable cause -- without too much of a hassle already.
Giving them keys to unlock everything is the wrong way to go about it.
Get enough evidence to convince a judge (not that hard) and you can get a warrant.
However, that doesn't mean anyone, even criminals, should be forced to make it easy for them.
Law enforcement obviously has way too much time on their hands, with the amount of lobbying they do to increase their ability to chip away at civil liberties and privacy.
Crazy thought: maybe they should use those resources to do real police work instead.
Remembering back to the PRISM disclosures, metadata alone is enough to build a surveillance apparatus. So I guess even without decryption of all objects, confirmation of the existence of known objects could still be enough to conduct mass surveillance or enable other kinds of abuse.
It is standard practice to make it impossible to detect identical plaintexts. What you are describing would count as a backdoor, so you might as well make it explicit and save everyone the brute forcing (rainbow tables?).
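To make "impossible to detect identical plaintexts" concrete, here is a toy sketch: encrypting with a fresh random nonce makes two ciphertexts of the same message look unrelated. The hash-based stream cipher below is purely illustrative, not a vetted construction; in practice you would use a real AEAD such as AES-GCM or ChaCha20-Poly1305.

```python
# Toy nonce-based stream cipher: same key + same plaintext still yields
# different ciphertexts, because the nonce is random every time.
# Illustrative only -- do NOT use this in place of a vetted AEAD.
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh randomness on every call
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

key = secrets.token_bytes(32)
c1 = encrypt(key, b"same message")
c2 = encrypt(key, b"same message")
assert c1 != c2                       # unlinkable ciphertexts
assert decrypt(key, c1) == b"same message"
```

A scheme that let an observer confirm "this ciphertext equals that one" would, as the comment says, be a deliberate hole in this property.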
Running Telegram over multiple VPNs?
Accessing Gmail over multiple VPNs so Google doesn't get to know where you are 'really' logging on from?
Making your own VPN network over a combination of AWS, G-Cloud, Azure, DO and Aliyun to 'hide' your actual location?
Sharing your VPN with friends and family to provide some 'noise' (although that may be worth little overall - maybe VPN through friends' and family's home connections as well).
- 1. An individual attempting to perform an MITM attack on you. The classic free wifi adversary you've probably heard about. There's little risk of this individual using the sites you visit against you so you only care that they can't manipulate your usage of said sites: Use HTTPS and you'll be fine.
- 2. Your ISP. You don't want them to see where any of your traffic is going because you don't trust them: Use a VPN. Shift the trust to either a VPN provider or a cloud-hosting provider by running your own VPN.
- 3. Your Government. Let's assume they can see all of the traffic within the country and you don't want them to associate your traffic with you: This is the step where it becomes challenging; you want to blend in, not just add more security steps. Ideally you want your traffic to leave the government's jurisdiction and, if needed, re-enter looking like normal traffic from other countries. Tor is a good option here: there's a reasonable amount of traffic on the Tor network to hide in, and your traffic is almost guaranteed to leave your country at some point. Alternatively, choose a VPN provider that resides legally outside of your country and choose a server that resides physically outside of your country. Both options will move your traffic outside of the jurisdiction of your government, so this should be sufficient within the confines of the current example.
- - What about a self-hosted vpn in a region outside my country? If you ever connect to a server inside your country the full path of your traffic will be able to be seen by your government.
- - What about multiple self-hosted VPNs outside my country? This is an improvement on the previous issue, but it's unlikely to prevent your traffic from being correlated to you on timing alone.
- 4. God's Eye. Your adversary can see all internet traffic everywhere on Earth: Good luck. Maybe use Tor over a popular VPN service to increase the difficulty of correlating your traffic to you? Hope the Nym mixnet becomes popular?
Some additional considerations:
- What if I don't trust a VPN provider? You probably want to hide your traffic in their traffic, so pick a VPN provider that requires no user info to sign up and lets you pay with cryptocurrency or cash. I know Mullvad fits this requirement; there are probably others as well. Or self-host a VPN and connect to the VPN service through it: now neither the VPN service nor the cloud provider has a full view of your traffic (WireGuard makes multihop VPNs easy). You could do the same by using 2 VPN providers.
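As a rough illustration of the multihop idea above, a two-hop WireGuard setup can be sketched as two interfaces, with the second tunnel routed inside the first via `AllowedIPs`. Every key, address, and endpoint below is a placeholder, not a real configuration:

```ini
; wg0.conf -- first hop (e.g. your own VPS); only hop B is routed here
[Interface]
PrivateKey = <client-key-1>
Address = 10.0.1.2/32

[Peer]
PublicKey = <hop-a-public-key>
Endpoint = hop-a.example.net:51820
AllowedIPs = 203.0.113.50/32   ; route only hop B's address via hop A

; wg1.conf -- second hop (e.g. a commercial provider), reached through wg0
[Interface]
PrivateKey = <client-key-2>
Address = 10.0.2.2/32

[Peer]
PublicKey = <hop-b-public-key>
Endpoint = 203.0.113.50:51820  ; this endpoint is inside the wg0 tunnel
AllowedIPs = 0.0.0.0/0, ::/0   ; all traffic exits via hop B
```

With this split, hop A sees your IP but only encrypted traffic to hop B, while hop B sees your destinations but only hop A's IP.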
Again, go ahead, give me all the downvotes.
I'd say that the UK, Canada, Germany, France and the rest of NATO are the US' biggest allies.
Followed by countries like Japan, South Korea, Australia and even Mexico that are much more important as allies to the US than either of those places.
Saudi Arabia and Israel are just incidental players for the US. Saudi Arabia for the oil (The Saudi state oil company is now called Saudi Aramco, but used to be called Arabian-American Oil Company), the military bases, and the arms sales. Israel mostly for the arms sales and to placate many (but certainly not all) Jewish folk in the US.
IIUC, for this particular bill, the next step in the Senate is to debate it on the floor and then vote on it.
The companion House bill has been introduced, but has not gone through committee or been voted upon.
The Senate would need to pass the bill. Then, separately, the House would need to pass their bill.
Then, the House and Senate would reconcile their two bills, and the resulting compromise bill would once again need to be voted upon and passed by each chamber.
Assuming that happens, the President would need to sign it before it becomes law.
Most Americans (I hope) could tell you that. The links below detail the various actions taken by each chamber, as well as the sponsors and text of the bills.
A good question.
IIUC, none at all.
Once the full House and Senate have passed their bills, a conference committee will be convened to create a single bill to be voted upon by both the House and the Senate.
One would expect that members of the relevant Senate and House committees would be part of the conference committee, but that's not necessary, and leadership generally chooses the members of a conference committee.
Once the conference report is complete, the bill then goes directly to the floor of each house for debate and a vote.
The specific committee (in this case the Senate Judiciary Committee) that approved each house's bill is not involved.
Of course the members of that committee get to vote on the bill from the conference committee just like every other member of that house.
Also, can someone come to me with a subpoena and a gag order for hosting email for my friends?
You're assuming that every citizen is either ignorant or disdainful of their responsibilities as citizens.
That's sad. I've never tried to get out of jury duty, because I see that as part of my responsibility to my community.
That said, in the half-dozen or so times I've been called, I have yet to be selected for a jury.
But I'll do the same next time, because it's the right thing to do.
I'd add that if I were involved in a trial, I certainly wouldn't want someone who would prefer to shirk their responsibilities as a citizen on my jury, that's for sure.
If someone gets charged for possession of random bytes, send the prosecutor and judge a bunch of random bytes and see if they're still intent on moving forward with charges.
What stops me from encrypting things?
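To make the random-bytes point concrete: with a one-time pad, for example, the ciphertext is itself uniformly random bytes, so "encrypted things" and noise are literally indistinguishable without the key. A minimal stdlib sketch:

```python
# One-time pad sketch: the ciphertext is uniform random bytes, so it
# cannot be distinguished from the "random bytes" discussed above.
import secrets

msg = b"nothing to see here"
pad = secrets.token_bytes(len(msg))           # the key: pure randomness
ct = bytes(m ^ p for m, p in zip(msg, pad))   # also uniformly random
assert bytes(c ^ p for c, p in zip(ct, pad)) == msg  # pad recovers msg
```

Charging people for possessing "encrypted material" would therefore also criminalize possessing any file of random bytes.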
The Trump angle will pick up mass media attention and the 'orange man bad' crowd will activate to make sure this doesn't get far.
There may even be debates on encryption and people may actually talk about encryption, privacy and policy.
"The Deep State wants to read your Facebook messages!", etc.
It seems like a very straightforward bill that makes things much better by explicitly removing any liability from companies for either having end to end encryption or for not creating backdoors.
Here's my breakdown of the text of the bill:
Create a commission of 16 people, half appointed by each party. Can only make recommendations that 14 of the 16 approve of.
4 shall have current experience in investigating online child sexual exploitation crimes, of whom, 2 shall have such experience in a law enforcement capacity; and 2 shall have such experience in a prosecutorial capacity;
4 shall be survivors of online child sexual exploitation, or have current experience in providing services for victims of online child sexual exploitation in a non-governmental capacity;
2 shall have current experience in matters related to consumer protection, civil liberties, civil rights, or privacy; and
2 shall have current experience in computer science or software engineering related to matters of cryptography, data security, or artificial intelligence in a non-governmental capacity; and
4 shall be individuals who each currently work for an interactive computer service that is unrelated to each other interactive computer service represented under this subparagraph, representing diverse types of businesses and areas of professional expertise, of whom 2 shall have current experience in addressing online child sexual exploitation and promoting child safety at an interactive computer service with not less than 30,000,000 monthly users in the United States; and 2 shall have current experience in addressing online child sexual exploitation and promoting child safety at an interactive computer service with less than 10,000,000 monthly users in the United States.
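As a quick sanity check on the breakdown above: the seat allocations do sum to 16, and the 14-of-16 approval rule means any 3 members voting together can block a recommendation (for instance, the 2 civil-liberties members plus one other). The category labels in this snippet are my own shorthand for the bill's language:

```python
# Commission seats from the bill text, paraphrased as shorthand labels.
seats = {
    "law enforcement + prosecutors": 4,
    "survivors / victim services": 4,
    "consumer protection / civil liberties / privacy": 2,
    "cryptography / data security / AI": 2,
    "interactive computer services": 4,
}
assert sum(seats.values()) == 16      # the full 16-member commission

threshold = 14                        # votes needed to recommend
blocking_minority = 16 - threshold + 1
assert blocking_minority == 3         # any 3 members can block
```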
the Commission shall develop and submit to the Attorney General recommended best practices that providers of interactive computer services may choose to engage in to prevent, reduce, and respond to the online sexual exploitation of children, including the enticement, grooming, sex trafficking, and sexual abuse of children and the proliferation of online child sexual abuse material.
(A) preventing, identifying, disrupting, and reporting online child sexual exploitation;
(B) coordinating with non-profit organizations and other providers of interactive computer services to preserve, remove from view, and report online child sexual exploitation;
(C) retaining child sexual exploitation content and related user identification and location data;
(D) receiving and triaging reports of online child sexual exploitation by users of interactive computer services, including self-reporting;
(E) implementing a standard rating and categorization system to identify the type and severity of child sexual abuse material;
(F) training and supporting content moderators who review child sexual exploitation content for the purposes of preventing and disrupting online child sexual exploitation;
(G) preparing and issuing transparency reports, including disclosures in terms of service, relating to identifying, categorizing, and reporting online child sexual exploitation and efforts to prevent and disrupt online child sexual exploitation;
(H) coordinating with voluntary initiatives offered among and to providers of interactive computer services relating to identifying, categorizing, and reporting online child sexual exploitation;
(I) employing age rating and age gating systems to reduce online child sexual exploitation;
(J) offering parental control products that enable customers to limit the types of websites, social media platforms, and internet content that are accessible to children; and
(K) contractual and operational practices to ensure third parties, contractors, and affiliates comply with the best practices.
Amends Section 230(e) so that child sexual exploitation law gets its own entry on the list of things Section 230 does not apply to, joining, among other things, all federal criminal law, all IP law, sex trafficking, and privacy laws. The text to be inserted looks similar to the current sex-trafficking text?
Explicitly says that the entire list of exceptions to Section 230 does not create liability, either federal or state, IF you are end-to-end encrypting and cannot decrypt.
Not creating backdoors does not create liability either.
Goes through all federal laws and changes "Child Pornography" to "Child Sexual Abuse Material".
So overall I don't see the issue. Seems like a good law.
I guess the gotcha is in "preventing, identifying, disrupting, and reporting X", which seems impossible to do when communication is encrypted end to end.
There has to be a better trade-off here that minimizes the risks of 3rd party access, and gives law enforcement and the intelligence agencies the tools they need to do the best possible job.
If I had to choose, I’d rather my consumer endpoints be hardened but have vetted and protected exceptional access mechanisms on the encryption.
In practice, this bill is likely to lead big tech to cut corners, since they won’t be legally mandated to actually build increasingly responsible encryption-recovery mechanisms for LEO. This will let big tech say, “I told you so”, because they were simply doing the minimum that was legally required of them.
At best it might help catch some street-level crime, but any serious organized crime (which arguably is the bigger problem, because it's organized) will have the tools available anyway. Consider it this way: there's already a black market for security exploits, with exploits frequently selling for far more than they might otherwise be worth (there's a limited window of utility before the exploit is patched). How much do you think e2e encryption would sell for, & do you think there aren't going to be buyers & sellers for it? Especially since, unlike exploits, this is an infinitely distributable solution: I can sell to as many buyers as I want without risking my revenue stream.
On the technical side we've observed what happens with this stuff. We'd be one Snowden-style leak away from all websites instantly becoming vulnerable. Do you not think that might be valuable to adversaries of the USA?
True, you can’t stop math, but you can try to police it. You can regulate consumer access. Doing so means one less “gone dark” area, which makes LE's job easier.
To your point about low level criminals: Now that the cat is out of the bag, yes, surveillance worked way better when people didn’t know about it. Yes, more sophisticated criminals will try to employ their own encryption. If I were in LE or the IC, I’d still rather not deal with the oceans of data produced by essentially unbreakable encryption via big tech.
I'll address the point about the uncapped value of an encryption exploit below.
Yes, which is why it is imperative to continually improve and audit such systems, including maybe removing such single points of failure as you noted, both from an insider threat perspective as well as from exploit discovery processes.
It would be helpful to consider how to build recoverable encryption in a way that minimizes the risks of the existence of the exceptional access mechanism, from all angles: technical, social, etc
Can you join me on a journey to build this hypothetical world to figure out how this addresses the Snowden leak?
Let's imagine a world where every single server had a registered backdoor key. This key also isn't the key itself. No, we're smart. It's instead used to sign one-time use, timestamped keys that give you access. We assume all these servers are also somehow always running the latest version of the software to implement the backdoor to address any exploits that may have been discovered.
We control access to this carefully, so that you can only request a code, & this is validated by all kinds of bureaucratic controls that are never violated for expediency & where mistakes never happen. Also, the system handing out codes doesn't even have the keys itself. It has a temporary key that can't generate valid signatures past its expiration. To regenerate, we go into a fortified, air-gapped secure vault. This air-gapped system is used to generate a new key, burning it onto a CD-ROM. So your admin has to go into the vault on a monthly basis to generate the secret that keeps backdoor access working.
Now imagine that the admin who goes into the vault every month to burn that CD-ROM is Snowden. You've now stolen the root keys for every machine out there.
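To make the hypothetical concrete, here's a minimal sketch of the token scheme described above: an offline root key signs short-lived, per-server access tokens, and servers reject anything expired or forged. All names are made up for illustration; this is not any real exceptional-access system, and replay tracking for the "one-time" nonce is omitted.

```python
import hmac, hashlib, json, time, secrets

# Toy sketch of the hypothetical escrow scheme above. All names are
# invented for illustration; not a real LE access system.

ROOT_KEY = secrets.token_bytes(32)  # lives only in the air-gapped vault

def issue_access_token(server_id: str, ttl_seconds: int = 3600) -> dict:
    """Sign a timestamped token granting access to one server."""
    body = {
        "server_id": server_id,
        "nonce": secrets.token_hex(16),  # unique per token (replay
                                         # tracking omitted in sketch)
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(ROOT_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_access_token(token: dict, server_id: str) -> bool:
    """Servers accept only unexpired tokens with a valid signature."""
    payload = json.dumps(token["body"], sort_keys=True).encode()
    expected = hmac.new(ROOT_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False                      # forged or tampered
    if token["body"]["server_id"] != server_id:
        return False                      # issued for a different machine
    return token["body"]["expires"] > time.time()

token = issue_access_token("server-42")
assert verify_access_token(token, "server-42")
assert not verify_access_token(token, "server-99")
```

Note that HMAC is symmetric, so every verifying server would hold ROOT_KEY, which is exactly the single point of failure; a real design would use asymmetric signatures, but a leak of the signing key has the same consequence either way.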
Let's also remember a few things that are elided for this hypothetical world we've built.
1. I may have gotten some details wrong here, but this is really close to how OS updates are handled by Google & Apple. This is treated as one of the most secure ways to do software deployment at scale (we're not talking about one-off, carefully controlled & vetted backdoors, which are a wholly different problem).
2. Software deployment is hard. There's no world in which you will instantly deploy a security fix to your backdoor code. Some machines don't have good uptimes & others are mostly invisible to the internet. Mobile operating systems are different because Google & Apple dictate the HW design, and Google has struggled more here, pulling vendors along to do the right security things. Are you proposing we standardize on Apple hardware for everything?
3. If you have the ability to deploy code to any random machine, that deployment mechanism is a target in and of itself. Since every US machine has to implement it in this hypothetical world, this is an attractive exploit. It's easier to secure but now the value of compromising it has increased exponentially. We haven't heard of any exploits of this but given the value already (& exponentially more if we're talking about every single system in the US), we're looking at threat actors with ridiculously deep bank accounts & access to technical expertise.
4. Timestamps are hard. You're talking about every single machine in the world, and plenty of them are running the wrong time. So someone changing the clock breaks your ability to backdoor (unless you ignore timestamps, but then the keys you're generating are reusable, on that website at least).
5. Key rotation & management is insanely hard. You're talking about every machine in the US. Even every server. Mistakes will happen at this scale so your backdoor either won't work (best case) or you'll have unintended compromises (or likely both).
6. Complexity & security are diametrically opposed. The more complexity you add the less secure you are. Modern machines are already ridiculously complex.
7. Everyone outside of the US (including US companies that have servers abroad) will not implement the backdoors, but they may implement the backdoors that other nation-states force them to adopt. Sure, it's great if you're the US forcing your way to gain advantage over other countries, but how do you keep these systems segmented so that a backdoor from another country doesn't give access to the US? Moreover, let's say the US implements an impenetrable system. Do you think other countries will care to do the same? Does the US share our tech with them, at the risk of making it even easier to find flaws? And how do we manage distribution of such software when there's a flaw?
No amount of advice to "invent better math" solves the fact that this isn't a technical problem. No amount of "build things better" solves the fact that software engineering is hard & we have 0 examples, even in "big tech" which invests billions here annually, of building truly secure systems while actively trying to prevent any backdoor/exploit. Above all else, you're proposing a single point of failure for the entire US economy. It could be used to conduct industrial espionage at an even larger & easier scale than happens today, or to take down critical infrastructure in a time of conflict.
Is there something I missed in my analysis? What part can we "do better" on that doesn't result in exposing a significant vulnerability?
Yes, it’s a difficult problem with social implications, and not simply technical challenges, as you noted.
Yes, Snowden shouldn’t have so easily been able to steal so much data. Apparently the IC has installed numerous checks and balances to help prevent another such insider threat.
The reason the Attorney General and the DOJ want some sort of access to communications is not to undermine free speech and to take over the world, it is because they know that without legal provisions technology can make it impossible to access communication during legitimate police investigations.
How do we solve this problem?
We don't, we accept that encryption is part of the modern world and learn to live with it. Because there's nothing else you can do about it.
See encryption is just math, and you can't really outlaw or limit how math is used.
If we have bad actors who want to encrypt their communication, they absolutely can with or without this bill.
Even if WhatsApp/Telegram/whatever has to provide the US government with a backdoor to decrypt all messages, anyone can make their own communication platform and simply not give the government a backdoor. Implementing secure encryption isn't difficult, and it's very easy to research how to accomplish it.
Grab a few devs and they can create a simple encrypted messaging app in a few days.
You don't even need to distribute it through official channels. Android allows you to sideload apps from anywhere and you can jailbreak iPhones to install apps from anywhere. So our bad actors can create secure encrypted communication platforms and distribute them without anyone ever knowing about it.
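To illustrate how low the bar is, here's a toy end-to-end scheme in about 30 lines of standard-library Python: an HMAC-derived keystream for encryption plus an encrypt-then-MAC authentication tag, over a pre-shared key. This is a sketch to make the point, not production cryptography; real apps would use vetted primitives (libsodium, or Python's `cryptography` package).

```python
import hmac, hashlib, secrets

# Toy sketch: encrypt-then-MAC over a pre-shared key, stdlib only.
# Illustrative, NOT production crypto -- real apps should use vetted
# libraries (libsodium, the `cryptography` package, etc.).

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream of `length` bytes via HMAC-SHA256."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag               # nonce || ciphertext || tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(
            hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("tampered or wrong key")
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)             # shared out-of-band
blob = encrypt(key, b"meet at the usual place")
assert decrypt(key, blob) == b"meet at the usual place"
```

The point is not that this is good cryptography (it isn't; key exchange is waved away entirely), but that nothing here is beyond a couple of developers with an afternoon and a textbook.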
How will this bill prevent that? How will it prevent a few random developers from whipping up their own apps? How will it make it impossible for anyone, anywhere at any time to implement encryption into any app or platform?
A fundamental requirement of a free society is encrypted communication; always has been. I'm amazed, given how the police state has grown since 9/11, that there are 'poor police' arguments at all. Government always grows in scope. The secret FISA courts that were intended for terrorism only ended up hearing 50% domestic drug cases.
and then the users of that platform would simply stand out in ISP logs, making it actually easier to spot them. If this platform were a dedicated tool developed by/for a bad actor, then everyone working with/for that actor would be easily found.
Given that, it seems that steganography (combined with encryption) could be a solution with a "battle" between steganographic methods and algorithms to detect them
Encrypted data would still be flowing all over the place, if our bad actors use VPN's to hide their traffic then it would become impossible for ISP's to see what they're doing or using.
In addition, even if you can pinpoint who's using encrypted communications, unless you can prove they're actually engaged in some criminal practice, it won't do you much good. With EARN IT the responsibility is on the encryption providers, i.e. those two random devs who made the app. You can't tell what the users were talking about, since communication is encrypted, and you can't really prosecute any of the users for anything, unless using those apps becomes completely illegal or you can prove that the app is used only by criminals and no one else.
Now you can potentially go after the devs, assuming of course you can figure out who made the app, and assuming these people are in a place where US laws apply. The global nature of the Internet makes things very difficult. If a Swedish team develops an encrypted communication app and distributes it on their website, are they still required to comply with US laws? If they prevent US citizens from downloading the app with geoblocking but people get around it with VPNs, are they still required to comply with US laws?
you just transferred a problem from the ISP level to the VPN-operator level. While you could argue that using multiple VPNs from different countries would make this somewhat harder, the problem still exists. Especially if you consider metrics other than IP, for example specific packet sizes or timing patterns (instead of looking for users connecting to a given IP, the adversary would look for users sending 640-byte packets every 300 seconds).
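The fingerprinting idea in that parenthetical is easy to sketch. Given per-flow (timestamp, size) records, flag flows whose packets share a single size and arrive at near-constant intervals. A toy illustration, with made-up thresholds rather than anything from a real IDS:

```python
# Toy traffic-analysis sketch: flag flows whose packets share one size
# and arrive at near-constant intervals (e.g. 640 bytes every 300 s).
# Thresholds are illustrative, not from any real detection system.

def looks_periodic(packets, jitter=0.05):
    """packets: time-sorted list of (timestamp_seconds, size_bytes)."""
    if len(packets) < 3:
        return False                      # too few samples to judge
    if len({size for _, size in packets}) > 1:
        return False                      # mixed sizes: not a beacon
    gaps = [b[0] - a[0] for a, b in zip(packets, packets[1:])]
    mean = sum(gaps) / len(gaps)
    # every inter-packet gap within `jitter` fraction of the mean
    return all(abs(g - mean) <= jitter * mean for g in gaps)

beacon = [(t, 640) for t in range(0, 3000, 300)]        # 640 B / 300 s
browsing = [(0, 1500), (2, 80), (2.5, 1500), (9, 400)]  # ordinary traffic
assert looks_periodic(beacon)
assert not looks_periodic(browsing)
```

Real traffic-classification systems are far more statistical than this, but the asymmetry is the point: the defender has to hide a pattern, while the observer only has to notice one.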
While encryption makes it impossible to know the contents of messages (and thus to use the contents as evidence), the ability to uncover the members/employees/cooperators of the bad actor would make it easier to investigate them and/or use other means of targeted surveillance to obtain evidence. It would also make it easier to infiltrate the bad actor, since one of the uncovered users could then be coerced into cooperation.
(All of the above assumes that the app/platform is used only by members of the "bad actor" and no one outside that organization is using it. It is completely different if there are other users, with the bad-actor users perhaps even being a minority.)
With the developers outside your jurisdiction, the problem is that while they might or might not be required to comply with the law, they can still be coerced/manipulated/otherwise encouraged into shipping a "patch" (backdoor) in the application.
I believe a much better solution would be to simply use any popular platform as a transport layer, with independent end-to-end encryption, possibly with some steganography as well. The simplest example would be users exchanging memes/cat pictures; this will not stand out in any ISP/VPN traffic analysis. It will also not stand out (that much) in content analysis by any entity that can decrypt/access the plain content. The images being exchanged would then contain embedded (and end-to-end encrypted) content.
While this is still far from perfect (you could imagine detection of repetitive images being sent, content/timing patterns, or actual analysis of attachments for steganography), all of those still require significantly more resources to work at massive scale.
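The embedding step can be as simple as least-significant-bit steganography over raw pixel bytes. Here's the classic textbook sketch; real stego tools are far more subtle, and the payload is assumed to be already end-to-end encrypted, so it looks like noise:

```python
# Toy LSB steganography over a raw pixel buffer: hide each bit of the
# (already-encrypted) payload in the low bit of one cover byte.
# Classic textbook sketch, trivially detectable by statistical tests.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover image too small")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit    # overwrite least-significant bit
    return out

def extract(pixels: bytes, length: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

cover = bytearray(range(256)) * 4          # stand-in for raw pixel data
secret = b"ciphertext goes here"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

This is exactly where the "battle" mentioned above plays out: naive sequential LSB embedding like this shows up under statistical analysis, so real tools randomize bit placement and try to mimic the cover image's noise distribution.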
An alternative would be to use a custom platform but have as many "external" users (in the sense of not working with/for the bad actor) as possible.
This is more about ordinary people maintaining privacy in their normal daily activities, in ways that aren’t too inconvenient to use 24/7.
If a bad actor has the knowhow to build a custom platform they sure have the ability to access the internet in a way where they can’t be found by IP.
Governments still like to push anti-privacy laws because they help catch non-technical criminals who don’t put in a serious effort to hide. This is why they hate “built in” privacy protections in consumer software and demand ways around it, because they help protect even technically illiterate criminals.
What I'm trying to say is, the important question is how much we want to erase privacy for the 99% of people who use normal consumer software in order to help police catch the ~1% (or whatever the percentage is) of criminals who also use normal consumer software and just happen to be criminals. The 0.01% of people who are criminals and have the resources and know-how to actively avoid detection by building their own systems are not going to be caught in trivial ways anyway (like tracking their IP to their apartment, VPN or no VPN, or correlating their personal social-media account with the connection they perform illegal activity from), so they don't matter.
You can outlaw math, and the result will be really bad.
The old-fashioned way: physical surveillance for the cases where they strongly suspect a crime. It's even easier these days with the new surveillance capability.
> How do we solve this problem?
It's called a search warrant.
If you only have point-to-point (client-to-server) encryption, with communications still passing through servers unencrypted, then, sure, you can get a warrant to force disclosure.
With end-to-end encryption, however, you can show up with a warrant all you want; it makes no difference, because they physically cannot hand you clear-text communications.
Same for encryption at rest. The strength of the encryption algorithms we have today makes it near impossible to recover the clear-text data.
That is not how E2E encryption breaking will be used.
For E2E encryption breaking, they won't ask for "future conversations between A and B for the next 2 weeks" they will say "give me all historical communication from this person between all of their followers for the last 5 years".
For the first time in the history of the world, we are in a time when not just your future communications but also your past ones, the ones you think are private between two people in the modern-world way (that is, over the internet), are not private except when E2E encrypted. Worse, by all accounts to a normal person, they appear private. I mean, we certainly act like the messages we send family over iMessage or whatever are private, and we assume the same level of privacy we'd be afforded for a phone call, except that privacy isn't actually present; and with laws like this, it will be even less present.
And not only is the former bad, but the latter is much, much worse.
The fact that they may be accessed by law enforcement agencies in specific circumstances does not mean that they are not private.
If you are seeking absolute privacy for all time, i.e. absolute secrecy, the only option now and since writing was invented is to leave no trace: either communicate face to face discreetly or hand deliver letters and destroy them after reading.
What has changed recently is that we are effectively communicating in writing for virtually everything (even a voice call is effectively "in writing", as it's data stored on a medium) and that the cost of storing those writings has dropped to zero. In addition, and this is the issue, encryption has reached a stage where what I suggested (hand delivering letters and destroying them after reading) has become easy and cheap for all communications if you want it, which creates big problems for law enforcement and security agencies.
I think this is a legitimate issue. The level of the debate, for example in this thread, is low and not helpful because, at least in some tech circles, people refuse to acknowledge real-world issues, and there is an extreme and utopian view that anything less than absolute secrecy is absolutely unacceptable.
The physical letters, yes—but if those letters were written in a code or private language it is well established in precedent that one cannot be compelled to translate them into plain language for the prosecution. Forget all the false skeuomorphic analogies about "locks" and "keys"; encryption is not a safe you put your message into, it's a set of private codes.
A warrant authorizes law enforcement to seize the physical evidence (letters, hard drives, memory chips, etc.). Making sense of the content afterward is entirely their problem, and that remains true even when making sense of the content is likely to be beyond their capabilities.
The particular mechanism being exploited in this law (revoking section 230 protections for companies that do not implement whatever "best practices" are promulgated by this new committee) is particularly bad because section 230 should never have been required in the first place. It should go without saying that a service provider is in no way liable for illegal content uploaded by users without the service provider's knowledge. Putting aside the fact that the idea of "illegal content" is itself nonsense in any country that purports to recognize freedom of speech, the occasional removal of unwanted content that is specifically brought to a moderator's attention does not imply that the service provider actively controls everything that is published on the platform. Logically, if a fully unmoderated site is not to be held liable for content uploaded by users then a partially moderated one likewise should not be held liable for content which has simply not yet been brought to the moderator's attention. The court erred in lumping partially moderated sites in with ones requiring full prior review and approval before posting. Section 230 was passed to address this miscarriage of justice, and as such revoking or weakening its protections, or holding them hostage as this EARN IT act would, is itself unjust.
Well, no. That's the issue!
They shouldn't be executing search warrants behind your back against some third party that happens to be storing your data. They should execute the search warrant on the person who owns the data they're investigating. If that person refuses to give up the data, they're breaking the law.
> Same for encryption at rest. The strength of the encryption algorithms we have today makes it near impossible to recover the clear-text data.
Same for encryption at rest: we already have laws for that. Get a search warrant. If they refuse to give you the data you have a warrant for, they broke the law. For example, if you have something in a safe, the cops won't just come into your house and crack your safe. They'll get a warrant first; then in theory you'll open it for them, or they'll have to crack it if you break the law and don't open it.
Warrants aren't issued for data, they're issued for property. A warrant authorizes law enforcement to search for and/or seize evidence (i.e. physical property) without regard for the owner's property rights. The owner doesn't have to give them anything or aid the search in any way beyond simply not interfering. Standing back and leaving them to break into the safe on their own does not violate any law. You might open it anyway just to show goodwill and avoid damage to the safe, but there is no obligation to do so, and unnecessarily demonstrating that you have the ability to open the safe may, in certain situations, amount to testifying against yourself.
Your script kiddie might break at the suggestion that refusing to cooperate means 2 to 5 years in jail but organised crime, or people who are risking a much harsher penalty, won't.
These are the real, down to Earth issues, not utopian discussions about individual liberties.
I'm not willing to give up my privacy just because other people are committing crime and the cops can't figure how to do their job. I'm in the US and the 4th amendment says I don't have to.
It's simple to read a letter. It's not difficult to eavesdrop on a copper phone wire. But it's impossible to intercept communications encrypted end-to-end, or encrypted at rest, considering the strength of the algorithms freely available today.
I find it horrifying how many Americans lack the basic understanding of what they are facing here. The STASI reserved for itself the right to read every letter and before them it was the GESTAPO who argued that, for the protection of national security ("Schutz von Volk und Staat"), all communication must be accessible by the secret police.
The American people have no idea what devil they are summoning.
I agree. There is a problem if they have open access to everyone's data.
> But it's impossible to intercept communications encrypted end-to-end, or encrypted at rest, considering the strength of the algorithms freely available today.
What about people talking to each other in private? That could be considered as "problematic" as end-to-end encryption. Should their phones be recording them so the government can snoop later? And for encryption at rest, what if someone has a safe that self-destructs if someone unauthorized tries to open it? Should that be illegal?
The cops can do their jobs in meatspace.
They used to be able to read letters with a warrant, IIUC.
With E2E encryption now widely available, no sane criminal will use any non-encrypted channel.
So, a lot of methods law enforcement used to find very helpful effectively no longer exist.
The steelman principle says that this is what you need to argue isn't a problem.
Disclaimer: I lean towards the EFF's position on this one, but I can see there are some debatable issues here.
They can park a van across the street from my house or office. They don't need access to the telco infrastructure to do their jobs.
The lack of foresight in previous generations is not an excuse to perpetuate their mistakes.
Decades ago, if they had evidence you were involved in crime, they could apply for a warrant to tap your phone, search your house, or read your mail.
That's focused surveillance, not mass, and it's under the oversight of a judge.
In a world of end-to-end encryption, they can't realistically find useful evidence by doing those things any more.
Parking their van outside doesn't help as much, either - you can conspire to commit crimes quite easily without ever leaving your house or having confederates come there, thanks to encrypted video chats.
Where will I commit this crime? Somewhere in meatspace or it doesn't matter. Cops can work backward from there.
Most people don't approve of making millions from insider trading, for example.
Or of cryptolocking beloved photo collections and holding them ransom for bitcoin.
You can't stop cryptolocking by tapping a phone or by banning encryption.
Exactly, this is a big practical problem and what these laws try to address (in a good or bad way...).
IMHO, this is a legitimate worry and opponents should indeed come up with practical alternative solutions. I think the EFF and others do not take a pragmatic approach but rather camp on ideological positions that will get them nowhere.