‘Five Eyes’ Nations Quietly Demand Government Access to Encrypted Data (nytimes.com)
478 points by aaronharnly on Sept 5, 2018 | 227 comments



Of course they are going to ask, and legislators will weigh the political cost/benefit to it.

My impression from the previous crypto wars and the skirmishes that have followed is that, as technologists, we take a very tactical view of technology, and underestimate the intentions of people who understand power and politics the way we understand information systems.

The way we see the security of a system, they see the sovereignty of a state. Just as incompleteness in our code can yield system level compromises, incompleteness in their ability to apply their rules to their territories and domains also yields compromises that makes the whole system untrustworthy.

I don't agree with what I perceive as their Hobbesian need for total control, where I think the localized, depth-first absolute authority of a state becomes malignant when it is applied breadth-first and in totality to all aspects of life, but you can sympathize with the urge without agreeing with it.

They should be mindful that post-Snowden, no matter how large the field we live in, people have seen the walls and bars at the perimeter, and that broad perception is likely a greater source of instability than any gaps in the ability of the state to enforce them.

Viewed this way, the 5Eyes statement seems unwise.


I don't have to sympathize with what I see as unconstitutional overreach. Who funds these agencies anyway?


5Eyes includes nations other than the USA (4 of them), so the constitution need not apply.


I know the use of other nations' spy agencies has long been a way to bypass US laws protecting its citizenry from our own agencies.

That doesn't make it right.


I often see this claim made without any supporting evidence. The Five Eyes agreement explicitly forbids its members from spying on each other. It facilitates sharing of information gained from spying in countries outside of the group.

https://www.pbs.org/newshour/world/an-exclusive-club-the-fiv...


>forbids its members from spying on each other

It isn't spying if they willingly exchange the information they have intercepted. Straight from the link you posted:

Yet there have been reports in the British press — amplified most recently by former NSA contractor and leaker Edward Snowden — that that’s not the case, that the Five Eyes spy on one another’s citizens and share the information to get around laws preventing agencies from spying on their own citizens.


That would be a good story if Snowden provided any documents to back it up. He didn't.

I trust the BRUSA documents over the word of a high school dropout who failed his analyst exam because he misunderstood the course materials and misdescribed PRISM because he misunderstood the documents he leaked.


They're not spying on each other, they're spying on us.


MUSCULAR, revealed by the Snowden leaks, is a program in which the UK's GCHQ broke into Google datacenters and exfiltrated information, which it handed over to the US's NSA. Because the NSA received the data from a foreign partner, they treated it as foreign data and did not scrub it for US citizen data.

For those who remember, this is the program with the famous "smiley face" drawing of where Google decrypted data at their network edge.


> MUSCULAR, revealed by the Snowden leaks, is a program in which the UK's GCHQ broke into Google datacenters and exfiltrated information, which it handed over to the US's NSA. Because the NSA received the data from a foreign partner, they treated it as foreign data and did not scrub it for US citizen data.

You misunderstood Snowden's documents. The UK tapped undersea cables entering their country. They did not break into Google's datacenters. According to Snowden's documents, the NSA is not allowed to collect and store US citizen data, no matter the source. There was at one time email metadata collection (from/to/when) from that source that was ended prior to Snowden's leaks according to his documents.


> Intercepting communications overseas has clear advantages for the NSA, with looser restrictions and less oversight. NSA documents about the effort refer directly to “full take,” “bulk access” and “high volume” operations on Yahoo and Google networks. Such large-scale collection of Internet content would be illegal in the United States, but the operations take place overseas, where the NSA is allowed to presume that anyone using a foreign data link is a foreigner.

https://www.washingtonpost.com/world/national-security/nsa-i...


That article's author is very confused. "Full take" is in reference to other interception programs in war zones. The documents about MUSCULAR itself show that it filtered to particular data types sent to international regions and filtered that data to just the information on selected individuals. The same source previously also provided email metadata collection on everybody, but that program ended in 2011. https://www.theguardian.com/world/2013/jun/27/nsa-data-minin...

Refer to the primary documents when you see articles like the one you posted with contradictory information like "full take" vs. "100,000 selectors" with a mere "millions of records every day."


The author is Barton Gellman, one of the reporters to whom Snowden leaked documents, and the lead at the Post for the team that shared the 2014 Pulitzer with the Guardian for the Snowden coverage.

The article you posted is not about MUSCULAR. MUSCULAR did not require FISA authorization; that was kind of the point of structuring it the way they did.


It doesn't matter who wrote the article. What matters is that it contradicts itself on what MUSCULAR is.

My article is about a data collection program that used MUSCULAR as a source.


When you tell me to "refer to the primary documents when you see articles like the one you posted", it's relevant that Barton Gellman is one of a few people who have reviewed Snowden documents directly, including documents that the Post chose not to publish.

I'm done with this conversation; it's clear you're not acting in good faith.


> it's clear you're not acting in good faith.

I pointed out that the article you posted said the exact opposite of what you claimed it said (in addition to saying what you claimed it said). How is that not acting in good faith? If the article is contradictory, you need to go to the primary documents, which were available from The Guardian.


In the US, telecom companies are required to provide wiretap capabilities to law enforcement. Do you think that's unconstitutional overreach?


A wiretap doesn't interfere with my ability to use the phone. Breaking security definitely interferes with my ability to use the internet.


What’s the difference? They’re both forms of eavesdropping.


Phones shouldn't have an expectation of privacy if they're over analog lines as you have no control over the content once it leaves you. I, personally, think that they're both overreach but, legally, I think it's the difference between someone listening to a conversation you're having in a public park vs. one that you're having in your bedroom at home. If someone can hear me in a public park, that's on me as I clearly didn't put thought into how accessible I was. If someone can hear me in my bedroom, though, then I have to question the integrity of my home.

Encryption was specifically created to guarantee the integrity of an A->B interaction. If we're compelled to break that, then the whole system is no longer able to be trusted and its integrity is shot.
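That A-to-B guarantee can be sketched in a few lines. The following is a stdlib-only toy (the SHA-256 keystream, and the `seal`/`open_sealed` names, are purely my own illustration, not any real protocol): only holders of the shared key can read the message, and any tampering is detected by the integrity tag, which is exactly the property a mandated backdoor would break.

```python
# Toy sketch, stdlib only. NOT a real cipher: the SHA-256 keystream below
# illustrates the structure of authenticated encryption, nothing more.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom pad from key+nonce (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt and authenticate: only holders of `key` can read or alter."""
    nonce = secrets.token_bytes(16)
    pad = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the tag before decrypting; reject any tampering."""
    nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message tampered with, or wrong key")
    pad = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

key = secrets.token_bytes(32)  # shared only by A and B
assert open_sealed(key, seal(key, b"meet at noon")) == b"meet at noon"
```

Real systems use vetted AEAD ciphers (e.g. AES-GCM) rather than anything like this, but the shape is the same: confidentiality and integrity both hang off a key that no third party holds.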


Would it be naive to suggest that perhaps we should never have trusted it?

Isn't this why spies always meet in person and go to great lengths to set up dead drops, etc.?


Of course not phones! I mean, "The President's Analyst" ;)

But from a professional, and WWII hero:

> 17. The greatest material curse to the profession, despite all its advantages, is undoubtedly the telephone. It is a constant source of temptation to slackness. And even if you do not use it carelessly yourself, the other fellow, very often will, so in any case, warn him. Always act on the principle that every conversation is listened to, that a call may always give the enemy a line. Naturally, always unplug during confidential conversations. Even better is it to have no phone in your room, or else have it in a box or cupboard.

https://blog.cyberwar.nl/2016/02/some-elements-of-intelligen...


Maybe not naive, but I think never trusting your own home is just a recipe for paranoia. You have no reason not to trust your home unless some actor had reason to do so from the outset. It's not a trivial matter to bug someone's home or bedroom without their detection unless you have free and clear access.

Edit: Just now realizing that you meant we shouldn't trust encryption and phones, not our home. Whoops. Leaving my response for posterity and lulz.

Spies meet in person because they're people of interest. They take the job knowing that nothing they do is ever really done in secret.


Why shouldn't we have trusted it? It's solid math.


Resting on a massive assumption... so really the opposite of solid math.


One only exposes you to the spy, the other exposes you to every other malicious actor on the internet.


If Apple includes an additional public key in the list of keys that can decrypt an iMessage, how are you now exposed to every other malicious actor on the internet?

There are things handset makers could do that would allow law enforcement access to a device without compromising the security of every person using that device. For example, they could add a connector to the logic board that grants access to the keys after a fuse is blown. It would only work on devices that law enforcement have in their possession and once they blow the fuse, the device is otherwise useless so you don't have to worry about using a device that's been compromised. It could even be designed so that the extracted key is encrypted and can only be decrypted by Apple after they receive physical possession of the phone and a court order.

A scheme like this would allow individual phones that law enforcement have in their possession to be accessed. It wouldn't allow mass decryption, so normal users are still protected. That seems like a reasonable compromise to me.
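For illustration, here is a stdlib-only toy sketch of the "additional key in the decryption list" idea from the start of this comment (the symmetric key-wrap and all names are my own simplification; real iMessage uses per-device asymmetric keys). The message is encrypted once under a random content key, and that key is wrapped separately for each recipient. Adding one more wrap, for an escrow key, silently grants the escrow holder access to every message:

```python
# Toy sketch, stdlib only. Real systems wrap the content key to per-device
# public keys; symmetric wraps are used here purely to show the structure.
import hashlib
import secrets

def wrap(recipient_key: bytes, content_key: bytes) -> bytes:
    """Toy key-wrap: XOR the content key with a hash of the recipient key."""
    mask = hashlib.sha256(recipient_key).digest()
    return bytes(a ^ b for a, b in zip(content_key, mask))

unwrap = wrap  # XOR wrapping is its own inverse

def send(message: bytes, recipient_keys: list) -> tuple:
    """Encrypt once under a fresh content key; wrap that key per recipient."""
    content_key = secrets.token_bytes(32)
    pad = hashlib.sha256(content_key).digest() * 4  # toy pad, max 128 bytes
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return ciphertext, [wrap(k, content_key) for k in recipient_keys]

def read(my_key: bytes, my_wrapped_key: bytes, ciphertext: bytes) -> bytes:
    content_key = unwrap(my_key, my_wrapped_key)
    pad = hashlib.sha256(content_key).digest() * 4
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

bob = secrets.token_bytes(32)
escrow = secrets.token_bytes(32)  # the quietly added extra key

ciphertext, wrapped = send(b"hi bob", [bob, escrow])
assert read(bob, wrapped[0], ciphertext) == b"hi bob"
assert read(escrow, wrapped[1], ciphertext) == b"hi bob"  # escrow reads too
```

Note what the sketch shows: neither sender nor recipient can tell the extra wrap exists, and whoever holds the escrow key can read everything, which is the crux of the disagreement in the replies below about whether this counts as targeted access or a backdoor.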


This would ruin the physical security of lost devices when the attacker is willing to ruin the device. This is worse than key escrow because it ruins the physical security of all devices everywhere.

At least with key escrow we could laughably pretend that the government would keep their master keys secure. You are suggesting that all portable devices ship with security that can be defeated with a screwdriver, and hoping nefarious people don't react by installing actual security in software.


> This would ruin the physical security of lost devices wherein the attacker is willing to ruin the device.

Only if they could also secure Apple's cooperation. That's what I was addressing when I said "the extracted key is encrypted and can only be decrypted by Apple".


Ok, you are right, I missed that. However:

Once Apple has the keys to the kingdom, what is stopping the government from making Apple give the government such keys, ensuring that no warrant is required?

What is stopping users from using software that is ACTUALLY secure?


> what is stopping the government from making Apple give the government such keys, ensuring that no warrant is required

The same thing that's stopping them from silently requiring Apple to include backdoors into every device today.

> What is stopping users from using software that is ACTUALLY secure.

It's the same as with a regular phone line. Users are free to speak in code or use an analog scrambler. Just because a particular interception technique isn't perfect doesn't mean it wouldn't be valuable to law enforcement.


No, both depend on how careful the eavesdropper is about safeguarding their special access. Both could be secure (as secure as planned, anyway) but both add some extra risk.

There are practical differences, sure, but it’s all the same principle.


One cannot automate physical wiretaps on hundreds of millions of people from Nigeria and steal their data or sabotage their infrastructure via access to their phone lines.


It's been a while since wiretaps were a physical thing.


They don't require access to your phone companies premises?


No, not anymore.


The difference is that we can secure Internet traffic. So we should. Just in case.


Your apparently Hobbesian view of FVEY's desire to decrypt data is misguided. Most people who work in politics or intelligence in liberal states aren't in it for the power. They are either curious, desire a life of excitement, or want to bring positive change to society.

They want to decrypt data because they want to protect people from threats. I'm not saying we should allow them to, but ascribing malintent is misguided.

Edit:

Before the reflexive downvotes, ask yourself: who is asking for the ability to decrypt? These people are making half or less of what they could make in the private sector.


Do they do this because they want to enforce their own view of utopia onto others? That excuses nothing.

>Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their consciences.

-- C. S. Lewis


Lewis never met a billionaire. The robber baron is never satiated. They are not in pursuit of money, or even power. They seek relative money and power. Their greed ends only when they are on top, at which point they become rightly paranoid of their peers. So no, I don't believe that life under the robber baron is any better than under "moral" leadership.


In which case they are still equally bad and thus it doesn't matter if they think they are doing it for benevolent reasons.


I disagree that we should trust that their intentions are "for the greater good" or even "somewhat reasonable." Government and law enforcement agencies, from top to bottom, are staffed by individual people. As we have seen time and time again in the corporate sphere, once we get a couple committees in place it becomes easy to let things that bother an individual slide by as responsibility rests on the larger group.

And, of course, once the group has managed to squash the voices of its individual members, it is open to exploitation by external forces, like money from lobbies (designed to sway the group) or bribes to individuals (often in order to get the few who disagree to align with the group).

Some of the people in government and law enforcement want to decrypt data to protect people from threats. Some of these groups are looking at their authority and seeing it erode, they are looking to decrypt data to preserve that authority. As we have seen in practice, others are looking to extend their authority (as in the case of projects like Echelon, etc.) by having real-time access to people's data.

While you might be able to agree with some of these people some of the time, I strongly doubt you can agree with all of them all of the time. And if you find that you cannot agree with all of them all of the time then, in my opinion, they probably should not have the ability to decrypt any of your data.

I think we all would do well to remember that while government and authority love to talk about pornography, child abduction and terrorism, the vast majority of people have nothing of the sort on their phones or computers. Protecting the data that everyone has (financial, private, corporate, etc.) should not be forgotten.


> They want to decrypt data because they want to protect people from threats.

This is not true at all. That is the smoke screen they have put up, but the real intent is to institute pervasive surveillance and have more power over people. They're lazy and have decided that defeating encryption is the easiest way to bypass the effort required to investigate crimes properly.

Attributing this policy drive to altruism is truly naive wishful thinking.


> Attributing this policy drive to altruism is truly naive wishful thinking.

And attributing it to malice is truly naive pessimistic thinking.

The reality is that both of those positions exist. There are people in government (and anywhere) that want to help or hurt people. I find transparency more helpful than hurtful, because we are in an age where there are public repercussions for visible actions.


That misses the point of rights in the first place. A proper system operates under the assumption that someone will try to abuse it, and has measures to counter it. Just as a banking website designed under the assumption that nobody will try to access it illicitly is hopelessly naive, so is making that assumption about the levers of power.

Rights and justice are based upon the same assumptions - no you can't take the prosecution's word that they are totally guilty because they look shifty and aren't from around here! They have to prove that there was a crime and that they committed it beyond a reasonable doubt.

Until they can prove that, say, secure communication in itself can kill anyone in the world like a magic spell, they have no case for this.

Pessimism is thus wise in the same sense that you shouldn't ask a stranger on the street to hold your wallet for half an hour.


Maybe so, but this transparency doesn't go both ways. Instead the government takes a paternalistic view and makes it as hard as possible to have themselves held to account (directly contradictory to the spirit of democracy.) For example, in Australia it has become more and more illegal for journalists to report on human rights abuses and material from whistleblowers about the government's activities. Earlier this year they tried to legislate 15-20 year jail sentences for journalists who broke the government's vague parameters for responsible reporting, under the guise of national security and preventing journalists working as spies for foreign agencies. Treating the free press as enemies of the state by default is deeply troubling, I hope we can both agree. This is what I see in these debates about encryption and backdooring. I'd feel much more comfortable being held to account for my communications if I felt empowered to hold my government to account in the same way.


I said most people in intelligence or politics in liberal states are good. Not all.

There are hundreds of thousands of people with top secret clearance, for fuck's sake. They're not all part of some conniving cabal.


I would say you're wrong when trying to say people are good/bad. For the most part the people in the machine don't matter. They are cogs in a machine that only see a very small amount of information and rarely know what the big picture looks like.

The output of the machine is what matters. If the output of the machine is bad, then everybody who is a part of it is an unwitting bad actor.

We have to make sure we don't make bad public policy machines.


> And attributing it to malice is truly naive pessimistic thinking.

My government is literally killing refugees in order to make a public statement that seeking asylum in our country is worse than staying in a country where you face death from bombs, lynch mobs or government execution.

I think my interpretation of this Five Eyes directive as a lazy power grab is being extremely kind to the people involved because I'm implying they have a plan and are aware of what they are doing.


> Most people that work in politics or intelligence in liberal states aren't in it for the power

This may be true, but even if so it is irrelevant. You do not need evil intentions from hundreds of thousands of worker bees to make something ugly. Just a few politicians or leaders on top are sufficient. I suspect in WW2 Germany most factory workers making gas chamber equipment were diligent, hard working, family loving fellows who did not know the full use for the product they were building.

And yes, I do not trust politicians who lie to the public "for its own good". I think most of those are in it for the power.


This is a valid point; and completely worth discussing. I was replying to someone that was saying something different, however.

In terms of lying to the public: I agree to a point. I think there are certain truths that can't be voiced for fear of enabling our adversaries. That being said, however, I think it was counter productive for our intelligence agencies to lie about the dragnet. It almost invited a Snowden like character and the inevitable public blowback wasn't worth it.


"They want to decrypt data because they want to protect people from threats."

That's "in it for the power".

Power doesn't look the same from the inside. Basically nobody (except college-age Mark Zuckerburg) goes around saying "I'm an evil Machiavellian genius that's going to fuck over the world for my personal benefit." Instead, the feeling of having power, from the inside, is the feeling of being able to make decisions for other people for their own good. It's the ability to deal with people as abstractions who should want things, whether they actually do or not. People can't be trusted to invest their own money, so we need regulations so that only the wealthy can invest. People don't know what they're searching for, so we need to correct their queries for them. People want lower prices, so we'll give it to them by turning the screws on our suppliers. People would collapse into chaos without law and order, so we enforce it.

Sometimes the powerful are even right in their views - after all, very often that's how they got to be in power in the first place. But that doesn't stop them from being resented, because the resentment stems from the fact that they are making decisions for other people that those other people neither consent to nor really want. Someone who is actually "not in it for the power" is someone who says "I trust you to live your own life how you want, as long as it doesn't prevent me from living how I want."


"Most people that work in politics or intelligence in liberal states aren't in it for the power."

True, but those directing and financing them are. Sorry, but this is exactly how being in power works. You don't spy on common people because you want to arrest them all now, but rather to keep an advantageous position "just in case". Power is all about maintaining an advantageous position with respect to an adversary or a potential one. Should you one day, say 10 years from now, become a political figure, the pictures someone collected 5 years ago of you dancing naked and stoned in a disco would instantly become a tool to destroy your political career, or to blackmail you for their gain. Knowledge is power. AI training aside, gathering all possible information about everyone is how a Cardinal de Richelieu would find enough to hang any of the 7 billion people living on this planet, or have them work for him 24/7.


This is why warrant requirements for wiretaps are much tougher than for most other search warrants. You must show other approaches didn't work.


That wasn't true in the case of tapping into our internet communications. There was no "targeted spying isn't working, so we need dragnet surveillance". They just saw something that would make their jobs easier and ignored the obvious moral issues, and the 4th Amendment, for some reason. "Good guys" don't need those limitations, I guess, and will never do wrong. (Except... our hegemony does do wrong by the world every day, and uses its might to resist any positive change that takes power from the powerful.)


> Before the reflexive downvotes ask yourself: Who is asking for the ability to decrypt?

People who are known for running the largest spying operation known to man, with utter disregard for privacy. People who oversaw dark sites? You tell me.

> these people are making half or less what they could make in the private sector

But they couldn't get the same level of access, they couldn't see what their ex is doing, for example.


How else are you going to parallel construct a case against someone if you can't intercept their communication using extralegal means?


How else are you going to make a compelling argument without cherry-picking an extremely rare event and implying it's the norm?


Do you have evidence that it is a rare event? We have evidence that it is indeed not a rare event, because the agencies have a specific term for it...


The burden of evidence is on the original claim if it is not widely accepted or reasonably obvious.

The only evidence is a lack of evidence. There are documented examples that surface on a rare basis.

The fact something has a term has no impact on the frequency of the event. Unless you think that a Dyson Sphere is something that's been observed.


I haven't ascribed malintent at all. I will say the people you describe have moralized their authority, as we all tend to, and I have described it without that filter. It's just incentives.


> Most people that work in politics or intelligence in liberal states aren't in it for the power. They are either curious, desire a life of excitement, or want to bring positive change to society.

I suspect you're correct. However, the systems we design must be strong enough to deal with the inevitable lone wolf or corrupt regime. If we rely on average people with good intentions being the only people in power, we will fall to the first outlier who shows up. This isn't a matter of pessimism or distrust. It's a matter of us having to be successful at defending every attack, every time, while they only have to succeed once. All it takes is one. You design the system not for the authorities you hope to have in power but for the ones you fear having power.

Governments have many tools, the most powerful of which is physical coercion, to gain access to data they have reason to believe exists. I want the serving and execution of a warrant to be expensive (and legible, except in the most rare, unique - and thus highly regulated - circumstances). If it is expensive then it will have to be targeted - resources aren't infinite. Making these powers trivially cheap and unnoticeable when exercised will only lead to widespread abuse and unwarranted violations of privacy as we have seen time and again whenever dragnet surveillance becomes cheap, easy and accepted in a population.

Nation States already have the power to inject silent updates and "hack" foreign agents -- everything they're asking for under these new laws. They generally do not need to break encryption or gain encryption back doors. They do full take on the data before and after it is encrypted and decrypted. This, for the most part, is expensive and thus targeted, for sophisticated intelligence targets. Governments now want to expand this power which has been traditionally reserved for targets which pose grave national security risks. Governments want to apply a military tool to civilian police investigations - effectively militarising police surveillance of all citizens. Are sufficient effective checks and balances even possible for such a powerful capability?

Add to that the fact that governments want us to pay to create and maintain this surveillance directly, thus hiding the true cost and scope of their investigatory activities, and removing a means by which we could lawfully challenge the implementation of such a system, by compelling all private companies to build in vulnerabilities on demand. Worse, once those vulnerabilities are in place in a system, all users of that system are vulnerable, not just the targets of an investigation, in effect making all users except the very first one on that platform cheap and easy targets. Eventually, this facility will be built into every system at design time (it's just another regulatory requirement), and even that barrier, the first target, will be gone.

The fact that they want to use these extremely powerful, hard to detect tools and techniques, developed for covert surveillance against foreign powers who pose a national security risk, against us common citizens should be concerning. The fact that they are attempting to push the burden of creating these tools onto the public, the public against which they will be used, and in so doing, make exercising this power extremely cheap and - beyond the first target on a platform, anywhere - effectively free should be downright terrifying.

Again, it's not about malintent; it's about protecting from the outliers - the abusers - because, while the abuse of a single individual or community can be extremely harmful to that individual or community, widespread abuse of such an investigatory tool is almost entirely impossible to fight. These tools defeat the very means by which people organise resistance against oppression: free, open and private communications.

We should think very carefully lest we put a surveillance ratchet in place that will become very hard to coordinate against by its very nature if abused. Given the history of "exigent powers" becoming normalised introducing such a system seems extremely dangerous.


This is a great comment. Thanks for replying.

I completely agree; and I assure you I do not think that I have all the answers or have considered every angle, but I do think about this line of thinking quite frequently and I just don't know how to square it.

On the one hand, I think we need intelligence to stop a cyber 9/11 (or worse, a cyber Hiroshima or holocaust), but on the other hand you're completely right that the outliers are going to show up. I don't know man. It's a tough one.

I tend to favour prioritizing the character of the people that get into those positions and making our political systems as robust as possible, but that's just a stab in the dark. What you've outlined is a real problem that I don't know how to solve.


> Most people that work in politics or intelligence in liberal states aren't in it for the power.

Could be, but you're making a VERY dangerous assumption about their lack of gullibility.

Technology is a difficult field to understand - just look at all the asinine content filtering proposals to get a feel just how out of their depth well-meaning politicians can be. Now think about encryption, an inherently much more difficult subject. The same, or similarly well-intentioned, politicians are going to be completely out of their depth. They can be goaded, led and nudged towards a goal they do not, and indeed can not understand.

Make no mistake. This proposal and its talking points are being fed to the talking heads by parties who want to outlaw end-to-end encryption.

The politicians talking about this are probably innocent of malice, but they sure as hell are guilty of heinous ignorance.


>Who is asking for the ability to decrypt? These people are making half or less what they could make in the private sector.

Many of these people are willing to take a lower salary to help others but history has shown that many, many others take lower paying jobs in government because it affords them power over other people.


Regarding your private sector point:

I looked into contracting in the UK for a national agency a couple of years back. The day-rates they were proffering were almost double a London day-rate (before negotiation; I can only imagine what could have been secured). There's a deeply invasive interview process, but they would cheerfully pay outrageous amounts to secure talent (and I wasn't even particularly good!). I assume not many people have the skill-set they're after and even fewer pass the checks. Perhaps it's different in the US, but that was very much the state of affairs in the UK circa 2015.


> Who is asking for the ability to decrypt? These people are making half or less what they could make in the private sector.

While I agree with your general point about not ascribing malign intent (at the very least, the misunderstanding about the security implications of weakening encryption seems primarily a case of Hanlon's Razor to me), the "making half or less what they could make" argument isn't convincing to me. There are those who'd happily sacrifice raw dollars for the power of being able to decrypt.


And of course there's always a certain proportion of them who manage to find ways to convert the ability to decrypt back into raw dollars.

There's always abuse. And since their activities are secret, well...


My perspective is that it's all ass-covering. When the next terrorist attack comes, they want to be able to say that they did everything they could to prevent it.


If ass-covering is the goal, then the plan there is to loudly ask for information while quietly ensuring that they won't get it, then when the next attack occurs they'll have an easy scapegoat.

If they actually get the information then the next attack will make them look worse.


The irony being that most of what they do to prevent it is actually more likely to encourage it.


Most people think they are good people doing the right thing, so in the context of their own thoughts this is pretty meaningless.


Regardless of their initial intentions, large intelligence organizations aren't immune to Pournelle's Iron Law.


> I don't agree with what I perceive as their Hobbes-ean need for total control

If you asked them many would say that a state must maintain total control within its territory or someone else will fill whatever vacuum remains -- the mafia, other states or their intelligence agencies, private cartels, etc. Once such agencies get a toehold they can grow their power and eventually challenge the dominant structure. In many cases the new boss may be worse than the old boss.

Politics is all about compromises. Police and intelligence agencies always sound like they are asking for total Orwellian control because they know they'll get only 1% of what they ask for -- so ask big. It's a negotiating strategy.


That might be true in other cases. But here, there is no negotiation, because we're not even allowed to know what's being negotiated. They won't tell us how often these powers are used, and they lie about how much benefit they provide.


What you describe already happened, it's just that the new boss moved into the old boss' office before people could object. We had an accountable system, and now we no longer do.

With mass surveillance I get the idea they know they get 99% of what they ask for. What's more, they mock us, because they know they can easily circumvent the remaining 1% through parallel construction.

Remember those NSA slides where they mock the public for willingly buying smartphones that can be tracked everywhere? Yeah, we're the morons, not the people who openly lie to the public and redefine language itself so that what they're doing isn't a crime against humanity.

It's as if they installed cameras on every street and mock people for driving, except of course, that too is now practically normalized under the guise of "safety".


>Police and intelligence agencies always sound like they are asking for total Orwellian control because they know they'll get only 1% of what they ask for -- so ask big. It's a negotiating strategy.

They have a level of surveillance that Orwell could not have imagined. Orwell imagined hidden microphones and highly visible cameras. We have all of those things and far more.

You see, if they negotiate 10,000 times and they get 1% of what they want each time, things move in their direction and eventually they end up with more than they originally wanted.


Ultimately these governments can break security at any level. It’s not just the encryption or the apps — if they require the OS manufacturers to cooperate with them, they can record all user input and output. Likewise hardware manufacturers.

It’s not just phones or computers, either — The UK was well known for having high CCTV density well before the proliferation of low-cost digital cameras; by my estimate it is now well within government spending limits to put all movement under surveillance by putting cameras on every corner which combine ANPR and facial recognition to cover pedestrians and cyclists as well as motorists; and laser diodes are so cheap every window, never mind person, can be surveilled with laser microphones.

This is also cheap enough for criminals to do it. I recently got (fake) scam blackmail emails demanding bitcoin under the threat that they had used my webcam to record me watching porn (duct tape over my webcam says they didn’t), but imagine a local crime gang doing that with a drone pointing at your window.

We have to change a lot of stuff in our society very quickly to keep us all safe. We need a world where none of us need secrets, because very soon we won’t have the ability to keep them. We also need the ability to survive ourselves breaking the law, because the law was created with the (at the time reasonable) belief that only important violations would be brought to the attention of the authorities, because most of us can’t get through the day without violating several [1], and because even though current state-of-the-art A.I. can’t automatically enforce all those laws, we should assume that is coming.

But it’s not just what changes, it’s also how fast it changes and how slowly we react: How long ago was it demonstrated that keys can be duplicated from a single image taken by a telephoto lens? And how many keys have been made safe against it since? The only thing keeping us safe is that even the bad actors aren’t keeping up with the tech. That isn’t good, because it means that whoever does use it will look, what’s the phrase, “indistinguishable from magic”.

[1] https://mises.org/library/decriminalize-average-man


The UK already has very serious ANPR coverage. I haven't been on a motorway without it in the last few years —many A roads too— but it goes much deeper than that. I'm in the middle of nowhere, and a dinky little B-road near me has a 4×2 grid of ANPR cameras monitoring traffic in both directions.

Most interesting are the justifications for these things. Widespread ANPR means it's easy to find people who are driving their cars without MOT, VED and insurance. People without those are liabilities to us all, so we don't want them on the road, right?

It's just a happy coincidence that the security services also get to monitor more and more movement in the country.

I wish they'd fund health and social care with as much enthusiasm.


> Most interesting are the justifications for these things.

The problem is not so much the _initial_ justification. It's the inevitable "control creep" [1]. When something bad happens, they'll just go "hey we have this vast surveillance network in place, why don't we just re-use it for XYZ". That is the fundamental reason why people need to be so vigilant and militant against mass surveillance. Whatever the initial justification was for putting it in place is irrelevant in the long term.

[1] Control creep is where the data generated for one form of governance is appropriated for another


I honestly believe the "creep" is baked in, [at least partially] funding the whole thing.

I can't see how the DVLA alone could justify plonking £10k of hardware and £30k of installation alongside a little rural road in the middle of Norfolk.


Yet the roads themselves are increasingly in a state of disrepair. Maybe the money wasted on this stuff could be the thing that finally gets people to care about it.


Those aren't pot-holes, they're traffic calming craters. They slow the terrorists down. Wrong kind of ice.

Ah crap, I'm running out of excuses.


>Widespread ANPR means it's easy to find people who are driving their cars without MOT, VED and insurance. People without those are liabilities to us all, so we don't want them on the road, right?

>It's just a happy coincidence that the security services also get to monitor more and more movement in the country.

Automated and near-perfect enforcement of minor civil infractions sounds pretty damn Orwellian to me.

I think you've got the scope creep backwards. They get all these systems in place by screaming about security and terrorists, then say "well, since the system is already here, let's use it to keep our citizens on a tighter leash". Then they justify it by vilifying these people who commit minor civil infractions as criminals, when in reality they're your neighbors.


There's a social phenomenon[1] where people are disproportionately critical of their socio-economic peers who try to cheat their way up the ladder. Benefits cheats. Tax dodgers. Petty smuggling. Daytime TV is littered —perhaps deliberately— with shows covering these sorts of crimes.

My point is you don't even need to wheel out terrorism every time to get this stuff through comfortably. You can just play the social injustice card.

[1]: I've completely forgotten the name of it. I've read a few studies on this, and it's just amazing how strongly people feel about others unfairly getting one over on them. If anybody reading this knows the name of what I'm talking about, I'd appreciate a reply to remind me.


Every petrol (gas) station has ANPR, which makes sense: you can't get too far before you have to fill up.


If you're on the run, you've probably changed your number plate. If you're just an opposition politician being monitored, then where you buy petrol probably doesn't reveal very much about your activities. And you're probably using a card to pay for the petrol anyway.


Sure; but it raises the bar for criminals. I.e., it makes being an effective criminal require more knowledge and more work. And that makes a huge difference in practice. How many criminals actually have good enough opsec to change the license plates on their car? I bet it’s well under 20%. And I know that an 80% solution kills me as an engineer, but I bet law enforcement sees an 80% solution as a massive win.

We technologists should know how much this stuff matters from the huge effect good design has on product adoption (or dark patterns on user behaviour). This is the same effect in action: changing defaults changes the behaviour of the majority.

Another example: People say that “if you make guns illegal only criminals will have guns”. Yet here in Australia very few crimes are committed using firearms. This is the same effect in action. (I’m not arguing for gun control - just that these laws have an effect)

And with that in mind, I think the reason why we’re finally seeing a big push from the 5 eyes is because finally, finally one of the big chat platforms (WhatsApp) has rolled out end to end encryption. That lowered the bar far enough that privacy from the government is becoming the default.

One implication of this way of thinking is that it changes where the battle lines are. To win, the government doesn’t need to make end to end encryption impossible. They just need to make end to end encryption a bit difficult and non-obvious. Doing that will probably push the % of criminals who use proper encryption back into single digit percentages. After all, if you can research and understand the implications of application and messaging security, you can probably make a better living working at an IT desk somewhere than you can from stealing cars. Law enforcement would probably see that as a huge win, even if all us techies can keep sideloading Signal or whatever.

Personally I don’t consider that good enough - I want a society where everyone has privacy. Not just those who have opted in to it.


I think a lot of petty criminals are driving vehicles which are not correctly registered. (I've heard about cases in which a cyclist has been hit by a white van, gone to the police with the van's number plate and been told: Oh, they don't seem to have registered themselves properly. Since no one's been killed we can't be bothered to investigate any further.) So a lot of this computer-based, large-scale surveillance is more effective against law-abiding political activists than it is against ordinary criminals, who drive second-hand white vans and pay for everything with cash.

From your last paragraph, I think we basically agree.


> After all, if you can research and understand the implications of application and messaging security, you can probably make a better living working at an IT desk somewhere than you can from stealing cars

I doubt that. I think the main thing keeping cars safe from the 1% or so who don’t care about the law or ethics of theft is that it’s almost impossible to get away with it. Those with the relevant skill and the willingness to be criminals probably just take an easier approach, like card skimming.

This belief is based on how much second hand cars are worth and therefore how few cars a thief would need to steal each month for a very big salary.


I think you can get away with it, if you know what you're doing, but a stolen car is worth a lot less than the same car sold second-hand legitimately. Probably you either have to sell it to someone who knows it's stolen, knows not to take it anywhere near a legitimate service centre, and is prepared to forfeit it if stopped by the police, or you break it up and sell the parts, or you have a way of smuggling it out of the country to somewhere where they don't care about where cars came from.


Right; and this was my point in the first place. The police don't have to make it impossible to get away with stealing a car. They just need to make it difficult and awkward. That's still enough to massively disincentivize car theft, which in turn has resulted in far fewer cars being stolen.

Likewise if they ban end-to-end encrypted chat apps from the app stores, I bet that would decimate the number of people who used them. Even if anyone could just get an android phone and sideload signal, in practice adoption would still fall low enough to make law enforcement happy. Even amongst criminals.


But these are privately owned by the garage to report drive-aways.

The context of this comment seems to imply these have something to do with the state?


It's the police who generally have the most up to date database and are also the only people really able to do anything with the information (other than court summons, I guess).

Regardless of who owns the ANPR, since their sole purpose (if you ignore the surveillance aspect) is cutting down crime, the data will end up with the state eventually.


They all feed into the National ANPR Data Centre:

www.npcc.police.uk/FreedomofInformation/ANPR.aspx

https://en.wikipedia.org/wiki/Automatic_number_plate_recogni...


No they don't. Those articles are referring to police ANPR cameras, not such cameras in general.


If this is a planned escape, a full tank and a couple of 20l jerry cans will get you from Land's End to John o' Groats, and back in a modern estate doing ~50mpg.


Or just swap your plate for one from the same make and model you found on eBay. You can trigger ANPRs all you want, and even the police vehicle-mounted ones won't be cause for much suspicion.


You don’t even need to buy a plate. If you’re gonna commit a crime, why not just swap it with someone else’s who has the same make and model?


It's not possible to legally purchase a plate for a registration you don't own. About 10 years ago the DVLA started to crack down on places printing plates without checking entitlement ahead of time.

I know we're already talking about the law, and there are always ways around this, but you're probably better off sticking a foreign plate on the car.


It is easy to purchase a number plate for whatever text you like, e.g. https://www.myshowplates.com/

I find this type of thing the easiest place to purchase number plates because they don't make you jump through a load of hoops with sending copies of the V5.


I did say legally. The Road Safety Act 2006 requires suppliers of number plates to be registered and, looking at the law, I see no provision for the sale of "things that look like number plates but they're really not, honest guv".

UK statute is painful to read and interpret though. I may have missed something.


You implied that the illegality of purchasing such a plate rested on the customer. I think it is perfectly legal to purchase such a plate, it's just not legal to sell one.


> Most interesting are the justifications for these things. Widespread ANPR means it's easy to find people who are driving their cars without MOT, VED and insurance.

And it doesn't even work; people use fake number plates.


This is always the problem with this stuff (and perhaps is even by design). They always justify it by mentioning terrorists etc, but it's really used to keep normal, mostly "law-abiding" people in line. Real "bad guys" will always find a way around it.


How can it not be trivial to detect that in most cases?

The system must match make, model and color against the number plate, and must also make sure that the real number plate isn't used ~simultaneously somewhere distant.


"match make model" sounds very difficult automatically from some traffic camera. "plate isn't used ~simultaneously somewhere" even if it is, which one is legit?

And do what, dispatch the police immediately to both plates? It's also a question of not be caught _for how long_


I read an article a while ago talking about this exact thing. Apparently the answer is yes, the UK police do notice if a number plate makes an impossible journey, and while they didn’t claim in the article that they automatically verify make and model, they do have the ability and motivation to send a police car after both vehicles in such cases.

Of course the story would be slightly more believable if not for the fact that shortly after I read it, the police sent me a speeding ticket for a car which I had sold nine months earlier.
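For illustration, here's a hypothetical sketch of the kind of impossible-journey check being described. The data model is a toy (position as miles along a single route); a real system would use timestamps and road-network distances between camera sites, and the threshold speed is an assumption:

```python
# Hypothetical sketch of an "impossible journey" check on ANPR sightings.
# Toy model: each sighting is (plate, hours, miles_along_route); a real
# system would use timestamps and road-network distances between cameras.

MAX_MPH = 120  # assumed cutoff: implied speeds above this are suspicious

def impossible_journeys(sightings):
    """Return the set of plates whose consecutive sightings imply an
    impossible speed, suggesting the plate has been cloned."""
    flagged = set()
    last_seen = {}
    for plate, t, pos in sorted(sightings, key=lambda s: s[1]):
        if plate in last_seen:
            t0, pos0 = last_seen[plate]
            elapsed = max(t - t0, 1e-9)  # guard against division by zero
            if abs(pos - pos0) / elapsed > MAX_MPH:
                flagged.add(plate)
        last_seen[plate] = (t, pos)
    return flagged

sightings = [
    ("AB12 CDE", 9.0, 0.0),    # seen at mile 0 at 09:00
    ("AB12 CDE", 9.5, 200.0),  # "seen" 200 miles away at 09:30: a clone
    ("XY34 ZFG", 9.0, 0.0),
    ("XY34 ZFG", 11.0, 100.0), # 50 mph over two hours: plausible
]
print(impossible_journeys(sightings))  # -> {'AB12 CDE'}
```

Note it only flags clones whose twin is also on the road; a cloned plate whose legitimate owner is parked at home triggers nothing, which matches the scepticism upthread.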


>tjoff posts on Facebook: wow, this week's vacation in the Bahamas is great!

takes plate from car


How much of a problem are unregistered drivers in the UK actually? Or is this one of those anecdotes-turned-hypothetical-epidemic things lawmakers use to rally support?


> takes car

FTFY


ANPR is "Automatic number-plate recognition"


"dinky little B-road near me has a 4×2 grid of ANPR cameras " How do you know they're ANPR?

"I wish they'd fund health" How do you know they're state funded cameras?


> How do you know they're ANPR?

Just a happy coincidence that a large portion of my job is interfacing booking systems with ANPR installations. The ones down the road are MAV cameras. Not cheap.

Yeah, they might be using them as CCTV. But that's a lot of wasted money.


Because the ANPR system is state owned.


> We also need the ability to survive ourselves breaking the law, because the law was created with the (at the time reasonable) belief that only important violations would be brought to the attention of the authorities, because most of us can’t get through the day without violating several

This is arguably the biggest problem of all. Selective enforcement is already a thing, and is only going to become a bigger problem. Just look at all the bullshit prosecutions that have happened in the UK over things said and done on social media.

Police are also increasingly keen on downloading the entire contents of people's smartphones (and prosecuting you if you don't give up the password). So anyone who comes to the attention of the police is at risk of having their entire life pored over to look for any other infringements.


That's an excellent reason to avoid smartphones.

Maybe Apple could fork the iPhone line with devices that retained nothing locally. That ran entirely in RAM, working with minimal data from iCloud. And which fully wiped RAM whenever not in active use. Basically like Tails.


> By my estimate it is now well within government spending limits to put all movement under surveillance by putting cameras on every corner which combine ANPR and facial recognition to cover pedestrians and cyclists as well as motorists; and laser diodes are so cheap every window, never mind person, can be surveilled.

That's a maybe. You are, however, vastly underestimating the time it would take for the UK government to go to tender over such a platform. Then you are overestimating by an order of magnitude the private sector's ability to deliver such a platform within a reasonable budget and within the next few decades.

Have you looked at government infrastructure projects? Thankfully they set a reassuringly pitiful benchmark.

For now..

I agree with the rest of your comment wholeheartedly.


> Have you looked at government infrastructure projects?

You assume that they work with the same enthusiasm on the projects that benefit us as on the projects that benefit them.


There are plenty of classified and military projects that don't go out for tender (national security overrides EU tendering regulations). They are quietly handled by civil servants who operate on career timescales far longer than any government's.


That’s certainly a fair point which I had not given due thought.

Of course, if the government was competent then we wouldn’t have this problem in the first place…


People have given up believing that these laws are created for their benefit... I think this breakdown in trust is partly to blame for the current state of the US (and UK) political systems.

Fixing the issues you list is part of regaining trust in the political system overall. If it doesn’t get fixed slowly and methodically, things will just get worse and worse until something snaps.


> We need a world where none of us need secrets

… which is impossible? This would require that the government never gets corrupt because you need secrets to get rid of a corrupt government.


It would be a world without governments, where no one could be dominated by anyone else. And yes, arguably impossible. I do like Vinge's bobble novels, though.


You also need secrets to be corrupt. Who watches the watchers? Literally everybody.

At least, that’s the good outcome.


Can you not be corrupt out in the open, yet hold a monopoly on violence that no one can challenge?


The monopoly on violence that defines a state is specifically the monopoly on legitimate violence; it is not a monopoly on capacity for violence but on the legitimacy of violence.


The legitimacy is a function of sufficient supremacy rather than any moral rightness. Otherwise organized crime and fundamentalists wouldn't be capable of reducing governments to failed-state status. The legitimacy is that any who wield it openly will be crushed, as if, say, the Hell's Angels decided to try to annex New York City by force of arms. If they couldn't be crushed, well, there is your new government by strongman.


> The legitimacy is a function of sufficient supremacy rather than any moral rightness.

It's a function of popular acceptance (or at least acquiescence), which can be achieved by any combination of perceived rectitude or overwhelming capacity (in practice, it's usually a blend of the two, and not just one or the other, in any stable state).


The last time I checked, facial recognition technology was not yet advanced enough for the type of system you imagine. Is there anything specific you can point me to that would support your 'estimate'?


Are you assuming that the facial recognition software has to distinguish between 65 million adults? Or something like “27 faces entered this road; there are seven possible exits from this road; using data from the cameras on those seven exits, which exit did each of these 27 faces take?”?

Because if it’s the former, I absolutely agree with you. I’m imagining the latter.


It feels weird to read stuff like this, really.

What they want is to be able to wiretap people, without them knowing. Because if encryption is what's bothering them, you can get a warrant, seize the phone and/or computer, and make the owner unlock it / give you the keys, by law.

It is perfectly logical and lawful. However, if unwarranted (in the sense of without a warrant) wiretapping is involved, then yes, encryption "hinders the law enforcement". Except it doesn't. Because as mentioned earlier, just get a warrant, and make the owner unlock / give you the key, by law.

It doesn't hinder law enforcement; it hinders the intelligence agencies' work and makes it less invisible. And I kind of think that's a good thing too.


Following up on your point, NYT really swings-and-misses here. (I suppose they could also be engaging in gaslighting or dog-whistling ...)

> Ordinary Americans — including President Trump’s former lawyer, Michael D. Cohen — are also increasingly using encrypted apps to conduct delicate conversations to prevent monitoring by the government or others.

"Ordinary Americans" are not only using "encrypted apps" to conduct "delicate conversations." People, regardless of jurisdiction, are using these applications because they expect that their private conversations are and will remain private -- especially in a world where digital interactions are replacing/will continue to replace face-to-face interactions.

If I have a conversation with someone over coffee at my kitchen table, I wouldn't (necessarily) consider that conversation to be "delicate." Regardless, I wouldn't want any person or government entity to be able to listen in -- surveillance inhibits freedom of expression and the results are often without context.


> […] make the owner unlock / give you the key, by law.

That only works if you can threaten to put the owner into jail for not complying. If you're trying to spy on communication between two people outside your jurisdiction, you're out of luck.

(that doesn't mean I support the US government's attempts to undermine secure communication)


Refusing to provide your passwords is itself a crime in the UK, not to mention obstructing an investigation and god knows what else they decide to stick you with for trying to have some privacy. I believe the idea is to threaten you with more jail time than you would receive for the crime you possibly committed.


Luckily, with modern key agreement protocols the user never knows the decryption keys at all. Encryption of data at rest is another matter. We need better duress mechanisms there, and for those to be effective we need a big cleanup of how many applications store user data.


That makes no sense to me. There must be some mechanism for accessing messages. Whatever that is, it's those credentials that you'll need to produce. Or rot in jail indeterminately.


The term for the concept is forward secrecy, or often "perfect forward secrecy". It seems like magic to me, too, and I'll probably be reviewing the basics tonight. I couldn't begin to tell you how the trick is accomplished.

Somehow, an attacker can have your whole conversation log (encrypted), including the key exchange, and be unable to retrieve the key used, even if he has the credentials you used at the time the key was generated.

The real crux of the question may be, "How does Diffie-Hellman work?" (Well known key exchange method.)
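For the curious, the trick can be sketched in a few lines. This is a toy Diffie-Hellman exchange with the textbook parameters p = 23, g = 5 (not secure; real systems use ~2048-bit prime groups or elliptic curves):

```python
# Toy Diffie-Hellman exchange. NOT secure -- demonstration parameters only.
import secrets

p, g = 23, 5  # public parameters; an eavesdropper is assumed to know these

# Each side picks an ephemeral private exponent and keeps it to itself.
a = secrets.randbelow(p - 2) + 1  # Alice's secret
b = secrets.randbelow(p - 2) + 1  # Bob's secret

# Only g^x mod p crosses the wire, so a recorded log shows A, B, p and g.
A = pow(g, a, p)
B = pow(g, b, p)

# Both sides derive the same key: (g^b)^a = (g^a)^b = g^(ab) mod p.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key

# Forward secrecy comes from deleting a and b after the session: recovering
# the key from the recorded A and B alone requires solving a discrete
# logarithm, regardless of any long-term credentials seized later.
del a, b
```

The exponents never leave each machine and are discarded after use, which is why even full credentials plus a complete traffic capture don't recover the session key.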


Yes, I understand PFS, ratcheting and all that. But you're talking about stuff captured off the wire. And yes, that can be protected with PFS. Even OpenVPN does that. But I thought that we were talking about devices and stuff stored on them. Even if all transport had PFS, devices and encrypted files all have passphrases and/or secret keys.


I've been curious about one aspect every time I've seen this law mentioned.

Theoretically, let's say you use VeraCrypt or something similar to create a password-protected volume inside a big .mp4 file using steganographic techniques. A second secret password is set, which can decrypt another hidden volume which contains the actual sensitive data.

You give out the password for the first volume under duress.

Would that be enough to satisfy the law? If so, isn't it undermined?



They all know about VeraCrypt, and passwords for hidden stuff.


Yes, but the point I'm getting at is whether someone could hypothetically still be outside the law if they gave access via the first password, but the interrogators suspected (but could not prove) that a second volume existed.


They don't need to prove anything. They just imprison until you reveal the password for the hidden volume. And if there's really no hidden volume, then you're SOL.


Max tariff in the UK is 2 years, or 5 in terrorism cases. Does anybody have any actual insight into this specific UK law, rather than guesses?

Once again, this is in context of UK law. So speculation about rubber hoses in dungeons isn't particularly relevant (luckily).


> […] a crime in the UK

I don't think somebody living outside of the UK cares. The threat only works if that person lives within UK jurisdiction. Not everybody does.


It seems to work in the US. There's a man in Philadelphia who's been jailed for years, charged with contempt of court, because he claims to have forgotten the FDE key for his macOS box, which investigators believe is loaded with child porn. And so he periodically sees the judge, who extends his sentence for contempt. There's apparently no limit under current US law.


Well, a lot of people live in EU jurisdiction. And at least until March 2019, a UK warrant can cause you to be arrested by any EU police if one of the crimes covered by the warrant is also illegal in the country where you're arrested.


How can they prove that you know the password? What would the consequences be if you legitimately forgot or didn't know the password?


Very likely, you just stay in jail until you remember. Or whatever the maximum is, if there is a maximum.


It comes down to whether a judge believes you or not.


Tacking on to wereHamster's comment, encryption also irrevocably hinders investigators if the suspect is dead and the investigators are attempting to discover any potential co-conspirators.


>And I kind of think that's a good thing too.

Not just a good thing but isn't it really the point of the whole exercise?


Let's not kid ourselves. Surveillance moves like this are about control, not security, and especially not about national security. If anything, moves like this actually weaken national security by forcing bad standards and backdoors on people.

The 1946 UKUSA agreement that officially created the Five Eyes in the first place, post-Atlantic Charter, needs to be completely re-evaluated and potentially scrapped.


As I see it, any system that can be compromised to pwn malefactors - even the most conceivably horrible terrorists and criminals - cannot be trusted. And notwithstanding all the slander and conspiracy theory, Tor is perhaps the only working example of a compromise-resistant system. Unless it actually is backdoored, anyway.

Obviously, the Five Eyes don't see it that way. But I gotta wonder how commonly Tor is used among TLAs, and how the debate goes, if it is. Because this would destroy Tor. Unless operators were totally anonymous, and relays only stayed up until targeted.


I thought it was decided that Tor was already compromised, because the Five Eyes intelligence apparatus already controls more than 50% of the exit nodes, giving them almost complete insight into where all the traffic originates?


They rely on Tor themselves, they have a strong incentive to disclose/patch any major flaws in the protocol. They might exploit smaller flaws for a single operation, but they probably have more to earn from a healthy Tor network.


That's the dogma. But do we really know that?


We, as in the public, can probably never be 100% sure of that, but looking at where the project started and the current state of anonymous networks, there is no real alternative. They are definitely using Tor to make attribution harder when running operations. They benefit from Tor being open and used by everyone else; it is much easier for them to hide in the noise of all the other traffic.


Yes, I do agree. However, some say that's just the cover story, and that Tor overall is a honeypot. Or at least, that Tor is a honeypot for all users except US government operatives. There's no way to be sure, right?

As far as alternatives go, maybe they have something like Tor (onion routing) or I2P (garlic routing) that uses covert channels. It could even be running on government-controlled Tor relays. Or maybe installed as hidden malware.

That seems unlikely, of course. But remember when allegations that ENIGMA had been broken were dismissed as conspiracy theory.


If Tor is just a honeypot, then when does it pay off? There are tons of illegal activities going over Tor right now, including truly awful stuff like terrorist attack plotting and recent pictures of child abuse. If Tor is surveillable, why isn't that surveillance being used to catch and prosecute those people? What are they waiting for?

I will say (while acknowledging that I can't prove this) that I have friends who work in national defense and law enforcement, for whom Tor is an impediment. I've never heard them talk about a magic decrypt button; quite the opposite. So if Tor can be decrypted, it is a capability that is closely held and rarely used.


There is lots of horrible stuff on Tor .onion sites, yes. But there was a lot more of it a few years ago. Given general technical cluelessness, even among assholes, much of it was hosted by a few services. Such as Freedom Hosting. But it and some newer ones were compromised, run for a while as honeypots, and then taken down. There aren't really that many independent .onion sites with technically competent operators. Some of the hard-core child porn sites, perhaps, and some of the persistent dark markets. But who knows which of them are honeypots? I mean, PlayPen ran as a honeypot for months, with no interruption in the sharing of child porn, plus infecting users with phone-home malware.

It's not that there's a "magic decrypt button" for Tor. However, it's very likely that the NSA and GCHQ, at least, have some capability to identify Tor .onion sites and users. But they arguably don't want to reveal capabilities, and so are very careful about disclosing information. To some extent, that happens under programs like the DEA's SOD (Special Operations Division) and its parallel construction. But on the other hand, recall that the NSA was cagey about revealing intercepts that could have prevented the 9/11 attacks. Or that charges against the Weathermen were dropped in 1973, after the (then unnamed) NSA got squirrelly about its intercepts being introduced as evidence.

Overall, I'm relatively confident that Tor isn't fundamentally backdoored. But there's no way to know what's going on with any .onion sites that you access. They could be FBI honeypots. Or Russian honeypots. Or independent criminal honeypots. You gotta treat them all as radioactive. As sources of malware and worse. That means at least using Whonix, running on a Linux host machine. And better yet, a dedicated host, used only for Tor and other iffy stuff.


Even if that is true, what's your alternative? Don't use Tor?

It's dangerous to scare people away from Tor on the basis that it might have vulnerabilities, because anything else is certainly worse.


Yeah, that's another key point. What is the alternative?


I2P maybe?


It's still too small.


Tor has fewer than 7k relays[1]. Last time I checked[2], I2P had over 20k.

I2P has far fewer users, but every user is a router by default. Tor users have to opt-in to be a relay.

[1] https://metrics.torproject.org/networksize.html

[2] http://stats.i2p


Maybe I ought to take another look at I2P. One thing that I don't like about it is how every user is a router. And every router is typically discoverable by every other router. That makes users stand out a lot more than Tor users do. Basically as much as Tor relays do. And then there's the scarcity of clearnet exits. And indeed, discrimination against them as bandwidth stealers.


A couple other points. So the Five Eyes certainly have an incentive for dominating the Tor network. But if they actually use it, they also have an incentive for not dominating it too heavily. I mean, that was the whole point of releasing Tor as open-source. Also, don't the Russians and the Chinese also have an incentive for running Tor relays, in order to compromise their adversaries' users?


Gotta cite for that?

And actually, your comment makes no sense. Controlling exits shows where traffic goes. Exits don't connect directly with users, but rather through middle relays, and either entry guards or bridges. So adversaries either need data from all three relays in circuits, or they must control both ends, and be able to signal through circuits. That's how CMU researchers pwned users and .onion sites, using the relay-early bug, which allowed intra-circuit signaling.


I believe they're referring to traffic-analysis-style attacks. You don't have to know the content of the traffic if you control both ends of a circuit (the entry guard, and either the exit relay or the hidden service) and can monitor the size and timing of individual packets. This doesn't deanonymize on its own unless you control the hidden service, but it can be combined with other data to link anonymous identities to the IP connecting to the entry node.
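The timing side of that attack can be illustrated with a toy sketch. This is not a real attack tool; the flows, delay model, and matching window below are invented for illustration, and real correlation attacks use far more sophisticated statistics over live traffic.

```python
# Toy end-to-end timing correlation: an observer records packet timestamps
# at the entry and at the exit. An exit flow that is roughly a constant
# time shift of an entry flow is a likely match for the same circuit.

def correlation_score(entry_times, exit_times, window=0.05):
    """Fraction of entry packets that have an exit packet arriving within
    `window` seconds of one fixed estimated network delay."""
    if not entry_times or not exit_times:
        return 0.0
    # Crudely estimate the delay from the first packet of each flow.
    delay = exit_times[0] - entry_times[0]
    matched = 0
    for t in entry_times:
        expected = t + delay
        if any(abs(x - expected) <= window for x in exit_times):
            matched += 1
    return matched / len(entry_times)

# One entry-side flow and two exit-side candidates: the first is the same
# flow shifted by ~0.4s with small jitter, the second is unrelated traffic.
entry = [0.00, 0.31, 0.95, 1.40, 2.10]
exit_same = [0.40, 0.72, 1.34, 1.81, 2.49]
exit_other = [0.10, 0.55, 0.90, 1.60, 2.80]

scores = {name: correlation_score(entry, t)
          for name, t in [("same", exit_same), ("other", exit_other)]}
```

The matching flow scores far higher than the unrelated one, which is the whole idea: no decryption needed, only observation at both ends.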

That being said, what they're referring to is pretty old research and I haven't heard about any evidence proving any agency controls a large amount of the network.

For a more current account of modern attack methods against Tor this paper[1] is a pretty recent compilation of researched attacks. The Tor developers actively try to defend against these but you should still be careful, it isn't researched enough yet (read: Tor isn't that old) to defend against resource rich attackers.

[1]: https://arxiv.org/pdf/1803.02816.pdf


I hadn't seen that. Thanks.

It's a great review. But I've read much of the covered literature. And I'm not aware of anything published which shows traffic analysis being effective in practice. On the actual Tor network. If there is, I'd love to see it.

Also, as I've said, it's dangerous to rely solely on Tor. It's trivial to identify all Tor users, from ISP data. And with a honeypot website or .onion site, an adversary can modulate to help with traffic analysis. But if users are hitting Tor through nested VPN chains, the universe of traffic to analyze becomes much larger. And so false positives interfere with correlation.


> any system that can be compromised to pwn malefactors

That's kind of the system we have for everything else that isn't digital communication. I don't support backdooring but this line of argument, in itself, is not likely going to convince policymakers.


At some point, we can forget about convincing policymakers, and just build stuff that works.


Well, there is the little detail that we greatly rely on, benefit from and live in a society of laws.


In my considered opinion, "society of laws" is generally just a fantasy. Maybe it works well enough if you're privileged and/or keep your head down. But generally, laws are whatever the powerful say they are.

Edit: Also, it's arguably moral to violate obviously unjust laws. I mean, that's how the US was founded.


I wouldn't recommend it as a way to win a message board argument, but spending some time living in a society-of-really-not-laws tends to disabuse one of this particular considered opinion.


Yeah, I get what you say. I lived in Mexico for a while. Being a gringo helped some with the corrupt police, as long as I had the necessary cash. But eventually I got too worried about being targeted by kidnappers.

Still, a more-or-less lawless Internet is arguably distinguishable from lawless society overall.

Edit: Also, my perspective is colored by living with the War on Drugs for some decades. Having friends spend time in prison. Helping support others to avoid prison. Reading about millions more imprisoned, predominantly minorities. So I'm rather cynical about "society of laws".


A society governed by the rule of law can have bad or outright unjust laws. The orthogonality of these things is a bit counter-intuitive.


The War on Drugs has been far more than a "bad law". It's arguably fucked up at least two generations of minority Americans, mostly black men.

And some years ago, at perhaps the peak of that war, we had a US administration that dealt in illegal drugs in order to buy weapons illegally from declared terrorists, and then illegally provided said weapons to counterinsurgency forces, explicitly violating the express will of Congress. And then the Vice President, a former CIA Director, who was nominally in charge of the operation, was subsequently elected President. And some years later, his eldest son, with no obvious qualifications for office, was elected President for two terms, and brought to his administration many of the advisers and operatives who had carried out the drugs-for-weapons program.

I mean, it's hard to make up stuff like this!

What do you point to, from the 70s through the present, that exemplifies the "rule of law" in actual practice?


I'm seeing laws increasingly implemented and used to protect the rich and powerful from the plebs, and am rapidly losing any faith in the justice system.


Exactly.

But good luck convincing policymakers to be beholden to such plebian concerns. Does nobody remember any time that such blanket surveillance was abused?


Surely the final solution to this problem is a community-based one - one that decentralises the tech giants?

I'm still struggling to figure out why a cohesive, widespread, community-driven solution hasn't emerged yet. Anybody have any ideas as to why?


Technologically, it's end-to-end encryption with forward secrecy. Done. We have lots of commercial and non-commercial products that do this, from PGP to Signal. Even WhatsApp does it.
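The forward-secrecy part of that is worth spelling out. A minimal sketch, using classic finite-field Diffie-Hellman with deliberately tiny, insecure demo parameters: each session uses fresh ephemeral secrets that are discarded afterwards, so compromising a long-term key later reveals nothing about past sessions. Real protocols use X25519 or large MODP groups plus authentication (e.g. Signal's ratchet); this only illustrates the principle.

```python
import hashlib
import secrets

# Toy ephemeral Diffie-Hellman to illustrate forward secrecy.
# WARNING: these group parameters are tiny and INSECURE, demo only.
P = 0xFFFFFFFB  # largest prime below 2**32
G = 5

def session_key():
    # Each side generates a fresh ephemeral secret for this session only...
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    A = pow(G, a, P)        # Alice sends A to Bob
    B = pow(G, b, P)        # Bob sends B to Alice
    k_alice = pow(B, a, P)  # both sides derive the same shared secret
    k_bob = pow(A, b, P)
    assert k_alice == k_bob
    # ...hash it into a symmetric key, then let a and b be forgotten.
    return hashlib.sha256(str(k_alice).encode()).hexdigest()

# Two sessions yield independent keys; there is no long-lived secret
# whose later compromise would decrypt recorded traffic.
k1, k2 = session_key(), session_key()
```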

Eventually, the elderly, uninformed US government is going to pass a law requiring a back door. And two things will happen:

- someone nefarious will get the private key, and do something nefarious with it. (Hopefully no one will die)

- offshore private messaging will become a thing.

There's a market for both insecure-but-endorsed and secure-but-illegal, to be sure. And the Five Eyes will probably fight for decades to squash private messaging.

For a worst case scenario, see the war on drugs for an example of how this could go.


> Eventually, the elderly, uninformed US government is going to pass a law requiring a back door.

Eventually, the elderly, uninformed governments are going to die off, to be replaced with younger governments. Still uninformed, but hopefully less tyrannical.

> offshore private messaging will become a thing.

This won't have to happen. Just because the law says there has to be a backdoor doesn't mean open source software developers will comply. Develop your software anonymously, release it anonymously, and run it wherever you want. It doesn't need to be a service provided by a company in a foreign land, it can just be software you run on your own computer.


> Still uninformed, but hopefully less tyrannical.

I don't think this is a reasonable thing to just hope for. Historically, governments have always had tyrannical streaks and I don't think it makes much sense to simply assert that "this time it'll be different". It probably won't.


I think policy just lags social acceptance.

It never used to be legal to be homosexual, for example, then it became socially acceptable, and then a generation of politicians had to die off, and then it became legal.


> It doesn't need to be a service provided by a company in a foreign land, it can just be software you run on your own computer.

Problem is, in a worst case scenario ISPs could be compelled to proxy all traffic through government boxes for analysis. At that point any suspicious activity will probably be frowned upon.


That's why it's gotta be spread as malware, for plausible deniability, and use covert channels, to make detection difficult.


Yeah, I recently researched a piece on secure messaging apps. Briar and Ricochet are the only arguably secure ones. They're both P2P, and use Tor .onion services. Tox and Ring are also both P2P, but peer IPs are readily discoverable, so users are vulnerable to identity compromise. For the rest, all bets are off. Signal, Telegram and WhatsApp say in their privacy policies that they may disclose such account information as IP address and phone number when legally required. And there are no others that protect against identity compromise.


To put it simply: It's easier said and theorised than done.

The easiest to use, most consistent and stable system is always going to be a centralised system, and these cost money. Once money is involved, it starts to become a company, bound by whatever laws countries want. Unless someone can solve the known problems with decentralised systems while simultaneously making the system as easy to onboard and use as the tech giants can, we will never have a truly competitive open-source equivalent to the products of the tech giants.


> as easy to onboard and use as the tech giants

It doesn't need to be as easy to onboard and use. Freedom from censorship and surveillance is worth an awful lot of inconvenience in initial setup, and more people learn that every day.


- winner-take-all economics created the giants

- similar forces drive techies: would you rather make Facebook for free or for a billion dollars?

- tech culture doesn't understand community building

- some censorship / legal control is absolutely necessary lest you become the child porn network, or drown in spam


> some censorship / legal control is absolutely necessary lest you become the child porn network, or drown in spam

Arguably, "becom[ing] the child porn network" is the only proof that you can't be compromised. Because assholes will eventually migrate to the most secure options.


The assholes migrated to Tor. They were wrong; their migration didn't act as a proof. You can always be compromised a dozen different ways if you're breaking the law (in the US) and your adversary has the resources and reach of the US Federal Government. You're fucked, period, if they're on to you and determined. We keep seeing this demonstrated over and over and over again. There are always flaws somewhere.


As I read the evidence, you're only fucked if your OPSEC is weak. Sure there have been lots of high-profile takedowns. But I only know of one case where a bug in Tor per se compromised users. That was CMU exploitation of the relay-early bug. That's how they got PlayPen. And then they ran it as a honeypot, serving phone-home malware, and nailed lots of users.

But in every other case that I'm aware of, OPSEC failure led to pwnage. Freedom Hosting. Silk Road 1. Sheep Marketplace.

So can you point to another case where defects in Tor design or practice got people pwned? I doubt it. But even so, I am concerned that the absence of such news just reflects parallel construction.

And finally, it's not enough to just rely on Tor. In my opinion, one should always use Tor through a nested VPN chain, comprising servers from at least three different VPN providers. That way, adversaries must compromise both Tor and the VPN services. Also, it's crucial to never run servers in places that can be associated with you. Because the more traffic there is, especially if an adversary can control traffic flow, the easier it is to do traffic correlation.


> you're only fucked if your OPSEC is weak

The problem is that actually implementing not-weak OPSEC is much harder than most people understand (even those with technical backgrounds). For example, is your "secret" activity on Tor easily tied to your non-secret activity by a search of network entry/exit times (i.e., your "secret" activity is the complement of your "non-secret" activity)?
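That entry/exit time idea is a classic intersection attack, and a toy sketch shows how fast it narrows things down. The users, hours, and log format here are made up purely for illustration; real adversaries work with ISP session records over months.

```python
# Toy intersection attack: an observer knows when each subscriber's Tor
# connection was active, and when a pseudonym was active. Each additional
# observation shrinks the set of subscribers whose online times cover
# every posting time.

def intersect_suspects(online_log, post_times):
    """online_log: {user: set of hours online}; post_times: hours at
    which the pseudonym was seen. Returns users online at every one."""
    return {u for u, hours in online_log.items()
            if all(t in hours for t in post_times)}

online_log = {
    "alice": {1, 2, 3, 8, 9},
    "bob":   {2, 3, 4, 8, 11},
    "carol": {2, 8, 9, 10, 11},
}
suspects_after_2 = intersect_suspects(online_log, [2, 8])     # still everyone
suspects_after_3 = intersect_suspects(online_log, [2, 8, 9])  # bob ruled out
```

Two observations rule out nobody here, but the third already halves the field; with enough observations, one user remains.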

I recommend Zoz's DEFCON talk, "Don't Fuck It Up!"[1], for a very good overview of how hard OPSEC has become.

[1] https://www.youtube.com/watch?v=J1q4Ir2J8P8


True. Good OPSEC is nontrivial. But in my case, all of my online activity uses part of the same nested VPN chain. Then I branch the chain for opennet vs Tor traffic. So an adversary would need to get data from multiple VPN providers just to know what involved Tor. And it's not all that hard.[0] Also, about OPSEC, see my review.[1]

0) See https://www.ivpn.net/privacy-guides/advanced-privacy-and-ano...

1) https://www.ivpn.net/privacy-guides/online-privacy-through-o...


Then there is convenience. The average user prefers low cognitive load when using computing equipment or applications. Certain decentralised approaches are a bit more difficult to understand and/or get working. Same goes for security.


Well, there's Tor, I2P, and good old Freenet. But I2P is too small, and Freenet users are too readily targetable. And once suspicious peers can be globally targeted, it's hard to imagine how even decentralized systems could be viable.

The best I've come up with is a P2P mix network which used some covert channel. For plausible deniability, it would need to spread as malware, pretty much like WannaCry did (both virus-like and worm-like). But obviously, it wouldn't do anything horrible on inadvertently infected systems, except perhaps relay some traffic.

And yeah, it's a horrible idea. But what else is possible, if using unauthorized encryption itself becomes illegal?


I think this is quite similar to ad-blocking. If ads had stayed small instead of becoming assholes, nobody would have adblockers. And if the intelligence agencies spied on no one without a court order, encryption wouldn't be that interesting either.


Not really. Before the 80s brought ubiquitous personal computers, and academic cryptography became a thing, the NSA and its friends had pretty much a monopoly on strong encryption. That is, encryption was implemented by dedicated hardware. There weren't that many manufacturers, and they were under intense pressure to only sell strong encryption to the NSA and friends, and sell backdoored stuff to everyone else.

So anyway, we went through this in the 90s (the Clipper Chip). That died down, in part because terrorists and criminals weren't really using much encryption. But now we have iPhones with strong encryption, and TLAs and LEOs are seriously freaked.


This seems like a pretty empty threat. The government already has the authority to demand lawful access. "Lawful" includes a warrant. If the government wants to show up with a warrant, I expect companies to aid the government in gaining access to legally-relevant data. If they want help in a broad-spectrum fishing expedition, the US at least has no clear affirmative authority and a small pile of legal precedent based upon the Fourth Amendment that says they in fact lack that authority.

The fact they had the technological capability previously to act without Constitutional authority is irrelevant. Show up with a warrant or go pound sand.


Whether they get a warrant or not is orthogonal to the question of whether they ultimately succeed in breaking security for virtually everyone and everything.

In other words this notion of “get a warrant or pound sand” ignores that even with a robust legal warrant-requiring regime, they still would need to require back doors (key escrow, effectively the same thing) that would screw up security royally in order to get what they want.


Simply put, actions like this reaffirm that encryption works. Use it.


I would refer people to these two posts about this subject:

https://www.schneier.com/blog/archives/2018/09/five-eyes_int...

https://boingboing.net/2018/09/04/illegal-math.html

Basically, crypto backdoors are a very bad idea.


Maybe this is a stupid question, but if the tech industry claims that you can't make a backdoor safe, how do they keep their update mechanisms safe? Aren't those basically backdoors-by-design?


In a sense they are, but they send the same updates to everyone whereas a backdoor needs to allow malicious actions to be taken against specific targets only. If vendors are forced to abuse updates in this way then users are going to demand a certificate transparency-like system to stop the abuse.


> they send the same updates to everyone

Google's staggered updates to Chrome are updates sent to specific targeted users. The iOS App Store (and Google Play store, I believe) has an opt-in beta program that also sends betas to specific targeted users.

> If vendors are forced to abuse updates in this way then users are going to demand a certificate transparency-like system to stop the abuse.

But yes, leveraging users' trust in the update system is a risk that needs to be factored into the debate.


Right, I'm not suggesting that it's even remotely possible for millions of systems to be updated at the same instant, just that outside of some special circumstances like beta channels (yes Google Play supports this) the huge numbers of devices are offered the same updates.

If trust is lost in the update mechanisms then something will be done to restore it. That would likely be large developers like Apple, Microsoft and Google deploying a technical solution to prevent themselves from sending different updates to different devices without loudly warning the user and automatically reporting the attack to the world. That would please users and reduce compliance costs. Really, the major operating systems should just do it now to prevent this from becoming a problem in the first place.
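One way to picture that "technical solution" is binary-transparency-style checking, sketched below under assumptions of my own: the vendor publishes the hash of every release in a public append-only log that anyone can audit, and a client refuses any update whose hash is absent, i.e. an update that only it received. The function names and the log-as-a-set representation are invented for illustration.

```python
import hashlib

# Hypothetical binary-transparency check. A real system would verify
# Merkle inclusion proofs against a signed log head; a plain set of
# hashes stands in for that here.

def release_hash(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def verify_update(blob: bytes, public_log: set) -> bool:
    """Accept an update only if its hash appears in the public log
    that every other client can also audit."""
    return release_hash(blob) in public_log

# The vendor's published releases.
log = {release_hash(b"app-v1.0"), release_hash(b"app-v1.1")}

ok = verify_update(b"app-v1.1", log)             # a published release
targeted = verify_update(b"app-v1.1-nsa", log)   # sent to one user only
```

A targeted build fails the check precisely because it was never published for everyone, which is what makes silent per-device updates loud.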


>If vendors are forced to abuse updates in this way then users are going to demand a certificate transparency-like system to stop the abuse.

I always thought this would be a neat use of Ethereum: a smart contract publicly stores the hash+url of the latest version of $softwareProduct (and maybe also the hash+url for the latest beta release, etc). The software would only update to a version named by the smart contract. The smart contract would only allow the application developers' addresses to be able to update the published hash+urls. The smart contract could have arbitrary logic here, like to allow the developers to add new addresses that can publish new versions, or to implement voting logic so a majority of the developers have to agree on publishing a new version, etc. In this system, it's not possible to secretly push a malicious update to one user.
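The client side of that scheme is simple to sketch. Reading from an actual Ethereum contract is out of scope here, so a stub function stands in for the on-chain call; everything named below (the stub, the URL, the artifact bytes) is invented for illustration.

```python
import hashlib

# Client-side hash pinning: the expected hash is read from the smart
# contract (stubbed here), and a downloaded artifact is installed only
# if it matches. In the real scheme, pinned_release() would be an
# on-chain read that only the developers' addresses can update.

def pinned_release():
    """Stub for the contract read; returns (url, sha256 hex)."""
    blob = b"myapp-2.3.1"
    return "https://example.invalid/myapp-2.3.1", hashlib.sha256(blob).hexdigest()

def safe_to_install(downloaded: bytes) -> bool:
    _url, expected = pinned_release()
    return hashlib.sha256(downloaded).hexdigest() == expected

ok = safe_to_install(b"myapp-2.3.1")    # matches the pinned hash
tampered = safe_to_install(b"myapp-evil")
```

Since the contract state is public and append-controlled, a secretly swapped binary fails the hash check on every client, not just vigilant ones.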


They are, but you have to trust someone. If you install Windows on your machine, you have to trust that Microsoft isn't doing nefarious things to your machine.

Since you're already screwed if Microsoft is a bad actor, trusting their digitally signed updates doesn't increase your attack surface area. Well, it didn't. The Australian proposal adds Australia to the list of organizations you have to trust to accept updates signed by Microsoft.


You make the updates user opt-in only. That doesn’t make everything perfect but it reduces the likelihood and potency of the worst scenarios.

The EFF advises device makers to never enable automatic pull of updates. Because doing so would open the door for forced silent updates by court order.


> part of an escalating war between government officials and Silicon Valley over access to people’s private data

Who is missing from that competition?

I'm purposely misconstruing the meaning, but it makes an ironic point. Remember that most of these tech companies make money by collecting and using the same data they claim to protect, and some provide it to the government.


Legislators don't seem to understand that backdoored crypto is bad crypto.

If I write a chat app that uses strong encryption, with the keys stored on each user's device, there are no legal grounds for me to modify any part of my app if the government wants access.


It's likely the 'Five Eyes' already have access to all or most data from the level of telecom equipment. This is probably why they banned Huawei and ZTE.


What can we do about it?



Yes. Normally we'd mark this one as a dupe of that one, but it seems to be taking this story a while to ripple through the system as people become aware that that document was published and try to figure out what it means. Since the NYT article does contain some new information as well as more background, we'll leave it up.


Fascism.


Please don't post like this. Even if you're right, an unsubstantive single-word comment is not the thing to post—and certainly not that word, which leads to the nether regions of the internet barrel.

https://news.ycombinator.com/newsguidelines.html


I want you to know up front that I agree with you - this is such, from the very people we were warned about, and we should oppose it wherever it exists. But...

Do you care to elaborate why this is Fascism? Why we should care? Or most importantly...

How do you respond to this instance of "Fascism"? Any ideas? Suggestions? Pointers? Calling it out is nice, but "calling Hitler a Nazi" is just an obvious tautology. I expect better on HN.


As an aside, I recently came across the argument that programs like Five Eyes were designed because of mass infiltration by immigrants/others who are not acculturated to Western ideas/government, and that it's the price people pay for the relegation of their freedoms. So I might have been living under a rock, but this argument is on the alt-right, for what it's worth, and I'm not entirely sure how to process it. In the sense that there's far too much irony, and a lack of a unified framework of laws that work towards humanity, maybe? (Sorry for the incoherent thought, but I had to get this off my chest)


That doesn't really make sense to me. Five eyes, and intelligence agencies, collect information that has nothing to do with people in their countries. For example the US tapping Merkel's phone or bugging the Copenhagen climate talks. It is done to gain diplomatic, military and economic advantage. You can sample them on Wikileaks.


Aren't PRISM and other related programs part of Five Eyes? I guess my point was more that as the US moves away from its War on Terror, the next natural target is ideology.


Not really, since that target probably doesn't support the selling of more guns to US agencies.


I don't think there's any connection between Five Eyes and immigration. The Five Eyes refers to intelligence cooperation between the major English-speaking nations. That arose directly out of their having been allies in WW2 and facing a common adversary (the Soviet Bloc) in the Cold War. In WW2, they didn't have as deep a collaboration with many other NATO powers, because France was occupied and Italy and Germany were the enemies they were fighting. It was natural for this close cooperation to continue from there.



