Hacker News
No encryption was harmed in the making of this intercept (risky.biz)
67 points by Khaine 9 months ago | 32 comments

> Do we believe that law enforcement bodies should have the authority to monitor the communications of people suspected of serious criminal offences?

I actually believe they should not. They should have the authority to attempt to do so with proper oversight, but that in no way implies that they will necessarily succeed.

I do not, however, believe that they should have the authority to compel others to actively aid them. They should not, as the author suggests, have the authority to compel Apple or Google to install malware on users' phones. If the police can do it themselves, with proper judicial oversight, or convince a software maker that spying on Osama bin Satan is worth compromising their principles, so be it.

The idea that the ends justify the means tends to lead to some dark places and I submit that we should not start down those paths.

We've already gone down those paths. Societal expectations and legal precedent have been building toward this since the 1880s, when registers of telegram metadata were kept. I don't understand why everyone is so naive with respect to the internet.

The Government's main argument is "Why should the internet be different from the 'real world'?" Telephone companies have had to comply with these kinds of government requests since telegrams were around. No one I have heard has offered a good, serious argument against this.

Saying no one should have this power is great and all, but it's way too late. These powers to access this information with a warrant have been around in some form or another since before WW1.

Certainly tracking telegrams is a far cry from today. Metadata tracks users' physical movements. Depending on web activity, it can even come close to reconstructing what they are thinking.

Who draws the line of what laws a commercial communication platform must follow and under which jurisdictions? There is no line because complying with the laws of one country your users are in will break the law of another.

I believe the problem is the centralization of communication. There is no reason that these platforms cannot be distributed networks running open source software. Any government, whether or not it respects basic human rights, would be free to attempt to intercept the communication, and anyone else would be free to try to prevent it.

The Australian Government has asked tech companies to support them getting legal access to devices, specifically the Prime Minister called out for the ability to get plaintext versions of WhatsApp, Telegram and iMessages. These are arguably the telegrams of today.

Yes, given the centralised nature of these services, it is a concern that foreign governments' demands could be unreasonable. However, the nature of operating in a country is to comply with its laws and regulations. What the Australian Government has asked for is that these communication methods be treated equally, and for tech companies to get on board with that.

If Google and Apple don't want to support the operations of authoritarian governments then perhaps they should actually take a stand and not operate in those countries, and not have their products made in those countries.

I value my privacy and my freedom, but honestly I think that as an Australian the Australian Government does have the right to be able to access my devices during an investigation, provided there are appropriate checks and balances involved. Just like if I had a safe in my house, they have the right to get it opened with a search warrant. If I refuse to open the safe, they have methods and tools that allow opening it. I fail to see what makes iPhones any different from a safe conceptually.

I understand that technically it may be very difficult or potentially impossible to provide a secure mechanism for that kind of support to the Government, and that is where my greatest concern for these laws comes from.

> I value my privacy and my freedom, but honestly I think that as an Australian the Australian Government does have the right to be able to access my devices during an investigation, provided there are appropriate checks and balances involved. Just like if I had a safe in my house, they have the right to get it opened with a search warrant. If I refuse to open the safe, they have methods and tools that allow opening it. I fail to see what makes iPhones any different from a safe conceptually.

Warning: you are relying on analogies to pre-digital technology to make sense of digital technology.

I'll try a lightning round in ten minutes:

1. A physical safe generally must be physically opened by officers coming to your location and opening it. Typically a warrant is served to the person who owns the safe, who then reads it, and can often watch the officers physically enter their property and possibly even watch them open the physical safe using the combination they give to the officers. After the officers leave the person can have the lock changed with a fair amount of certainty that the officers didn't change the physical makeup of the safe in such a way that they can remotely access the contents of the physical safe in the future.

2. Under certain circumstances officers could access the safe without the person present. But, generally, they have to physically enter the person's dwelling to do that. Neighbors may see them enter, because officers are conspicuous physical objects that travel well below the speed of light.

3. Officers can compel people to open safes under certain circumstances. But some safes are expensive, often because their design makes it difficult to open them without the key. I'm unaware of previous situations where law enforcement ran a public awareness campaign about the difficulty of opening expensive safes under the rare circumstances that such safes held evidence vital to time-critical investigations. I'd love to read about such an "Our Safes Going Dark" campaign if you have a reference to one.

4. Law enforcement cannot outsource the physical opening of safes to an Italian safe-opening company that installs proprietary software at the station that allows the officers to do remote and clandestine opening of many types of physical safes. (Where remote means opening and viewing the contents of the safe without having to dispatch a physical officer to a physical location.)

5. Law enforcement cannot leverage the same type of Italian safe-opening techniques in an automated system that checks the metadata coming across the internet for certain selectors and then automatically and secretly opens certain safes, storing pictures of contents of those safes for later perusal or even changing the substance of the safe so that the safe itself will report certain data back to law enforcement just in case it turns out this safe belongs to someone who turns out to be a criminal.

6. Nation states could not indiscriminately store data about the sum total of all the places you traveled and the people you interacted with on your way to and from the safe, and then redefine the word "collection" to mean what happens when someone reads one or more of the pieces of data that were collected.

Bonus 7. If nation state #1 wants to embarrass nation state #2 by releasing some of state 1's secrets, those secrets cannot easily be leveraged to make digital ransomware appear on a large number of safes of nation state #3.

Essentially, everything I can think of in ten minutes that is important about digital security/privacy/surveillance is lost when you use analogies to physical pre-digital objects.

My first response was "technology has changed since then", but then I thought: if I used a Vigenère cipher in the telegram era, whose responsibility was it to make sure the government got a copy of the plaintext? All the modern messaging apps do is make it easier to do the same thing.
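For concreteness, the cipher mentioned above takes only a few lines; a minimal sketch (the key "LEMON" and sample text are the textbook example, not anything from the article):

```python
# Vigenère cipher: the kind of hand-applied encryption a telegram sender in
# the 1880s could use entirely outside the telegraph company's control.
# Minimal illustrative sketch -- not secure by modern standards.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the corresponding letter of `key`."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        if not ch.isalpha():
            out.append(ch)  # pass spaces and punctuation through unchanged
            continue
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + sign * shift) % 26 + ord("A")))
    return "".join(out)

assert vigenere("ATTACK", "LEMON") == "LXFOPV"
ct = vigenere("ATTACK AT DAWN", "LEMON")
assert vigenere(ct, "LEMON", decrypt=True) == "ATTACK AT DAWN"
```

The telegraph company would dutifully transmit "LXFOPV..." without ever holding the plaintext, which is exactly the situation modern messaging apps automate.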

The answer to the government's argument, of course, is that information theory doesn't behave like empirical reality in several significant ways, which add up to the Internet being fundamentally different from the "real world."

They would rather not admit this. Or perhaps they don't understand.

There are two reasons, as far as I'm concerned:

1. The internet has very much democratised communication (or at least did around the time of the inception of the web) - this is the reason we have such campaigns for net neutrality. Government intervention into transmission of telegrams was tolerated because the medium was largely out of the direct control of most citizens. Having more control and insight gives a sense of ownership and a greater awareness of civil liberty around the bounds of use. Which I think is largely positive.

2. The efficiency of internet communication, and the automation it permits, makes any intervention in communication channels much, much more far-reaching with comparatively little effort or resources. And therefore far, far more dangerous.

> because the medium was largely out of the direct control of most citizens

This is still the case with the Internet. Who do you think subsidized the construction of the physical Internet infrastructure?

And most of the major pipes and cables are owned and operated by large megacorps that work closely with governments.

It's not like citizens ran the fibre optic cables themselves and now the government wants to come in and take over.


> (a cop could) ask them to devise a way to retrieve the requested data from that device. Like, say, pushing a signed update to the target handset that will be tied to that device’s UDID (Unique Device Identifier). That way there’s no chance the coppers can intercept that update and re-use it on whomever they want
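The device-binding scheme quoted here could be sketched roughly as follows. Note this is a hypothetical illustration: the key, function names, and UDID strings are invented, and an HMAC stands in for the asymmetric code-signing real vendors use, purely to keep the example self-contained.

```python
import hashlib
import hmac

# Sketch of a device-bound update: the vendor signs the update *together
# with* the target device's UDID, so the same signed blob cannot be
# replayed against a different handset.

VENDOR_KEY = b"vendor-signing-key"  # stand-in for the vendor's signing key

def sign_update(payload: bytes, target_udid: str) -> dict:
    """Vendor side: bind the update to one specific device identifier."""
    msg = target_udid.encode() + b"|" + payload
    return {"udid": target_udid, "payload": payload.hex(),
            "sig": hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()}

def verify_on_device(update: dict, my_udid: str) -> bool:
    """Device side: refuse anything not signed for *this* handset."""
    if update["udid"] != my_udid:  # wrong handset: refuse to install
        return False
    msg = my_udid.encode() + b"|" + bytes.fromhex(update["payload"])
    expected = hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, update["sig"])

u = sign_update(b"intercept-module-v1", "UDID-TARGET")
assert verify_on_device(u, "UDID-TARGET")      # intended device accepts
assert not verify_on_device(u, "UDID-OTHER")   # replay on another device fails
```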

I'm amazed at the naivety of this approach. Sure, it looks good on paper: "Cops can't use a wiretap for user X against user Y".

But using technical means to prevent illegal behavior by police seems to be entirely the wrong approach. How about instead not giving the police the means to break the law?

Or, why stop with device-specific checks? Why not just force the app to do a UDID lookup against a police-controlled server, which can then control who gets wiretapped? If the companies don't comply, throw the executives in jail for "contempt of court"! After all, we're out to get the bad guys!

Apple and others are right to fight this nonsense. They should take every possible step to ensure that they themselves can't help the police side-step the law. It not only protects their users, it protects themselves, too.

Remotely installing apps works fine with Android. What stops Google from being compelled to deploy an app for a user or millions of them? Probably not much, and the user probably has no control over it, right?

What about being forced to hand over passwords?


> The Miami Herald reports that a child abuse suspect was jailed for six months for contempt of court after failing to reveal the correct passcode to his iPhone

Why is he being forced to incriminate himself?

1) The information on the phone will change the charges against him and/or his sentence. In that case a Fifth Amendment defence would seem appropriate, and legal.

2) The information on the phone will not change the charges against him and/or his sentence. In which case the information isn't needed.

This whole debate for me can be summarized into one idea: do you want other people to be able to watch everything you do? If so, all bets are off. If not, the government shouldn't have the right to watch suspects, or to force people to incriminate themselves.

Sounds like the contempt was due to deliberately handing over the wrong passcode and playing innocent, rather than a principled refusal to self-incriminate. Being held in contempt for such shenanigans is entirely appropriate. Of course if it really was a mistake that's hugely problematic.

And when it is publicly known that Apple, Samsung and Google compromise their operating systems, the adept criminals will flash their own operating systems onto their phones.

Apple and Samsung would need to provide a worldwide infrastructure for handling requests from law enforcement all around the world (including in the less liberal regimes). How secure can this process really be? It doesn't matter that the coppers cannot reuse the exploits; they will just request and receive a new one instantly.

Now criminals also have access to these services, since we know police corruption is a thing.

> Let me put this bluntly: If [weakening of encryption] is what the government winds up suggesting, then by all means hand me a bullhorn [...]

Well, how else would you interpret the statement that Australian law supersedes math?

- - -

That aside, "We do not share customer data with 3rd parties, as it is not possible due to end-to-end encryption, except when we send things in plaintext, because of our moral obligation to the Australian government" is going to be a greaaaat addition to privacy policies for Australian companies. Going to go real well for international business.

Just as spying through eavesdropping on plaintext was defeated through technical means, pushing malicious updates will also be defeated.

Tech companies will have to declare that their update mechanisms contain no means to apply updates without user consent, and when updates are sent they will publish the hash. Security researchers will independently verify such claims by reverse engineering.
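The verification half of this is straightforward to sketch. File paths and the published hash below are placeholders; in practice the published hash would come over a channel the vendor cannot silently alter.

```python
import hashlib

# Sketch of hash verification for a published update: the vendor publishes
# the SHA-256 of each update, and anyone can check that the binary they
# actually received matches it.

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file incrementally so large updates don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, published_hex: str) -> bool:
    return sha256_of_file(path) == published_hex.lower()
```

A security researcher comparing the hash of the update their handset received against the vendor's published value would immediately spot a targeted, per-device payload.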

Then whoever some government wants to spy on cannot be tricked into installing a malicious update, as suggested by this blog post.

Governments are just going to have to accept the fact that their ability to intercept private communication and recover private documents will be reduced to zero in the near future, as it should be.

A very good point. We've been obsessing over the security of the pipes - and rightly so - with the real assumption that our access to them is our own. The recent border-holding phenomenon in the US comes to mind when thinking about how much you really own your technological property. Maybe it's time to make free hardware the next major project.

Excellent article. Nothing irritates me more than hyperbole about politicians and law enforcement. Already we have people saying "Macron wants to ban encryption".

All that said, LE hasn't helped itself at all with over-zealous and blunt-edged approaches so far -- cf FBI cracking iPhones, the NSA dragnet with little oversight.

I don't mind the government being able to read my mail. I just want them to need a warrant first, that a judge has signed off on, and that's particular to me.

This article promotes the idea of companies pushing malicious updates to specific users. Malicious in the sense of circumventing E2E encryption without notifying the user and even doing this retroactively?

Suppose then, that we get deterministic compilation and an open-source client for E2E encryption. It would then be possible (or rather, much easier) to detect such malicious updates. Should such protection be made illegal?

I agree that there is no principled objection to access to communications provided there is a warrant. However, if this requires weakening security far beyond that point, it is no longer obvious that the trade-off is worth it.

Macron wants to ban complete systems where the encryption and overall security are effective enough that the provider of the system cannot intercept messages for the government. While that's technically different, the effect is the same.

"Australia passed a metadata retention law that came into effect in April this year. It requires telecommunications companies and ISPs (i.e. carriage service providers, or CSPs) to keep a record of things like the IPs assigned to internet users (useful for matching against seized logs) as well as details around phone, SMS and email use.

The PROBLEM is, people have moved towards offshore-based services that are not required, under Australian law, to keep such metadata. Think of iMessage, WhatsApp, Signal, Wickr and Telegram. ...."

Wiretapping at its best. I have not done a historical analysis, but it seems like many of the personal-freedom-reducing laws start in Australia these days (at least among the English-speaking countries), closely followed by the UK.

Australia has long been the petri-dish for the new globalist elite to test their social engineering tools. It is, after all, one of the most successful white societies to have survived the empire.

>Now look, I’m not advocating for these laws. I’m not. What I am trying to do is move the goalposts for this discussion.

To me the goalposts have to be moved back to the public sphere, and what we should be doing is addressing the devolution happening in the computer world that results in users having absolutely no clue how to use their computers beyond opening up a single app and pouring all content into it. There is a fine balance between usability and uselessness, and it's the user who is the most difficult component in this line.

The alternatives proposed aren't any better. If we are to believe that law enforcement has Trojans that can bypass encryption, then we must believe that they aren't reporting the security flaws which allow the installation of these Trojans. These security flaws are not limited to law enforcement--terrorists can use them just as effectively as law enforcement can. So what the argument boils down to is: we shouldn't worry about law enforcement undermining security because law enforcement had already fundamentally undermined security? I don't buy it.

I think Patrick Gray's essay actually adds more confusion. His effort to clarify the confusion by differentiating the layers between "metadata" vs "encryption" is technically correct, but the extra detail ends up obscuring what governments want: cleartext.

Yes, I agree with Patrick that framing the arguments on "math of encryption" is misguided. Yes, it leads to thinking that legislators are stupid similar to trying to pass a bill to redefine the value of pi.[1]

Let's be clear: the lawmakers actually don't care if the most powerful supercomputers at the NSA/GCHQ can't crack WhatsApp's latest 2048-bit-quantum-elliptical-encryption used in conjunction with Apple's iOS tamperproof enclave chip.

All that math doesn't matter. It's a mistake to think that government is retarded because "math is irreversible". That smugness makes people lose sight of the ultimate goal: the cleartext.

If getting that cleartext means new laws that require all Android/Apple phones include a software keyboard interceptor (the onscreen keyboard) to log keystrokes and send the cleartext to a government server, then so be it. Such a keystroke logger would then make the "irreversible math" a moot point.

TL;DR: Don't frame the policy in terms of "encryption". Instead, focus on the "cleartext".

>The first problem actually has very little to do with end-to-end encryption and a lot more to do with access to messaging metadata.

This is true, but it's misleading because we know that governments eventually want more than just the metadata.

>Now it’s all very well and good for WhatsApp to argue that it doesn’t have the technical means to do so, which is a response that has led to all sorts of tangles in Brazil’s courts, but the Australian law will simply say “we don’t care. Get them.”.

This line dances close to what we really should be discussing (the cleartext). If you find yourself grumbling "the government wants to weaken encryption!", it means you're thinking like a computer scientist. On the other hand if you imagine, "if there was a new algorithm for _stronger encryption_ but it didn't matter because all the cleartext was available for monitoring", it means you're thinking like the government.

>Do we believe that law enforcement bodies should have the authority to monitor the communications of people suspected of serious criminal offences?

This question is another way of framing it which is closer to the aims of the government. They want to monitor the cleartext.

[1] https://en.wikipedia.org/wiki/Indiana_Pi_Bill

Is it not sound, in principle, for the government to get access to the cleartext when they have a publicly obtained warrant? I personally think so.

However, I do not see a way to achieve this without massively compromising security; that is, without either giving LE access without warrants or making it much easier for criminals to get access.

> On the other hand if you imagine, "if there was a new algorithm for _stronger encryption_ but it didn't matter because all the cleartext was available for monitoring", it means you're thinking like the government.

I would indeed be in favor of giving the government publicly auditable and well-secured access to cleartext upon request. I just don't see a way to do this that works.

By auditable here, I mean that every single decryption that occurs should be publicly available. This could e.g. happen by requiring a secret key known only by the judiciary. With homomorphic encryption, you might even be able to do this stuff on a blockchain, but that is quite the pipe-dream.
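The append-only audit part of this idea can at least be sketched with a hash-chained log, the same structure certificate-transparency-style systems use. This is a toy illustration: the class name, entry fields, and warrant IDs are all invented, and the hard problem (keeping the decryption key honest) is not addressed at all.

```python
import hashlib
import json

# Toy hash-chained audit log: every escrow decryption appends an entry
# whose hash covers the previous entry, so entries cannot be silently
# removed or rewritten after publication.

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, warrant_id: str, msg_hash: str) -> dict:
        """Append one decryption event, chained to the previous entry."""
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"warrant": warrant_id, "msg": msg_hash, "prev": prev}
        body["entry_hash"] = hashlib.sha256(
            json.dumps({k: body[k] for k in ("warrant", "msg", "prev")},
                       sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify_chain(self) -> bool:
        """Anyone can recompute the chain and detect tampering."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("warrant", "msg", "prev")}
            if e["prev"] != prev or e["entry_hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["entry_hash"]
        return True
```

This gives tamper-evidence, not prevention: the log proves what was decrypted only if decryptions are actually forced through it, which is exactly the part with no known technical enforcement.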

Anybody who cares about the privacy of their communication can still use an open source stack. So yes it works as a tool for oppression and for catching extremely inept criminals, but it doesn't do much else for national security.

When using an open source stack to do illegal encryption leads to XX years in prison, not many people will use it.

You can hide messages in pictures of lolcatz.
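The lolcat claim in miniature: least-significant-bit steganography hides a message in pixel data with changes invisible to the eye. A sketch operating on a raw bytearray standing in for decoded pixel values (a real implementation would read and rewrite an actual image file):

```python
# LSB steganography sketch: overwrite only the lowest bit of each "pixel"
# byte, changing each value by at most 1.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite lowest bit only
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    return bytes(
        sum((pixels[b * 8 + i] & 1) << i for i in range(8))
        for b in range(length)
    )

cat = bytearray(range(256)) * 4          # stand-in "lolcat" pixel data
stego = embed(cat, b"meet at dawn")
assert extract(stego, len(b"meet at dawn")) == b"meet at dawn"
assert all(abs(a - b) <= 1 for a, b in zip(cat, stego))  # imperceptible
```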

There's something in cryptology called "plausible deniability" that addresses this.

Plausible deniability only works if any doubts are actually interpreted in your favor (which, as history shows, isn't guaranteed in practice even in legal systems where it should be in theory), and it's easily possible to make laws that turn most practical options into crimes.

Yes, it's plausible that the non-approved software stack on your phone isn't doing any illegal encryption, but that fails if having a non-approved OS on your phone is a crime by itself.

Yes, it's plausible that the TrueCrypt volume you have doesn't contain anything bad, but that fails if mere possession of TrueCrypt tools is a crime by itself.

Yes, it's plausible that the encrypted traffic sent to/from your phone didn't contain anything bad, but that fails if having any encrypted traffic not going through state-approved MITM https is a crime by itself.

Etc ad infinitum. Don't underestimate the coercive power of gov't if they actually want to restrict something. Technical means can protect you only if you physically live outside of their reach.

They may as well ban all files of random data of unknown origin. If a file passes the Diehard tests with flying colors, confiscate it and arrest all known possessors of it.
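The underlying point here is that strong ciphertext is statistically indistinguishable from random bytes. Even a basic check, in the spirit of (though far simpler than) the Diehard suite, illustrates why such a ban would sweep up both:

```python
import os

# Monobit frequency test: the fraction of 1-bits in random-looking data
# should be very close to 0.5. Both fresh random bytes and good ciphertext
# pass, which is exactly why "ban random-looking files" catches both.

def monobit_bias(data: bytes) -> float:
    """Return the fraction of 1-bits in `data` (~0.5 for random data)."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

random_blob = os.urandom(1 << 16)        # 64 KiB of OS randomness
assert abs(monobit_bias(random_blob) - 0.5) < 0.01

biased_blob = b"\x00" * (1 << 16)        # obviously non-random data fails
assert abs(monobit_bias(biased_blob) - 0.5) > 0.4
```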

And they may actually do just that eventually.

The point I'm trying to make is that in an oppressive regime the only thing that actually provides plausible deniability is having and using the exact same hardware/software as everyone else; a rooted phone with an open-source OS, unusual chat apps, or cryptography tools won't give you any plausible deniability, but will actually make everything even riskier for you.
