I actually believe they should not. They should have the authority to attempt to do so with proper oversight, but that in no way implies that they will necessarily succeed.
I do not, however, believe that they should have the authority to compel others to actively aid them. They should not, as the author suggests, have the authority to compel Apple or Google to install malware on users' phones. If the police can do it themselves, with proper judicial oversight, or convince a software maker that spying on Osama bin Satan is worth compromising their principles, so be it.
The idea that the ends justify the means tends to lead to some dark places and I submit that we should not start down those paths.
The Government's main argument is "Why should the internet be different from the 'real world'?" Telephone companies have had to comply with these kinds of government requests since telegrams were around. No one I have heard from has offered a good, serious argument against this.
Saying no one should have this power is great and all, but it's way too late. These powers to access this information with a warrant have existed in some form or another since before WWI.
Who draws the line of what laws a commercial communication platform must follow and under which jurisdictions? There is no line because complying with the laws of one country your users are in will break the law of another.
I believe the problem is the centralization of communication. There is no reason that these platforms cannot be distributed networks running open source software. Any government, whether or not it respects basic human rights, would be free to attempt to intercept the communication, and anyone else would be free to try to prevent it.
Yes, given the centralised nature of these services it is a concern that foreign governments' demands could be unreasonable. However, the nature of operating in a country is to comply with its laws and regulations. What the Australian Government has asked for is that these communication methods be treated equally, and for tech companies to get on board with that.
If Google and Apple don't want to support the operations of authoritarian governments then perhaps they should actually take a stand and not operate in those countries, and not have their products made in those countries.
I value my privacy and my freedom, but honestly I think that as an Australian the Australian Government does have the right to be able to access my devices during an investigation, provided there are appropriate checks and balances involved. Just like if I had a safe in my house, they have the right to get it opened with a search warrant. If I refuse to open the safe, they have methods and tools that allow opening it. I fail to see what makes iPhones any different from a safe conceptually.
I understand that technically it may be very difficult or potentially impossible to provide a secure mechanism for that support to the Government, and that is where my greatest concern about these laws comes from.
Warning: you are relying on analogies to pre-digital technology to make sense of digital technology.
I'll try a lightning round in ten minutes:
1. A physical safe generally must be physically opened by officers coming to your location and opening it. Typically a warrant is served to the person who owns the safe, who then reads it, and can often watch the officers physically enter their property and possibly even watch them open the physical safe using the combination they give to the officers. After the officers leave the person can have the lock changed with a fair amount of certainty that the officers didn't change the physical makeup of the safe in such a way that they can remotely access the contents of the physical safe in the future.
2. Under certain circumstances officers could access the safe without the person present. But, generally, they have to physically enter the person's dwelling to do that. Neighbors may see them enter because officers are conspicuous physical objects that travel well below the speed of light.
3. Officers can compel people to open safes under certain circumstances. But some safes are expensive precisely because their design makes them difficult to open without the key. I'm unaware of previous situations where law enforcement ran a public awareness campaign about the difficulty of opening expensive safes in the rare circumstances where such safes held evidence vital to time-critical investigations. I'd love to read about such a campaign of "Our Safes Going Dark" if you have a reference to one.
4. Law enforcement cannot outsource the physical opening of safes to an Italian safe-opening company that installs proprietary software at the station that allows the officers to do remote and clandestine opening of many types of physical safes. (Where remote means opening and viewing the contents of the safe without having to dispatch a physical officer to a physical location.)
5. Law enforcement cannot leverage the same type of Italian safe-opening techniques in an automated system that checks the metadata coming across the internet for certain selectors and then automatically and secretly opens certain safes, storing pictures of contents of those safes for later perusal or even changing the substance of the safe so that the safe itself will report certain data back to law enforcement just in case it turns out this safe belongs to someone who turns out to be a criminal.
6. Nation states could not indiscriminately record the sum total of all the places you traveled and people you interacted with on your way to and from the safe, store that data about you, and then redefine the word "collection" to mean what happens when someone reads one or more of the pieces of data that were collected.
7. If nation state #1 wants to embarrass nation state #2 by releasing some of state #2's secrets, those secrets cannot easily be leveraged to make digital ransomware appear on a large number of safes in nation state #3.
Essentially, everything I can think of in ten minutes that is important about digital security/privacy/surveillance is lost when you use analogies to physical pre-digital objects.
They would rather not admit this. Or perhaps they don't understand.
1. The internet has very much democratised communication (or at least did around the time of the inception of the web) - this is the reason we have such campaigns for net neutrality. Government intervention into transmission of telegrams was tolerated because the medium was largely out of the direct control of most citizens. Having more control and insight gives a sense of ownership and a greater awareness of civil liberty around the bounds of use. Which I think is largely positive.
2. The efficiency of internet communication and automation thereof makes any intervention in communication channels much much more far reaching with comparatively little resources or effort. And therefore far far more dangerous.
This is still the case with the Internet. Who do you think subsidized the construction of the physical Internet infrastructure?
And most of the major pipes and cables are owned and operated by large megacorps that work closely with governments.
It's not like citizens ran the fibre optic cables themselves and now the government wants to come in and take over.
(a cop could) ask them to devise a way to retrieve the requested data from that device. Like, say, pushing a signed update to the target handset that will be tied to that device’s UDID (Unique Device Identifier). That way there’s no chance the coppers can intercept that update and re-use it on whomever they want
I'm amazed at the naivety of this approach. Sure, it looks good on paper: "Cops can't use a wiretap for user X against user Y".
But using technical means to prevent illegal behavior by police seems to be entirely the wrong approach. How about instead not giving the police the means to break the law?
Or, why stop with device-specific checks? Why not just force the app to do a UDID lookup to a police-controlled server, where it can control who gets wiretapped? If the companies don't comply, throw the executive in jail for "contempt of court"! After all, we're out to get the bad guys!
Apple and others are right to fight this nonsense. They should take every possible step to ensure that they themselves can't help the police side-step the law. It not only protects their users; it protects them, too.
The Miami Herald reports that a child abuse suspect was jailed for six months for contempt of court after failing to reveal the correct passcode to his iPhone.
Why is he being forced to incriminate himself?
1) The information on the phone will change the charges against him and/or his sentence, in which case a Fifth Amendment defence would seem appropriate and legal.
2) The information on the phone will not change the charges against him and/or his sentence, in which case the information isn't needed.
This whole debate for me can be summarized into one idea: Do you want other people to be able to watch everything you do? If so, all bets are off. If not, the government shouldn't have the right to watch suspects, or to force people to incriminate themselves.
Apple and Samsung would need to provide a worldwide infrastructure for handling requests from law enforcement all around the world (including in the less liberal regimes). How secure can this process really be? It doesn't matter that the coppers cannot reuse the exploits; they will just request and receive a new one instantly.
Now criminals also have access to these services, since we know police corruption is a thing.
Well, how else would you interpret the statement that Australian law supersedes math?
- - -
That aside, "We do not share customer data with 3rd parties, as it is not possible due to end-to-end encryption, except when we send things in plaintext, because of our moral obligation to the Australian government" is going to be a greaaaat addition to privacy policies for Australian companies. Going to go real well for international business.
Tech companies will have to declare that their update mechanisms contain no means to apply updates without user consent, and when updates are sent they will publish the hash. Security researchers will independently verify such claims by reverse engineering.
Then whoever some government wants to spy on cannot be tricked into installing a malicious update, as suggested by this blog post.
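The verification step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's real update mechanism; the payload bytes, the published hash, and the function name are all assumptions made up for the example.

```python
import hashlib

# Hypothetical published hash for a release, as a vendor might post it and
# as researchers might independently reproduce it (value is illustrative).
PUBLISHED_SHA256 = hashlib.sha256(b"update-payload-v1.2.3").hexdigest()

def verify_update(payload: bytes, published_hash: str) -> bool:
    """Accept an update only if its SHA-256 digest matches the published hash."""
    return hashlib.sha256(payload).hexdigest() == published_hash

# The genuine payload matches; a targeted malicious update does not.
assert verify_update(b"update-payload-v1.2.3", PUBLISHED_SHA256)
assert not verify_update(b"update-payload-with-backdoor", PUBLISHED_SHA256)
```

The point is that once the hash is public, a device (or a researcher with the binary) can refuse any update that differs from what everyone else received, which is exactly what defeats a one-off malicious update pushed to a single target.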
Governments are just going to have to accept the fact that their ability to intercept private communication and recover private documents will be reduced to zero in the near future, as it should be.
All that said, LE hasn't helped itself at all with over-zealous and blunt-edged approaches so far -- cf FBI cracking iPhones, the NSA dragnet with little oversight.
I don't mind the government being able to read my mail. I just want them to need a warrant first, that a judge has signed off on, and that's particular to me.
Suppose then, that we get deterministic compilation and an open-source client for E2E encryption. It would then be possible (or rather, much easier) to detect such malicious updates. Should such protection be made illegal?
I agree that there is no principled objection to access to communications provided there is a warrant. However, if this requires weakening security far beyond that point, it is no longer obvious that the trade-off is worth it.
The PROBLEM is, people have moved towards offshore-based services that are not required, under Australian law, to keep such metadata. Think of iMessage, WhatsApp, Signal, Wickr and Telegram.
Wiretapping at its best. I have not done a historical analysis, but it seems like many of the personal-freedom-reducing laws start in Australia these days (at least among English-speaking countries), closely followed by the UK.
>Now look, I’m not advocating for these laws. I’m not. What I am trying to do is move the goalposts for this discussion.
To me the goalposts have to be moved back to the public sphere, and what we should be doing is addressing the devolution happening in the computer world that results in users having absolutely no clue how to use their computers beyond opening up a single app and pouring all content into it. There is a fine balance between usability and uselessness, and it's the user who is the most difficult component in this line.
Yes, I agree with Patrick that framing the arguments on the "math of encryption" is misguided. Yes, it leads to thinking that legislators are stupid, as if they were trying to pass a bill to redefine the value of pi.
Let's be clear.... the lawmakers actually don't care if the most powerful supercomputers at NSA/GCHQ can't crack WhatsApp's latest 2048-bit-quantum-elliptical-encryption used in conjunction with Apple's iOS tamperproof enclave chip.
All that math doesn't matter. It's a mistake to think that government is stupid because "math is irreversible". That smugness makes people lose sight of the ultimate goal: the cleartext.
If getting that cleartext means new laws that require all Android/Apple phones include a software keyboard interceptor (the onscreen keyboard) to log keystrokes and send the cleartext to a government server, then so be it. Such a keystroke logger would then make the "irreversible math" a moot point.
Tldr: Don't frame the policy in terms of "encryption". Instead, focus on the "cleartext".
>The first problem actually has very little to do with end-to-end encryption and a lot more to do with access to messaging metadata.
This is true, but it's misleading because we know that governments eventually want more than just the metadata.
>Now it’s all very well and good for WhatsApp to argue that it doesn’t have the technical means to do so, which is a response that has led to all sorts of tangles in Brazil’s courts, but the Australian law will simply say “we don’t care. Get them.”.
This line dances close to what we really should be discussing (the cleartext). If you find yourself grumbling "the government wants to weaken encryption!", it means you're thinking like a computer scientist. On the other hand if you imagine, "if there was a new algorithm for _stronger encryption_ but it didn't matter because all the cleartext was available for monitoring", it means you're thinking like the government.
>Do we believe that law enforcement bodies should have the authority to monitor the communications of people suspected of serious criminal offences?
This question is another way of framing it which is closer to the aims of the government. They want to monitor the cleartext.
However, I do not see a way to achieve this without massively compromising security. That is, without either giving LE access without warrants or making it much easier for criminals to get access.
> On the other hand if you imagine, "if there was a new algorithm for _stronger encryption_ but it didn't matter because all the cleartext was available for monitoring", it means you're thinking like the government.
I would indeed be in favor of giving the government publicly auditable and well secured access to cleartext upon request. I just don't see a way to do this that works.
By auditable here, I mean that every single decryption that occurs should be publicly available. This could e.g. happen by requiring a secret key known only by the judiciary. With homomorphic encryption, you might even be able to do this stuff on a blockchain, but that is quite the pipe-dream.
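A minimal sketch of such a publicly auditable log, assuming a hypothetical judiciary-held key and an append-only ledger. All names, key values, and record fields here are invented for illustration; this is a simple hash-chained audit trail, nowhere near the homomorphic/blockchain scheme imagined above.

```python
import hashlib
import hmac
import json

# Assumption: only the judiciary holds this key, so only it can produce
# valid MACs over log entries (value is a placeholder for the sketch).
JUDICIARY_KEY = b"judiciary-secret-key"

audit_log = []  # stand-in for an append-only public ledger

def record_decryption(warrant_id: str, ciphertext_digest: str) -> dict:
    """Append a tamper-evident record of one authorised decryption."""
    entry = {
        "warrant_id": warrant_id,
        "ciphertext_sha256": ciphertext_digest,
        # Chain each entry to the previous one so silent removal is detectable.
        "prev": audit_log[-1]["mac"] if audit_log else "",
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(JUDICIARY_KEY, payload, hashlib.sha256).hexdigest()
    audit_log.append(entry)
    return entry

e1 = record_decryption("W-1001", hashlib.sha256(b"intercepted-msg-1").hexdigest())
e2 = record_decryption("W-1002", hashlib.sha256(b"intercepted-msg-2").hexdigest())
assert e2["prev"] == e1["mac"]  # the chain links every decryption event
```

Since the log records a digest of the ciphertext rather than the plaintext, the public can count and audit decryption events without learning their contents, which is the property the comment is asking for.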
Yes, it's plausible that the non-approved software stack on your phone isn't doing any illegal encryption, but that fails if having a non-approved OS on your phone is a crime by itself.
Yes, it's plausible that the TrueCrypt volume you have doesn't contain anything bad, but that fails if mere possession of TrueCrypt tools is a crime by itself.
Yes, it's plausible that the encrypted traffic sent to/from your phone didn't contain anything bad, but that fails if having any encrypted traffic that doesn't go through a state-approved HTTPS MITM is a crime by itself.
Etc ad infinitum. Don't underestimate the coercive power of gov't if they actually want to restrict something. Technical means can protect you only if you physically live outside of their reach.
The point I'm trying to make is that in an oppressive regime the only thing that actually provides plausible deniability is having and using the exact same hardware/software as everyone else; a rooted phone with an open-source OS, unusual chat apps, or cryptography tools won't give you any plausible deniability, but will actually make everything even more risky for you.