I worked for Atlassian in Sydney. I know there's a pretty decent tech culture that is thriving in Sydney + Melbourne (and elsewhere, I'm sure). I really hope more of my colleagues find a passion for politics and get to a position where they can steer our clearly incompetent government in governance issues pertaining to technology.
In the meantime, does anyone have a clear idea of how this Act will even work? Let's talk in specific terms about something like Signal. I'm assuming Signal has no legal footprint in Australia. How, if at all, can Australia compel Signal to allow Australian enforcement agencies to snoop on conversations?
If they can't, won't one of the worst outcomes of this legislation be that any technology company that deals with encryption (which these days should be basically 100% of them) is forced to move overseas? How could a single Australian-based tech company have even the slightest scrap of credibility on data security when a law like this exists?
Final note - is anyone from Fastmail around here? I'm a Fastmail customer and this has me extremely concerned.
Unfortunately, this goes deeper. The law may be capable of compelling Google's or Apple's Australian arm, via a "technical assistance notice", to force-deploy a malicious version of the Signal app to your phone.
The truth is that such interception is normal, in Australia and in other countries.
AA is two things. First, it is the government legalizing an already existing practice to cover its own liability in case the grey-area practice is eventually exposed, especially amid the current public awareness of privacy.
Second, and more importantly, AA is meant to get around recent security patches that rendered previously relied-upon attack vectors unusable, since these collections were often done covertly. It's law compensating for a covert method that no longer works. Thus, the eleventh-hour urgency.
The cover story that this is a huge privacy catastrophe is just more noise to distract from the big story: that interceptions like this have been going on for more than a decade.
If not, is the law capable of compelling your phone vendor to ship you an update that weakens its update verification enough that Apple/Google can then push such a malicious upgrade to you?
(This has other consequences: if a developer releases the same apk to several stores, but it's signed by different certificates on each store, a user who installed the apk from one store will not be able to upgrade it using the other store.)
Easily checked: run jarsigner -verify -verbose -certs some.apk on an APK of your choice. I ran it on 31 just now; no cross-signing visible anywhere.
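For the curious, here's a minimal Python sketch of the v1/JAR side of what jarsigner inspects: each signer's certificate chain is stored as a signature-block file under META-INF/ inside the APK zip, so listing those files shows how many distinct signers there are. (This is an illustration only; v2/v3 APK signatures live in the zip metadata outside the entries, so a real audit needs apksigner.)

```python
import zipfile

def v1_signature_blocks(apk_path):
    """List v1 (JAR) signature-block files in an APK.

    Each .RSA/.DSA/.EC file under META-INF/ holds one signer's
    certificate chain; more than one file means multiple signers.
    v2/v3 APK signatures are stored outside the zip entries, so
    this only covers JAR-style signing, like jarsigner inspects.
    """
    with zipfile.ZipFile(apk_path) as z:
        return [n for n in z.namelist()
                if n.startswith("META-INF/")
                and n.upper().endswith((".RSA", ".DSA", ".EC"))]
```

A single entry (typically META-INF/CERT.RSA) means one signer, consistent with the "no cross-signing visible" observation above.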
And if this article is trustworthy, this isn't hypothetical; it's already happening right now. People are being served with orders to do things like this, and if they tell anyone (including the company they work for and are in essence "attacking"), they can kiss their life goodbye.
That's what makes it so scary. A programmer living in Australia who works for Google or Apple could one day get a notice mandating that they modify code for an unknown reason, with the threat of prison if they refuse or if they tell anyone. Technically, even programmers who don't work for those companies could be compelled to make contributions to open-source software that introduce vulnerabilities or exploits, and again there is literally nothing the person can do except follow orders or go to jail.
Put that way, it sounds like a feature, doesn't it? But perhaps a little implausible?
What if Apple or Google changes the OS security mechanism itself?
In a way this might be a blessing.
They'll buy the newest Apple phone because it's 10 nanometers thinner and it's a status symbol.
Apple's iCloud was hacked and celebrity pictures were stolen, and no one batted an eye.
iCloud was not hacked; the celebrities were spear phished: https://en.wikipedia.org/wiki/ICloud_leaks_of_celebrity_phot...
When you shift responsibility from the provider to the customer, you play right into the security blame game.
Use multi-factor authentication, use long, difficult passwords, and don't let your security questions be obvious. If someone knocks me out and uses my finger to TouchID into my bank's app and transfer money, that's the price I pay for the convenience of not wanting to log in with my password. Same with using weak passwords and questions.
This is the same as having a car recall, having people die before the letters reach their homes, and saying "they should have known this company's cars could explode".
Apple could have forced people to use multi-factor authentication, and whether or not they should have is a separate discussion that can be had. But my claim was that your original statement, that iCloud was "hacked", is incorrect, since it implies there was some weakness in Apple's technical backend.
Maybe they should take a couple of notes for their broken cloud implementation from another phone manufacturer in the space that takes security seriously:
Phishing. I think your comment is pretty disingenuous, as Apple has generally shown a strong commitment to privacy and security.
>According to court filings, Collins stole photos, videos and sometimes entire iPhone backups from at least 50 iCloud accounts and 72 Gmail accounts, “mostly belonging to celebrities,” between November 2012 and September 2014, when the photos were posted online.
Replace "Signal" with "OpenBSD" and watch the freaking fireworks.
(Why OpenBSD? De Raadt hasn't promised to be "nicer" recently. Linus has.)
What the five eyes do is normal espionage. They often get away with it, but when they are uncovered their illegal actions aren’t covered by gag orders.
IIRC Jeremy said it's a Swiss company with Swiss servers, so, no problem.
Do you have a link for that statement?
But we're not just talking about headquartered companies.
Cloudflare has datacenters in Brisbane, Melbourne, Perth, and Sydney, and an office in Sydney. Could they be compelled to hand over your website's certificate, which they hold because they're the front-end load-balancing proxy for your website? That way the police could man-in-the-middle your website, and Cloudflare would be gagged from telling you they gave away the private TLS certificate you entrusted them with.
Give customers a choice of whether to enable Australian proxies for their site.
Use distinct certificates for the proxies in each region, with short expiration times to limit the damage if a certificate falls into the wrong hands.
Stealing your own existing cert is less likely to be detected.
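To see why a stolen existing cert is the quiet option, consider certificate pinning, where a client compares a hash of the exact DER bytes of the presented certificate against a known-good value. A hypothetical sketch (names and scheme are illustrative): a MITM presenting a different cert, even a CA-signed one, changes the fingerprint and gets caught; a MITM presenting the stolen original produces an identical fingerprint and sails through.

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate's raw DER bytes,
    the value typically used for certificate pinning."""
    return hashlib.sha256(der_bytes).hexdigest()

def pin_matches(presented_der: bytes, pinned_fingerprint: str) -> bool:
    """A pinning client rejects any cert whose fingerprint differs.
    A MITM serving the *stolen original* cert passes this check,
    which is why theft of the existing cert is hard to detect."""
    return cert_fingerprint(presented_der) == pinned_fingerprint
```

Short-lived per-region certs, as suggested above, at least bound how long a stolen fingerprint-identical cert stays useful.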
Obviously half the employees, including directors, are in Australia, so that opens them up to pressure anyway, just as Japan could in theory compel a director of a foreign company by threatening to jail him on a fortuitous tax accusation. The directors would never be so principled as to refuse to comply, right?
But they still have subsidiaries in Australia, which arguably does a lot of work given that at least a third of their employees are in Australia, and so would be subject to the local laws. The employees could be compelled to bypass things and not be allowed to talk about it to anybody, including the parent company.
Hotel rooms usually provide a small safe for which guests can pick the combination. There is also a secret backdoor combination or master key that lets hotel staff open the safe.
This creates an obvious security hole: if the backdoor combination is easy to guess, or a copy of the master key falls into the hands of unscrupulous employees or ex-employees, then the contents of the safe can be stolen. As a guest, there is nothing you can do to reduce this risk.
Now, imagine that a hotel found a solution where locked safes are destroyed and replaced at almost no cost to them, and could give up on the backdoor. Burglaries involving the backdoor would vanish, and although this slightly increases the risk of losing your belongings by forgetting your combination (since the hotel can no longer open the safes of forgetful guests), it's a net improvement in security.
Ten years later, the hotel community has reached a consensus that safes-without-backdoors are the Right Thing to Do. The state then mandates that all hotels should be able to give access to the contents of those safes to the police. But they're not saying that hotels have to use a backdoor combination or master key, so they're not really asking anyone to reduce the security of their safes...
Reminds me of the time someone tried to legislate pi=3. There's absolutely no way to give police a back door into encryption without giving criminals the same back door.
This feels disingenuous to me. It would be fairly trivial, for example, to store a copy of all keys, encrypted with the government's public key. Of course, there's a million ways it could go wrong, but that's different from "mathematically impossible."
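The structure of such a key-escrow scheme is indeed simple. Here is a deliberately toy sketch: the user's key is wrapped under an escrow secret so that only the escrow holder can recover it. The "cipher" below is a SHA-256 counter-mode keystream for illustration only, not real cryptography; a real scheme would encrypt to the government's public key (e.g. RSA-OAEP or an ECIES variant). The sketch also makes the new single point of failure concrete: leak `escrow_secret` and every wrapped key falls.

```python
import hashlib

def _keystream(secret: bytes, n: int) -> bytes:
    # Toy keystream from SHA-256 in counter mode -- illustration only,
    # NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def escrow_wrap(user_key: bytes, escrow_secret: bytes) -> bytes:
    """Wrapped copy stored alongside the ciphertext; only the escrow
    key holder (here: the government) can unwrap it."""
    ks = _keystream(escrow_secret, len(user_key))
    return bytes(a ^ b for a, b in zip(user_key, ks))

escrow_unwrap = escrow_wrap  # XOR wrapping is its own inverse
```

The "million ways to go wrong" live almost entirely in who holds `escrow_secret` and how, which is the point the replies below make.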
As is pointed out in the Schneier article, the problems with a key escrow scheme are on the law enforcement side of things. They could lose access to their keys, especially if a lot of different agencies have keys.
Those are difficulties that can in theory be overcome, although it may not be practical to do so. That's a far cry from a pi = 3 issue.
This is approaching a pi = 3 level falsehood because of the “in no way compromises” clause. There are many schemes that are outright illegal (in my not a lawyer interpretation of this law), and it nakedly makes the other schemes harder with state actors as additional points of failure.
Appealing to Schneier on the topic of encryption is not an irrelevant appeal.
I’m not a cryptographer, but I assume there are other schemes that are at least weakened by the requirement of a third party holding a key, much like the TSA master-key program was broken by statistical analysis of locks that were mastered this way.
But the mathematical impossibility of this aside, there is a very real practical impossibility in trusting an organization as large as the US government to keep such a database secure. There are better ways to help law enforcement than blowing such a large gaping hole in the web.
Most secure N party communication systems can be made to be secure N+1 party communication systems. If that +1 is the police, then arguably you have in fact given police a back door without giving criminals the same back door.
Criminals who want in would have to do so the same way they would before the back door--compromise one of the parties to the communication--except now there are N+1 parties to try and compromise instead of N so the attack surface is larger. How much this lowers security depends largely on the competence of the +1 party.
The popular model of a back door seems to be some wide open spying interface protected only by running on an undocumented port or something like that, and that all the bad guys have to do is get a copy of one client, reverse engineer it to find the access info, and then they are in.
For some reason, people tend to overlook that a back door is really just another communication channel, and the mechanisms modern cryptography provides for securing communication channels apply to back doors as much as they do to any other channel.
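The "back door is just another channel" framing can be made concrete. In multi-recipient encryption, one content key is wrapped once per recipient; making an N-party system an N+1-party system is structurally one more wrapped copy, protected the same way as the others. A toy sketch under that assumption (the hash-XOR "wrap" is illustration only; real systems wrap to each recipient's public key):

```python
import hashlib, os

def wrap(content_key: bytes, recipient_secret: bytes) -> bytes:
    # Toy key-wrap: XOR the 32-byte content key with a hash of the
    # recipient's secret. Illustration only, not real cryptography.
    pad = hashlib.sha256(recipient_secret).digest()
    return bytes(a ^ b for a, b in zip(content_key, pad))

def encrypt_to_all(recipient_secrets: dict) -> tuple:
    """Wrap one random 32-byte content key once per recipient.
    Adding the police as recipient N+1 is literally one more dict
    entry; their wrapped copy gets the same cryptographic protection
    as everyone else's, and becomes one more target to compromise."""
    content_key = os.urandom(32)
    wrapped = {name: wrap(content_key, s)
               for name, s in recipient_secrets.items()}
    return content_key, wrapped
```

How much this lowers security then comes down to the operational competence of that +1 party, exactly as argued above.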
Never mind that increasing the number of parties being attacked almost always makes the attacker's job easier, not harder. See the TSA master-key debacle for an example of how adding a third-party master weakened the security and allowed statistical analysis to break the lock.
As long as one corrupt cop exists in the access pool, the premise doesn't hold up.
1. The law applies to all tech companies who have users in Australia, regardless of where the company is incorporated.
2. Atlassian offers primarily self hosted products, to which this law does not apply.
The only difference is that it's easier to write off Australia than the entire EU.
A company based in Iceland might have Australian customers, but if they have no representation in Australia, there's precious little the authorities can do. They are of course free to pursue their inquiries through the country where the company is resident (Iceland), but that country has no obligation to adhere to Aussie laws, and most likely isn't going to.
Yes, but as someone not living in, working in, or traveling to Australia I don't really have to care. I don't own a company that makes crypto, but if I did, my reaction to this would be along the lines of "don't open a Sydney office but otherwise business as usual".
On my reading of the A&A bill there was no extra-territoriality clauses, so I think it only applies within Australia.
However, I am not a lawyer and this is not legal advice.
> 5. the person provides a service that facilitates, or is ancillary or incidental to, the provision of an electronic service that has one or more end‑users in Australia
> 6. the person develops, supplies or updates software used, for use, or likely to be used, in connection with:
> (a) a listed carriage service; or
> (b) an electronic service that has one or more end‑users in Australia
> 8. the person manufactures or supplies components for use, or likely to be used, in the manufacture of a facility for use, or likely to be used, in Australia
There are several more.
A communications provider, under the given definitions is not bound to be on Australian soil, but rather interacting with Australia as a nation.
Applying this law to those of a different nationality is difficult and unlikely to succeed; however, those with dual citizenship can be held accountable.
This opinion of mine, that the law does apply internationally, is one I have seen supported by several law firms I have occasional contact with.
Probably aided by:
> 317F. This Part extends to every external Territory.
> 317ZC.4 Part 4 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:
> (a) every external Territory; and
> (b) acts, omissions, matters and things outside Australia.
> 317ZD (Enforceable Undertakings).
> Part 6 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:
There are a few more, but as the Act states that it is enforceable in both external Territories and over acts, omissions, matters and things outside Australia, I do think the most likely reading is that 'acts' can be enforced upon Australians living outside the borders.
The "acts, omissions, matters and things" wording appears to give extra-territoriality to subject matter but not to legal personalities (i.e. companies and people).
The "communications provider" part is very broad, and while I am in Australia I am definitely covered by it. But courts will not generally interpret legislation as having extra-territorial effect unless it explicitly says so; otherwise every Act would need stuff like "ps. the Fisheries Amendments (Rex Hunt Is A Wanker) Act is non-territorial".
My question is not about whether a legal personality (ie, a company) is affected if they have a physical-legal presence in Australia, because of course they are affected. My question is whether someone like me, who is outside Australia's boundaries, can be served a notice while I am out of the country. On my reading it's still a "no".
But I am still not a lawyer.
We are a testing ground for this insanity, the time to fight it is now.
Australian citizens did not ask for this. All the "consultation" letters were ignored in favour of creating a police state.
Then when law enforcement claims they are not being cooperative, they can say they have a tool that meets their needs if they're patient.
Courts don't take too kindly to people trying to be cute with their demands. It's not as if they're going to say "well, they're technically right" and give up; they're going to just up the consequences or clarify the request until you comply in the way that everyone knows they want.
Edit: And according to the Wikipedia page for Lavabit, he was successfully held in contempt of court for the printout move.
We absolutely do not need the "not my problem, I don't care, engage in mass surveillance all you want!" engineers.
And even in the US it still isn't a sure thing that Lavabit's response would work again if someone else tried it. I don't know the details, but I believe there is still some uncertainty around whether the FBI just kind of "allowed" them to close down by not pursuing it any further, or whether they got what they needed, or whether secret laws were changed because of this instance.
In Lavabit's case, there was a lot of FBI involvement, a lot of secret court orders and gag orders, and a lot of accusations from the owner of Lavabit that he was brought to secret courts without legal representation and no chance to appeal, and even he says that there are things he still can't talk about.
Please note that we're not seriously suggesting that encryption providers should adopt this -- not as long as there are other options. But if you're legally obligated to do something, this is the "f*ck off and leave me alone" approach to compliance.
 C.V. Wright and M. Varia. Crypto Crumple Zones: Enabling Limited Access without Mass Surveillance. In Proceedings of IEEE European Symposium on Security & Privacy, 2018.
Thinking like an economist, you want to align the incentives to make it possible but not free to access user data. A weak key (per user) that's breakable with $10k compute cost seems about right to me, but the actual optimal cost may be higher or lower.
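As a back-of-envelope check on that "$10k compute" figure: with an assumed cost per key guess (a made-up knob, tune it to your attacker's hardware), the largest key size breakable on average within a budget follows from the 2^(n-1) expected guesses of brute force.

```python
import math

def max_breakable_bits(budget_dollars: float, cost_per_guess: float) -> int:
    """Largest key size n (bits) whose average brute-force cost,
    2**(n-1) guesses, fits within the budget. cost_per_guess is a
    hypothetical parameter, not a measured figure."""
    affordable_guesses = budget_dollars / cost_per_guess
    # 2**(n-1) <= affordable_guesses  =>  n <= log2(affordable) + 1
    return math.floor(math.log2(affordable_guesses)) + 1
```

At a hypothetical $1e-12 per guess, $10k buys an average break of roughly a 54-bit key; each extra bit doubles the price, which is exactly the economist's tuning knob.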
But I think we've had the option to send personally encrypted end-to-end messages for a while now. (Open)PGP anyone?
So instead of using Signal, or WhatsApp, or whatever, and depending on their client-side encryption (and possible server-side decryption) of private messages, how about plain email using standalone user encryption?
Two things may come of this: Google will stop "interpreting" my email messages, and laws like these stop mattering very much.
Of course, 5th amendment (and its siblings in other countries) still apply...
Do you want a police state? Because secret surveillance of all citizens is how you end up with a police state.