"The new policy was not formally announced but appeared sometime over the past few days on Apple's publicly available law enforcement guidelines. It follows the revelation from Oregon Senator Ron Wyden that officials were requesting such data from Apple as well as from Google, the unit of Alphabet (GOOGL.O) that makes the operating system for Android phones."
This sums up Apple's view on privacy. So essentially they were fine handing over the data without a warrant until this was exposed. Now that it has gone public, they are "revising" the process to bring it in line with their competitor Google's.
They can care about privacy all they want but if they are legally required to hand something over what are they realistically going to do?
Really their only solution is to make everything end-to-end encrypted so it is impossible to comply (and they have been slowly making more and more components E2E), but that takes time.
I honestly don't know how notifications could realistically be E2E, since the source is not an Apple device under their control, but maybe it would be possible?
The point here is that by tightening up their guidelines, Apple has tacitly admitted that they previously handed over notification data at times when they were NOT legally required to do so. Otherwise, they wouldn't have tightened up their guidelines.
If you're Apple scale and you have an army of lawyers and a big part of your marketing is that you're the privacy-conscious choice, you shouldn't fuck this up. You should have your lawyers challenge the subpoenas or do whatever has to be done and it's bad for your brand if you don't. It's unfortunate for everyone including Apple that it took Ron Wyden being a gadfly to get them to do what they should have been doing all along (and while it may not have always been about notifications specifically, he's been at this for years).
In an ideal world that is a great ideal, but that is not the world we live in.
My understanding is that before this was made public Apple was under a gag order and could not talk about this. In a situation like that how would Apple be able to bring lawyers into the case to fight it?
> they were NOT legally required to do so.
My understanding is that they very much were forced to do this.
Again I am going to ask: if Apple and Google were under gag orders, realistically what were their options?
To me it sounded like they didn't have any and the only reason we are able to get these protections now is because it's public knowledge.
> My understanding is that before this was made public Apple was under a gag order and could not talk about this. In a situation like that how would Apple be able to bring lawyers into the case to fight it?
Apple has in-house lawyers. Who do you think is advising them on this gag order? They don't need to bring anyone else in to fight it. They already have the resources.
ok, so they can talk to those lawyers. Maybe those lawyers concluded that Apple had no choice and could not fight this.
You're saying they could fight this, I am saying they can't.
Frankly neither of us have evidence either way since until a few days ago this was under a gag order.
But considering they complied, I am more inclined to believe they had to and had no choice.
Both Apple and Google complied with this order for some reason, knowing that when it comes out it will be criticized.
To me the only logical conclusion is that they had no choice but to comply, and if they consulted with their lawyers they likely came to the same conclusion.
So what has changed that they can require a judges order now and couldn't before? Why was Google apparently able to require a judges order before but Apple couldn't?
Except Google requires an order from a judge, and Apple did not. No one is arguing that the courts can't force Apple or Google to turn over data, but apparently Apple guidelines did not actually require a judge to issue the order.
Even making everything E2E-encrypted doesn't stop any government saying "give us access or your executives go to jail and your products are banned from sale".
End-to-end encryption means that illegal or covert surveillance is impossible without more work from the company, which makes it easier to say "we can't give you access" when spooks show up.
Obviously they can still send you to jail, but for this grey-area, "technically maybe kind of legal" surveillance, execs at tech companies aren't going to jail for not complying with something that's actually illegal.
From most valued company in mankind's history, in US, in 2023?
Keep dreaming, those people are not handled like mere humans, nor are those companies handled like some small family business. Nobody actually responsible went to jail for the 2008 financial crisis, and it would have been trivial to start pointing fingers at various culprits (though you would need many). Also, banning Apple products in the US would be literal political suicide unless they angered most of the US population with something really, really bad.
From time to time somebody who wants free publicity for some political goals will venture on some small crusade but otherwise that's simply not how US handles its business currently.
> From most valued company in mankind's history, in US, in 2023?
Yes, because we live in the real world not a corporate cyberpunk dystopia where multinationals are stronger than nation states.
There are many differences between the financial crisis and my hypothetical, but before I list them I will point out that banks were fined (and some ceased trading) and that people did go to jail: https://ig.ft.com/jailed-bankers/
One of the bigger problems leading to the financial crisis is that people heard about the Black-Scholes equation, and the magic words "won the Nobel prize in economics" turned their brains off.
Another big difference is that the governments understand the importance of money, while they very obviously don't understand the importance of encryption.
Another is that the current zeitgeist is that Big Tech is too big and needs to be brought to heel — unlike the bankers, who by the GFC were all asking for (and getting) let off the leash they'd been given since one of the many previous financial crises.
But rather than getting down in the weeds with these comparisons, I would end by noting that when you write:
> Also banning Apple products in US would be a literal political suicide unless they would anger most of US population with something really really bad.
You're assuming I meant the US government. Nope, there's about 200 sovereign nations in this world. Apple's products have suffered or been threatened with sales bans due to non-compliance with the laws in:
Now that the publicly available guidelines were exposed, they changed and Apple didn’t even send a notification to every journalist in the known universe. Time to gather the tin foil hats!
> So essentially they were fine handing over the data without warrant until this was exposed.
They were required by law to both hand over the data and not say anything about it.
Furthermore, disabuse yourself of the notion this instant that you are safe from being spied on by state actors. You are never anonymous on the Internet.
> As with all of the other information these companies store for or about their users, because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information.
"Secretly compelled" sounds illegal on the government side. In a democracy, if the government wants to force you to do something, it should go through official due process, i.e. a court warrant.
I mean, what do you want? Should they dig their heels in and stick with the old policy out of some kind of refusal to admit mistakes?
Companies get things wrong. Apple got this wrong. But, as they say, the best time to fix a mistake is before you make it, and the second best time is now.
Do you know how this relates to non-US customers in... non-US countries? I.e. everyone else, everywhere else? And whatever applies to Apple in this sense, I assume also applies to Google.
If the US government knows when my laundry is done, because my local HomeAssistant instance uses google play services to send a push notification...
Apple is a US-based company. So even if it means breaking local law, the US has the upper hand in forcing Apple to provide information if it doesn't care about diplomacy.
As a foreigner, protection from the American government is the exception, not the rule.
This lack of suitable protection is what nuked the GDPR agreement between the USA and the EU (twice). The current iteration of this framework, which I can only think of as part of a war of attrition against Max Schrems, is largely the same and will likely be invalidated in the future for the same reasons.
This isn't only an American issue, of course; every country or economic bloc wants to keep its citizens' data within their borders but also wants the ability to exploit intelligence for foreign parties.
If you want the legal protections you enjoy at home, avoid foreign products and services. If you have to pick a foreign service, pick one from a region that's least likely to cause trouble for you.
Indeed. The USA has the freedom not to be compatible with EU law and decided to prefer the ability to deny European citizens appropriate data protection over letting American companies hold EU citizens' data.
I'm not sure what drives the EU to form such meaningless pacts every couple of years. I'm guessing a combination of lobbyists and sucking up to the American government are the driving force behind these deals. It's rather sad, really.
I think it's because of how many of the major tech products come from the US. If they started to actively enforce every violation the moment it officially became one, nearly every company in Europe would be affected. Salesforce, HubSpot, Google Apps, Office 365, etc. all leading to violations would be a major thing; it would give some European tech companies a massive boost and probably improve the European tech scene, but it would be a major issue for a while.
There's plenty of European tech, but European tech companies either suck at marketing, suffer from outdated business practices, or get bought out by American companies when they start growing.
Any data center can start competing with at least the important parts of Amazon/Google/Microsoft if they collect the right open source tools and invest in making them easily accessible, but very few seem to want to take any risks. They'd rather have customers stick to their own custom APIs and web dashboards than implement any kind of common standard.
I want the European tech industry to be better, but if that's done through fining American companies into impracticality we'll only end up stuck with the annoying practices that have caused American companies to be so popular in the first place.
> There's plenty of European tech, but European tech companies either suck at marketing, suffer from outdated business practices, or get bought out by American companies when they start growing.
I would say the issue is mostly on the funding side of things. US companies get more funding, so they can splash the cash on marketing and hire larger teams to build faster.
The fundamental problem is accumulation and correlation of, for lack of a better term, “public private data”.
Phone records, credit card statements, toll road data, cell tower data, etc. are not "our" data; they're the assorted companies' data. It just happens to be directly related to private activities, even if in public places.
So authorities route around principles of privacy and such by engaging in entities who are not us. There’s nothing stopping authorities from talking to a neighbor. “Did you see anything? Were they home last night?”
“Oh, you’re a delivery person, did you deliver something to that house? Do you know what it was? Can you describe who answered the door? What time was that?”
Technically, that’s what the investigators are doing. No different from long standing investigation techniques.
The problem is scope, volume, ease of access, detail of the data, and, most important, how long it’s kept. Wouldn’t surprise me if the phone companies have my travel history for the past several years.
The concentration, resolution, and time span of the data makes the “Enemy of the State” movie more and more real. But it’s all using fundamental investigative techniques, policies, precedents and such that police services have been using for 100 years. It’s just cranked up way past 11. Those old court rulings and such, and concepts about what privacy means today need to change, or we’ll never be able to leave our houses.
The metadata is all you need: find a few messages in an interesting e2e chat, ask Apple which accounts received push notifications at those specific times, find intersection.
I’m not sure how E2EE would work for notifications. The ISV sends the notification but the OS displays it. I suppose some key exchange between each ISV and each device?
My understanding is that Pushover encrypts the message with your device's public key (from the app) and delivers it through Apple/Google. On iOS you need a special permission from Apple to "filter" push notifications and this is the hook they use to capture and decrypt it before displaying it on your phone
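As a rough sketch of that flow (the helper names and the toy cipher here are my own illustration, not Pushover's actual scheme - a real implementation would use something like libsodium sealed boxes or AES-GCM): the sender encrypts the payload so the relay only ever carries an opaque blob, and the device decrypts it just before display.

```python
import hashlib, hmac, json, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. For illustration only --
    # do NOT use this as real cryptography.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal_payload(device_key: bytes, payload: dict) -> dict:
    """Sender side: encrypt before handing the blob to the push relay."""
    plaintext = json.dumps(payload).encode()
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(device_key, nonce, len(plaintext))))
    tag = hmac.new(device_key, nonce + ct, hashlib.sha256).hexdigest()
    # The relay (Apple/Google) only ever sees this opaque envelope.
    return {"nonce": nonce.hex(), "ct": ct.hex(), "tag": tag}

def open_payload(device_key: bytes, envelope: dict) -> dict:
    """Device side: e.g. inside a notification extension, before display."""
    nonce, ct = bytes.fromhex(envelope["nonce"]), bytes.fromhex(envelope["ct"])
    expected = hmac.new(device_key, nonce + ct, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["tag"]):
        raise ValueError("tampered notification")
    plaintext = bytes(a ^ b for a, b in
                      zip(ct, _keystream(device_key, nonce, len(ct))))
    return json.loads(plaintext)

key = secrets.token_bytes(32)  # shared at app install/registration time
env = seal_payload(key, {"title": "New message", "body": "hello"})
assert open_payload(key, env) == {"title": "New message", "body": "hello"}
```

The key point is architectural, not cryptographic: the relay in the middle never holds anything it could hand over except ciphertext and delivery metadata.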
Apple forces you to use their notification service. At one point, they even tried to force desktop users of Safari to be bound to their notification service instead of being able to use web push notifications, as well.
Apps distributed on the Play Store also have to use Google's notification service.
True, but apps like signal get around that by having a pushing notification for "you have a new signal message", which then triggers the client to contact the signal servers to download the message.
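A minimal sketch of that pattern (all names here are hypothetical, not Signal's real API): the push relay only ever carries a contentless wake-up, and the message itself travels over a direct client-to-server fetch.

```python
class MessageServer:
    """Toy message server illustrating the contentless-push pattern."""

    def __init__(self):
        self.mailboxes = {}  # user -> list of (still encrypted) messages

    def send(self, recipient: str, encrypted_message: bytes) -> dict:
        self.mailboxes.setdefault(recipient, []).append(encrypted_message)
        # The push relay (APNS/FCM) only ever carries this generic payload:
        return {"to": recipient, "payload": {"wake": True}}

    def fetch(self, user: str) -> list:
        # The client connects directly (over TLS) to pull the real messages.
        return self.mailboxes.pop(user, [])

server = MessageServer()
push = server.send("bob", b"<ciphertext>")
assert push["payload"] == {"wake": True}         # relay learns nothing about content
assert server.fetch("bob") == [b"<ciphertext>"]  # content travels out of band
```

The tradeoff is an extra round trip on every notification, which is exactly the battery/latency cost the platform push services were designed to avoid.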
And a lot of messaging programs send push notifications for highly dubious reasons, like when you read your own messages (so that the other devices mute their notifications). Element does this.
…so? That’s just not very useful when applied to a large scale service like Signal. You and god knows how many other people are getting a generic “check me” notification at a high rate.
Again, this doesn't necessarily apply in the case of a sufficiently used service like Signal:
- Alice sends message to Bob at 12:01PM
- Bob receives generic "check me" notification at 12:02PM
The key point with line #2 is that Bob is not the only user who received that generic notification at 12:02PM, and he is likely in more than one chat. Pinning the specific notification to Bob is a needle in a haystack. It's not perfect, but it's not the slam dunk people are making it out to be.
And before that's even a concept, you have to contend with the fact that it's very hard to use an Apple device anonymously. The government doesn't need to correlate message timings, because if they have the APNS token for the device, all they have to do is ask Apple who the hell owns the device.
Anonymity and privacy are two different, albeit related, concepts.
If you have metadata for a couple of messages it is no longer a needle. Not sure what your point about APNS tokens is - I agree, once they home in on who received the messages, Apple would know the device.
Look, if all you're after here is an "I hate Apple" pity party, you can do better: iMessage is backed up to iCloud by default - without end-to-end encryption - and offers an even easier workaround for the government. I would have to imagine they get more from that than from notification payloads.
It's pretty simple: don't use iMessage if you value secure communication, since we know Apple gets hamstrung by the government at points. Devs building for the platform should be taken to task here because they need to be aware of and optimize around the data leaking issue, relying on the parent platform to always be a good steward is a recipe for disaster.
If I sent a message to a group of friends, you don't need to know the contents if all you want to know is who my friends are. Just look for groups of people getting notifications at the same time. After a few messages, you'll know exactly who is in that group.
I did some anonymization across hundreds of millions of users for some GDPR stuff a few years ago. If I knew when you made a purchase, to within a day, for two separate one-off purchases of our products ... I could deanonymize you. Specifically just you. If you had a subscription payment, and I knew at least one payment you were late on, and the hour you eventually paid it ... I could deanonymize you down to a group of 3-4 people.
Correlation and timing are huge for deanonymizing things in otherwise anonymous data. We had to round our dates to the week to keep some of our analytical data anonymous.
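The correlation attack described above is really just set intersection. A toy example (the data here is entirely made up):

```python
from functools import reduce

# Hypothetical observed data: for each interesting message timestamp (e.g.
# taken from a seized device), the set of accounts that received a push
# notification in that time window.
recipients_at = {
    "12:01": {"bob", "carol", "dave", "erin"},
    "14:37": {"bob", "erin", "frank"},
    "18:02": {"bob", "erin", "grace"},
}

# Intersecting across even a few timestamps shrinks the haystack fast.
suspects = reduce(set.intersection, recipients_at.values())
print(sorted(suspects))  # -> ['bob', 'erin']
```

Each additional timestamp roughly divides the candidate pool, which is why rounding timestamps (as described for the GDPR work above) is such an effective mitigation.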
Yeah, no. You're missing the point of this thread here.
> Just look for groups of people getting notifications at the same time.
In a sufficiently built system - like what Signal does, and like what most apps should be learning from - all the notification payload is getting is a "check for updates" command. You and literally everyone else are getting that generic notification all the time if you're in more than one chat. Just because Alice sent a message at 12:01PM and Bob received the generic "check me" notification at 12:02PM doesn't mean Bob necessarily received Alice's message specifically, because the notification has absolutely no concept of the messages period. Bob could have gotten that notification due to any of the other chats he's in.
At best the argument here is that the government could use the push notification token to link a Signal account to a real username via Apple, but if you're that paranoid you shouldn't use an iOS device anyway. You also likely wouldn't use Signal due to the phone number issue.
The "send" part I understand. What I don't understand is the "store" part.
And is Apple's routing even needed? Web push notifications don't need Apple, Google, Microsoft or Ubuntu between the web app and you. Can't app push notifications work in the same way?
The web push architecture requiring middleman servers in between the website and the browser is similar to native iOS apps requiring the APNS server (Apple Push Notification Server):
If the device is not able to receive the notification at the time it was sent out, it has to be stored somewhere, pending delivery. It's the same with email, SMS, literally every messenger app, heck it's the same with physical parcels!
You could argue the protocol should allow the sender to store the message, to allow the receiver to fetch it directly, without a middleman - and this is more or less what RSS is, except the polling architecture causes both extra latency and increased server load. But we could "fix" RSS by eliminating polling, etc - that's an easy engineering problem, the true issue would be standards/consensus.
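A quick back-of-envelope comparison of the two architectures (the numbers are illustrative assumptions, not measurements):

```python
# Polling vs. push for notification delivery, order-of-magnitude only.
clients = 1_000_000
poll_interval_s = 300  # each client polls every 5 minutes

# Polling: constant request load regardless of actual message volume, and a
# message waits on average half an interval before anyone asks for it.
polling_requests_per_s = clients / poll_interval_s
polling_avg_latency_s = poll_interval_s / 2

print(f"polling: ~{polling_requests_per_s:.0f} req/s, "
      f"~{polling_avg_latency_s:.0f}s average latency")

# Push: one delivery per actual message, latency is just network delay --
# at the cost of a central relay and one persistent socket per device.
```

This is the tradeoff that pushes everyone toward a single shared relay: one always-open connection per device, multiplexed across every app, instead of a million idle polling loops.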
Finally, since we're talking mobile devices, you must consider battery usage. It's cheaper to keep only a single TCP connection open; this path is heavily optimized at every layer of the stack, from network hardware and drivers to the app-facing API. Notably, email notifications have to go through their own network path (there's a couple of IMAP extensions to enable push), this is part of the reason why some email apps would like to look at your email from the POV of their own hosted backend.
These choices reflect the fundamental problems of mobile messaging. But we could still improve upon the status quo - like requiring E2EE for notification payload. That would be a huge effort though, as an app vendor you already need to implement APNS/FCM/Web (all very different from one another), and I'm sure every middleman would like to build their encryption slightly differently. Adoption would be slow and painful.
>, it has to be stored somewhere, pending delivery.
I think the "stored?" question isn't about the minimal fraction of time notifications have to be in transient storage to even "send and immediately forget". Yes, a transitory "message queue pub/sub" architecture still needs to have some notion of "storage" to work.
Instead, "stored?" is wondering about a company's discretionary and optional retention period after the successful notification delivery that isn't required by law. E.g. 30 days?, or 1 year?, or longer? ... such that those archive records of past notifications have become targets for court subpoenas and law enforcement information requests. If Apple doesn't "store" notifications in this sense, there's nothing to "hand over".
If every app needs to poll different servers periodically, or listen for its own pushes, it will be a nightmare for battery life and a big detriment to phone resource usage.
Sure, but shouldn't this be e2e encrypted? You could probably come up with a scheme such that Apple doesn't know the contents, sending app or recipient device.
This is up to app developers to implement themselves. The mechanism Apple provides for this is UNNotificationServiceExtension, which allows an app to step in before a notification is delivered so that it can do things like decrypt it.
Encryption should have been the default in the official framework/API to begin with. But I assume E2EE wasn't a concern for Apple/Google back then, and now it's too late to change everything without a lot of pain.
Apple previously didn't have any problems making breaking changes to their stuff, they even switched their whole CPU architecture and said "deal with it". If E2EE was really a concern to Apple, they'd implement it.
Push notifications just have the convenience that even technical people often don't know that most of them are routed directly through Apple/Google. So how should your average user know? It's free data, and nobody complains about it the way they would with, e.g., messengers.
> Apple previously didn't have any problems making breaking changes to their stuff, they even switched their whole CPU architecture and said "deal with it"
What? Apple put a tonne of effort into continuity between arches. You think Rosetta is them saying “deal with it”?
> Apple put a tonne of effort into continuity between arches.
Still can't run windows on an M* mac, can you? Why? Because there is literally no documentation for the CPU and they didn't really follow any standards. They just built some shit that literally only Marcan probably knows how it actually works. Even the people who made it probably don't know how it works, only how it is supposed to work.
It should. I am sure that Apple will be working on this now. It’s unfortunate that they didn’t implement it before. Maybe Apple itself didn’t realise that this data was being taken by the authorities, and the parts of Apple that did know were not allowed to inform engineering.
Push is just a technique for minimizing (scarce wireless) bandwidth, battery and RAM consumption, which was critical in the mass deployment of the initial iPhone on AT&T.
iPhone developers typically aren't very skilled, and they will stuff things into APNS payloads without thinking through the consequences (they're just happy to get anything working at all, and more often than not are in some remote country that doesn't allow familiarizing with the user context), so this is definitely a privacy concern, whether due to legal interception or hacking.
Perhaps a future version of SwiftUI (and what Xcode Cloud may evolve into) will implement a simple E2E Push scheme by default, allowing Apple to spend more time reviewing the Push API use by the remaining legacy apps.
But…take comfort in knowing that at least Apple’s server is not purposely located across national boundaries (like Blackberry). That’s good because users have no expectation of privacy from monitoring (by US government or otherwise) when using servers (or when they themselves are) on foreign soil.
Meanwhile, US let 12,000 unknown persons literally wander across the border in just a single day last week (US can only handle 1,000 border crossers on a good day) and some are committing horrible crimes (https://foxsanantonio.com/amp/newsletter-daily/undocumented-...) —and yes, they do have Android and iPhones. The US is also sinking another $1T in (unfunded) debt every 90 days (meaning another Great Depression is not out of the question) and maybe it’s time to better focus our resources, for example, (after 40 years) China seems just about done surveying western IP and is clearly moving into a new era by having their massive APT groups (collectively 50x larger than US cyber forces) operate crude things like MoonBounce deployments —we have almost no monitoring for factory-installed threats, we’re totally wide open.
https://youtu.be/j7OHG7tHrNM
The law requires a warrant to force a company to hand over data. In many cases, companies (and people) are free to cooperate with law enforcement if they choose to. Apple seemingly chose to hand over data when asked, no warrant needed.
If a police officer comes to your door without a warrant, you can still tell them "hello, please come in and look around the house". I'm not sure why you would, but you have the freedom to do so. Perhaps it creates some goodwill between you and the police officer, as it might between Apple and the American government.
> If a police officer comes to your door without a warrant, you can still tell them "hello, please come in and look around the house". I'm not sure why you would, but you have the freedom to do so.
Nice example. I also work as a legal advisor and had that two times with a client. Second time my client invited them in, because my client doesn't want the neighbors to hear about it (he'd rather have them guess). Police know how to play this game. ;) Another game they played every time is this: one officer talks with the client, the other police officer is listening and suddenly interrupted him with some minor detail. They do everything to bring you off balance.
They never brought the so-called 'problem' to trial. It's just about letting him know they're paying attention and showing up at his doorstep (although they have his phone number). It's a shame they never did something with the false complaints against my client.
And yes, me and my client are from The Netherlands. ;)
I can't be really mad at them, although it's pretty annoying, that's for sure. To give you one glimmer of hope: after police officers at your door, you might get invited for a hearing at the police station with two detectives, they are way more nuanced than the officers on the street. But keep it business-like, because they also ask all kinds of personal questions that have nothing to do with the case at hand.
Edit: and remember, you don't have to tell them anything (everything you say can be used against you, although a little cooperation can go a long way). You can always say: if you're so sure about it, then make a case out of it.
Edit 2: I'm only talking about The Netherlands here. And my advice is general, not specific to a certain case.
Just for the people reading in the US: if you are unsure, you do NOT have "a right to remain silent", at least not in the sense I think of it. You must actively assert BOTH your Fifth and Sixth Amendment rights. More is in the book "You Have the Right to Remain Innocent", which I must confess I have not completely read, but the gist is: Fifth and Sixth Amendments.
> He argues that because of Salinas v. Texas, the fact that someone has asserted the Fifth Amendment can be used as evidence against them in court, so he suggests criminal defendants and interrogatees instead invoke the Sixth Amendment, the right to legal counsel.[3]
>Apple seemingly chose to hand over data when asked, no warrant needed.
Is there an example or reference about apple doing this? The article doesn't seem to provide one.
It's not clear to me if previously they required a warrant, and now that's formalized on their public page, or if previously they didn't and they changed their policy.
In many cases the law doesn't require one. People and corporations have a lot of freedom to give information to law enforcement willingly when asked. Warrants are required to force you, but if you'll say yes when asked nicely, there are far fewer protections.
If Apple was previously more accommodating to law enforcement requests than legally required to be, then surely they can update their internal response threshold to reflect minimum necessary compliance.
Don't get me wrong (my fault for giving such a short statement without explanation), I agree wholeheartedly, but the wording in the title sounds a bit like Apple has a final say in this kind of thing. And maybe they have...
In order to enable push notifications, the on-premises Exchange 2013 servers must connect to the Microsoft 365 or Office 365 Push Notification Service to send push notifications to iPhones and iPads. Exchange 2013 on-premises servers route their update notifications through the Microsoft 365 or Office 365 notification services to remove the need for enrolling developer accounts with third-party push notification services.
So it goes like
1. Your server
2. Microsoft server
3. Apple server
4. Device
So this raises the next question: does Microsoft require warrants to disclose these notifications?
I know it might sound like something from an alien movie, but my setup is 100% on-premises with no Azure or O365. I did some googling, which says an "HTTP PING" is kept going, essentially like a long-running socket. I think I'd still need to pcap the Exchange cluster to determine what happens before and after a notification fires.
I'm not sure if long running sockets are allowed on mobile phones these days. They were pretty normal for a couple of years, but phone battery life has increased substantially since Apple and Google started streamlining the notification system and basically sleeping background services that would otherwise poll for notifications through sockets. You'll still get your notifications, but (hours) late.
You may instead want to pcap a mobile client to see where the notification is coming from, I bet the server still phones home somehow. It's possible Apple exempted themselves from their notification policies and integrated their Exchange push services into the OS power manager like they did their APN service, but I'm not sure if I'd expect them to put in that much work for what's essentially a competitor.
As a side note, because it's not made clear during setup: the Outlook app for mobile phones will keep a copy of your email on Microsoft's servers (and a partial copy on the device); if you use that app, Microsoft will poll for updates the normal way and likely still push the notifications through Apple's servers, despite you not having set up any direct connection from your server to the cloud. This isn't the case for the built-in email clients, of course, but the Outlook app seems to be quite popular for Exchange accounts.
Good. It should have been that way since the beginning. It's infuriating that companies feel they can do this sort of thing without any form of consent.
In my opinion, it shouldn't be up to the whims of companies to decide who they share your private data with if they don't have your explicit permission, or a valid warrant, subpoena or court order, to do so.
You're asking for legislation, but what we have seen coming out of the government are laws looking to make it easier to get access to data. Unfortunately, companies are all we have right now.
Doesn’t the third party doctrine and/or ECPA only require a subpoena, especially for old, partially delivered notifications? It seems like Apple and Google are daring law enforcement to sue so that they can establish a stricter precedent.
These terms should be null and void. You cannot give informed consent when you have no clue what could happen in the future. Consent should be given on a case-by-case basis - and not hidden in 100-page EULAs that you have to agree to en bloc in order to do anything. There cannot be informed consent under coercion and deception either. Personal data is as private as medical records.
> You cannot give informed consent as you have no clue what could happen in the future
That view would completely void any possibility of giving consent to anything, ever.
The same logic could be used after the fact to sue your doctors for example. "Sure I signed something, but that couldn't have been informed consent when I didn't yet know that the operation was going to go poorly."
This is why the GDPR demands explicit consent in many cases (even though many companies don't follow the law, unfortunately). The EULA is not a defence for a lot of data sharing anymore.
In my opinion, the USA deserves a similar law to protect its citizens from this type of crap. Unfortunately, privacy protection doesn't seem to be a very popular topic in American politics.
In this case, explicit would mean asking users whether a specific named datum can be shared with a specific named party each time there is a data request, just like you're explicitly asked to send each specific email you write before it's sent.
It is crazy how much care goes into wiretapping laws for domestic phone calls, while there's almost nothing for digital communication if the company volunteers it.
Yeah, the third party doctrine is complete crap in the modern era. At the very least, what constitutes voluntarily turning over information to a third party needs to be far more narrowly defined, but I think the entire premise is a bit shaky.
It actually seems understandable to me. Many politicians in the US are former lawyers, judges, etc. They could easily have learned lessons about how much privacy protections can get in the way of "real" investigations. They're also part of a government that has been handed more and more power for the better part of 60 years; privacy laws are a problem if your honest view is that the government is there to protect us by whatever means necessary.
GDPR won't protect you from law enforcement, let alone intelligence agencies. The legal mechanisms these organizations use have broad carve-outs from the GDPR.
Before the GDPR, there already were other protections against domestic three-letter-agency surveillance. As Snowden showed us, these were easily circumvented by getting the UK to spy on US citizens instead and sending over the data (and vice versa).
Although GDPR doesn't restrict the activities of law enforcement, it does restrict what other companies have and therefore what LE can request of them. The biggest issue is not that LE can investigate specific things, it's that they have easy access to such a massive self-perpetuating dragnet of everything.
The GDPR may not protect you from law enforcement, but it's not like you're free to send over whatever you wish to the police without following the proper process, either.
If you have information on someone the police ask for, there are strict processes that need to be followed. Sharing information with the police at a mere request, like what happened in this case, may even oblige you to inform the people whose information you have provided. Unless law enforcement can demand confidentiality, they're a normal party in the eyes of the law.
For example, when the police ask security camera owners for footage in a wide area around where a crime took place, sharing that footage may be illegal without blurring or cropping if other innocent people not matching the suspect's description are in clear view, or you may need to inform those people that you have provided the police with footage they're visible in. This very much depends on where the camera is located and what it's pointed at, and I doubt anyone has ever filed a GDPR complaint with their DPA about it, but without following the proper legal process, the police are as valid a party to share information with as any other person or organisation.
In this case, where the police asked Apple but did not fully utilise their investigatory powers (because Apple could've demanded a warrant), a GDPR-like law would've made it harder for governments to spy on people. It's not like warrants are hard to come by for spy agencies, but such warrants would at least produce a paper trail of some kind in the case of a witch hunt.
It's incredibly confusing to me that so many American tech workers were ranting on HN how the GDPR is the biggest mistake ever and will sink the EU tech scene, only to discover that GDPR is a citizen's tool against total government surveillance.
Unfortunately GDPR carves out exemptions for government and law enforcement purposes. They would never willingly impose additional restrictions onto themselves.
Hard to believe Apple’s posturing about privacy when it takes this showing up in the media for them to harden their policies. Seems like they only care when people know about it…
Given that they are building out their own ad network, and the huge premium you get for ad attribution, which they can do handily with the level of insight they have, I'm not so surprised. As Cory Doctorow says, what's more lucrative than selling privacy? Selling privacy and selling tracked ads at the same time. It's a similar thing with Apple and sustainability. They have shown they'd rather shred old but still functional iPhones than send them to a waste heap, from which they could be diverted for reuse or mined for parts. They greatly limit repairability with VIN pairing. Yet they spent a substantial amount of time in their latest iPhone presentation selling the notion that Apple is sustainable, freeing customers of the moral burden of admitting that releasing a new phone every year, and using finite resources to build it, is fundamentally incompatible with sustainability.
Nah. I was with Apple for the better part of a decade; Apple care[sd] about user privacy only insofar as it gave them something else to bash Google about[1]
[1] Getting into a knock-down-drag-out with the new honcho overseeing CoreOS about us "doubling down on user privacy" was one of the last straws for me. Everybody in the audience was source-disclosed (and some of us had even read the relevant parts :), and the guy had the audacity to claim a certain thing was impossible when we knew that particular thing was not only possible but was being uploaded to the mothership ...
> Apple care[sd] about user privacy only insofar as it gave them something else to bash Google about
Apple have done a lot of work in relation to privacy, and I am struggling to think of times they have bashed Google in relation to that work. It’s rare for Apple to refer to competitors at all.
For instance, here’s Apple’s documentation on tracking prevention in WebKit:
This is clearly the result of privacy work spanning many years. I can’t think of a time Apple have bashed Google in relation to any of this functionality.
This goes for pretty much everything I can think of where Apple is clearly working on privacy-improving measures. If the only reason Apple works on privacy is to bash Google, where is all the Google bashing? Because I can see a lot of privacy work, but I can’t see a lot of Google bashing.
I should have made the scope of my comment more clear: the Google-bashing was internal.
I honestly could not care less what the marketing spin was; internally, we were being told things that we knew to be false.
I'm speaking specifically about the "absolute technical impossibility of providing certain information residing in the iDevice to anyone, even when the request was accompanied by a search warrant", which was the mantra circa 2014. The market differentiator was "we care about user privacy, in stark contrast with Google, who monetizes all data on their devices".
That was horseshit, as a specific security team regularly handled exactly those requests ... and heaven help you if you pointed that out in any discussion with management.
And is this the only thing, or is there still a giant list of things that will have to be made public before Apple realizes they are incompatible with its PR about caring about customer privacy?
Maybe it's time Apple customers demand that Apple disclose what other data is released to authorities without a warrant, and why Apple is not attempting to fight back against that.
I think part of the pushback against Apple is that they knew this was possible and believed they could only talk about it once it was publicly known.
Apple has plenty of lawyers. If they honestly didn't agree with this practice they could have politely told the feds to f$#& off. The fact that it took public reporting to make Apple feel safe discussing this means that (a) they did know, (b) they didn't stop it, and (c) this could pretty easily be little more than PR cleanup.
> putting the iPhone maker's policy in line with rival Google
So while Apple touted privacy as a core feature, it opened a firehose of private data – something the other part of the phone duopoly, Google, hasn't done.
Lovely, now that there was a spotlight on surveillance through push messages, Apple suddenly decided to close that loophole. (It is still better than what Google did: nothing.)
But why was it open in the first place? Why were the notifications even stored? Why weren't they E2E encrypted?
From Wired's reporting, Google already required judicial approval, and Apple gave it without them. "Google confirmed to WIRED that it receives requests for push notification records, but the company says it already includes these types of requests in its transparency reports. The company says requests from US-based law enforcement for push notification records require court orders with judicial approval." [1]
> putting the iPhone maker's policy in line with rival Google
This quote from the linked article further strengthens that point: Apple had to catch up, not Google.