Just to make it clearer, the removal of Facebook from the account settings was not about Facebook alone (though it may have been the trigger for the decision). Integrations with Twitter, Vimeo, etc., were removed at the same time.
Facebook’s enterprise certificate was revoked because it violated Apple’s policy: Facebook used it to distribute applications to users who are not “internal”, i.e. not employees or contractors. That the app in question was also sleazy (as Facebook usually seems to be) was a coincidence in this case. But it’s also true that that app wouldn’t have passed the App Store review process had it been submitted there.
That one has more to do with Apple releasing frameworks that allowed any of the internet services to tie into iOS themselves.
For instance, share extensions:
>Share extensions give users a convenient way to share content with other entities, such as social sharing websites or upload services.
Apple no longer needed to pick winners and add integration for them into the OS by hand.
>Facebook’s enterprise certificate was revoked because it violated Apple’s policy by using it to distribute applications to users who’re not “internal” and not employees/contractors. That the app in question was also sleazy (as Facebook usually seems to be) was a coincidence in this case. But it’s also true that that app wouldn’t have passed the App Store review process had it been submitted there.
Facebook's spyware-in-a-VPN's-clothing had already been kicked off the App Store last year.
>Onavo, which Facebook bought back in 2013, does two things. As far as regular consumers are concerned, Onavo comports itself like a VPN, offering to “keep you and your data safe” and “blocking potentially harmful websites and securing your personal information.”
>But Onavo’s real utility is pumping a ton of app usage data to its parent company.
The rules were written to block sleazy apps. That the rules were the instrument by which the certificates were revoked doesn’t decouple the app’s sleaziness from its removal. (Similarly, saying “he went to jail for stealing” is accurate, despite the reason for the jailing being the statute banning stealing.)
No, they weren't; they were written to block distribution of applications without formal App Store review to the general public.
Only after the media takes notice and it makes Apple look bad. In both this case and the "FacePalm" FaceTime spying incident, Apple knew for months. You shouldn't get brownie points for getting coerced into good behavior.
That's a tall order, but it seems like the thing that could be mostly crowd-sourced to good effect (with a few moderators to ensure accuracy).
Other than that, I hope you're not using cracked versions of any Adobe software or commercial filters, or key generators for either on systems that have any personal information...
Although it does seem a little odd that YouTube has full marks when there's quite a lot of info about their broken takedown system and appeals process...
Most companies seem to violate their own privacy policies, and it's impossible for us to know that they're doing it without either whistleblowers or regulators.
It's not like any of these indicators can be taken entirely at face value anyway, since they're all indicators of past performance, and a change in policy at a company could happen any time. Something is better than nothing, though.
There is still a lot to be done, but it could be much worse. Some bigger culprits, by virtue of being bigger, still get away with their bullshit.
For example, Google apps use/used an embedded web view for sign-in, which also signs you into their web search in Safari.
They share the sign-in in a way that even newly-installed apps can use; e.g. sign into YouTube, install Google Maps, and you will find yourself already signed into Google Maps when you open the app (in my case, for the first time on this device just now.)
This may be convenient, but feels creepy and intrusive as hell.
Facebook, Instagram etc. also continue to be major offenders for things like iPad UX guidelines.
It's only shared between apps that are signed with the same certificate.
Apple protects you from asshole developers, but it also restricts you from just installing what you want in the first place, and it takes a 30% cut of all revenue while doing so. I hardly think they do it out of the goodness of their hearts.
How is this not intrusive?
"Do-Not-Track Signals and Similar Mechanisms. Some web browsers may transmit "do-not-track" signals to websites with which the browser communicates. Because of differences in how web browsers incorporate and activate this feature, it is not always clear whether users intend for these signals to be transmitted, or whether they even are aware of them. Participants in the leading Internet standards-setting organization that is addressing this issue are in the process of determining what, if anything, websites should do when they receive such signals. We currently do not take action in response to these signals. If and when a final standard is established and accepted, we will reassess how to respond to these signals."
"Some letterboxes have 'no junk mail' signs attached, but it's often unclear whether these were even placed by the current occupant. The current resident may be completely unaware that they're missing out on our great deals. Therefore we have chosen to deliver to all addresses regardless of signage."
What an incredible way of thinking. You could rationalize anything this way.
A DNT-style browser feature with legal weight behind it would have been miles better. Your browser could present a consistent interface for every website that requests consent, and allow you to set global defaults, etc.
Maybe it's a harder issue to solve with a UI (third-party cookies, subdomain cookies, etc.). Maybe they just haven't gotten around to it. (I used to set the global "site I'm visiting" cookie pref, but since they added the "intelligent" cross-site tracking protection that option seems to be gone.)
What are you referring to? The play/pause button has never been an "Apple Music" button, and in fact the recent behavioral change is to make it even more aggressively go to the current app (e.g. if you start playing a video in YouTube, the play/pause button controls that video instead of iTunes).
I have had it default to Apple Music when doing the same thing as above but with the YouTube app. I think it has something to do with YT not allowing background audio playback unless you sub to Premium. Which reminds me how much I miss Jasmine and ProTube.
All of this is speculation.
I’m not sure what you are referring to, since Safari on desktop still has a DNT option in settings, and while mobile Safari doesn’t have a setting specifically labeled DNT, it has a setting that appears to be the same thing with a friendlier name.
Reportedly, DuckDuckGo is now serving over 1 billion searches per month. If they're earning, say, $0.01 per search from ads, well... you do the math. Not bad for a small startup with 55 employees.
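Back-of-the-envelope, treating the $0.01/search figure as a pure assumption:

```python
# All inputs are assumptions or reported figures from the comment above.
searches_per_month = 1_000_000_000   # reported DDG searches per month
cents_per_search = 1                 # assumed ad revenue: $0.01 per search
employees = 55

monthly_revenue_usd = searches_per_month * cents_per_search // 100
annual_revenue_usd = monthly_revenue_usd * 12
print(monthly_revenue_usd)               # on the order of $10M/month
print(annual_revenue_usd // employees)   # annual revenue per employee
```

On those (made-up) numbers, that's around $2M of annual revenue per employee, which is why "not bad for a 55-person startup" is an understatement.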
Say tracking is perceived to add some value over content-based ad selection (it doesn't matter whether it actually does). Say it also provides a bunch of important-looking metrics for middle managers and execs to play with (it certainly does that), whether or not any of it means anything. Say the price premium charged by Google and Facebook and friends is under what companies value this stuff at, in dollar terms. There's a small premium to it, say 10%, some of which goes to the sites displaying the ads.
Boom, practically all advertising is now of the spyware variety. That's all it takes. Competing without spying (and without a huge trove of data or a way to get it fast) is now nigh-impossible.
Now outlaw all the tracking crap. "Oh no, the Web is doomed, no one will be able to pay for anything!" you will hear/read from a disturbingly large number of people. They are dumb. It'll be fine. That 10% premium goes away, yeah. Content-based advertising is back. Having a huge trove of user data is no longer a moat, or indeed much advantage at all, in the ad space, so Google'd need to scale back and/or learn how to launch a product that can stand on its own without the backing of their spyware empire. IOW yeah, they'll probably actually be in serious trouble, but there's no good reason for them to be. Facebook might die or have to pivot hard. But that does not mean the end of the web, or anything like it! It just means the ad money goes somewhere else. Probably to a bunch of much smaller (though still maybe quite large) companies.
Anyway TL;DR it's difficult to succeed with non-spying ads in a spying-is-allowed world, but it's 100% for sure a viable business model that can support more or less the same junk we have now if you outlaw that sort of bad behavior. To the extent DDG's succeeding I assume it's their dark patterns and maybe (I am just guessing on this second one) they also do that extorting-companies-into-buying-ads-for-themselves thing like Google does.
I wish to be proven wrong, but e.g. Facebook has demonstrated that users don't really care.
Not sure that anything can change that, although I would love to see it happen.
Facebook's user demographic _is_ shifting, though, and the conversation around the company and product in the media has changed significantly. That might not immediately show large effects, but network effects driving sign-up also work in reverse and could trigger an avalanche of users leaving in the future.
Google ads still work (command a good price) without the creepy stuff. "In Lisbon, searching for dentists" is enough for good ad targeting. Facebook ads are near worthless without the creepy targeting and such.
Apple is in quite a unique position to distinguish themselves, if they choose.
I think Facebook will kill itself separate to everything else going on in the industry with regards to privacy. I think Facebook's value proposition is wearing thin.
Point being, there is some movement against this.
I’ve long said that platforms need to enforce limits, hard limits, on all I/O. And in order for something to get an exception, the system should have a big, obvious, inconvenient work-around that places plenty of blame directly on the developer (e.g. “The application Foobar is using an extremely abnormal amount of your battery power, and has used your Internet service more extensively over the past hour than any other app on your device. We recommend uninstalling this program completely, or select one of the throttling choices below:”.)
And frankly, there is no shortage of reasons why we should do this: to prevent abuses like this latest one, to prevent draining batteries, to avoid expensive data plan overages, etc. (and heck, to save the planet, because stupid simple things should not require a pile of natural resources to download to your device).
That sounds good, but how well does it work? If you find an app that solves a use case for you, what factors do you use to determine whether you trust its developer? How can the average iPhone (/smartphone) user apply this?
Once phones start throwing this kind of crap in people's faces there will be (understandable) outrage and bad sales figures. Technology is supposed to make your life easier, not harder.
Cat pictures don't require abnormal amounts of battery and bandwidth. All streaming video could use a dedicated API method, excluded from the cap.
Browser content would be more difficult to regulate; on the other hand, web apps don't have as many permissions as native ones.
The problem is a culture of 'it's ok, everyone else does it, the world runs on it, maybe things need to change, but we need to find an alternative equivalent first.'
It was always wrong. It was never justifiable. Permission was never requested, only assumed.
If you are recording my screen without making it abundantly clear up-front that you are doing so and why, and without allowing me to decline without providing any additional identifying information, you are automatically unworthy of my trust, not only now but forever. The people who thought this was acceptable are unfit to make such decisions, now and forever.
In short: fuck you and your advertising.
Why is this sort of bullshit even allowed to be technologically possible? Because someone profits. Screw their thumbs until it isn't. You want to make money from the people using your software? You get it from them, specifically from them, by providing a worthwhile product. You should be punished for selling them to someone else.
Unless you can teach them the value of that which you ask in exchange for your product, you are committing fraud.
You don't get to benefit from the information you siphon from people who don't understand the value of it. You know exactly what you've done and always did; no second chance is warranted.
> In short: fuck you and your advertising.
Nah, I don't want to pay for every single site I visit. Content is not just given out for free. Are there limits to what counts as too much? Sure, and that's the type of discussion we should be having.
> You should be punished for selling them to someone else.
That's not how online advertising works. Rather, it would be the equivalent of someone on your block knowing that your house is a 3-bedroom 1-bath, but knowing nothing about the people who live inside.
> Q. Should we post signs that there are cameras in operation?
> A. Yes. Most privacy laws require the organization conducting video surveillance to post a clear and understandable notice about the use of cameras on its premises to individuals whose images might be captured by them, before these individuals enter the premises. This gives people the option of not entering the premises if they object to the surveillance. Signs should include a contact in case individuals have questions or if they want access to images related to them.
This is a stronger requirement than the "we may use your personal information to improve our services" language in the EULAs for almost any of these apps.
So, ironically, I think you just helped prove the point you were replying to. Hoovering up ALL the usage information without specific notice or consent is not ok.
I do - if the content is good, the price reasonable, and the transaction frictionless.
And if that's what it takes to get rid of intrusive advertising and user tracking, sign me up.
By some estimates your data's worth $240, assuming you're an exactly average user: https://medium.com/wibson/how-much-is-your-data-worth-at-lea...
Since not every user will pay or can afford to pay, and presumably heavier users are worth more, you might have to pay thousands, or tens of thousands (in case you bought a house or something based on targeted ads).
You can sort of simulate this today: imagine you were to outbid all other ads shown to you for every ad slot on every site; then you could replace them with an empty picture. That might get expensive.
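Roughly costing that thought experiment out, with every number a made-up assumption:

```python
# All figures are guesses for illustration, not measured values.
impressions_per_day = 200    # assumed ad slots an average user sees daily
winning_cpm_usd = 5.00       # assumed winning bid, USD per 1000 impressions
outbid_factor = 1.10         # bid 10% above the winner to take the slot

daily_cost = impressions_per_day * (winning_cpm_usd / 1000) * outbid_factor
yearly_cost = daily_cost * 365
print(round(yearly_cost, 2))  # ~$400/year to blank out your own ads
```

Interestingly, under these assumptions the yearly cost lands in the same ballpark as the ~$240/user data-value estimate linked above, which suggests the bid-yourself-out scheme isn't absurd, just tedious.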
Thanks for putting a price point on the amount that people^H^H^H^H^H^H^H advertisers will pay to ruin my freedom. I value it much more than they do.
I could outbid easily, as I'd never actually pay for a lead.
I don't think that content should be produced for free, but I argue that advertising-by-invasion-by-default is only acceptable now because not enough people called it out early enough, because they didn't realise the cost.
'It's always worked this way' is not a valid defence. Building a business on something should not later affect the legality of that thing.
In other words, start looking for another way for things to work instead of trying to claim that the status quo is the only way things can work.
Yes, I'm aware that this probably will feel like a step or two backward. Leaving a local maximum always feels like that.
 I forgot to address the security camera issue. That's an awkward conflation of 'physically walking into a store where you can physically pick up and leave with an item which cost physical resources to produce' with 'asking if the person standing outside can let you look at a menu'.
Every big app collects analytics without such disclosure (and definitely no visual indicator). I honestly can't think of a counterexample.
If Apple's declaration is taken literally, this will have massive fallout on the analytics ecosystem.
Good. I don't know when it became OK to log just about everything a user does in your app (or operating system, in the case of Windows 10) "just in case". It's creepy, often unnecessary, and prioritizes the convenience of the developer over the user's right to privacy.
There's your reason.
Well, obviously Apple cannot ban what is outside of its walled garden.
And it seems like the frameworks that bring web applications closer to native-app functionality are the ones they lag on.
It can often be difficult to determine root cause of an issue when you are just given a stack trace. I suspect we will soon see two patterns arise: (1) Popups when the app launches to get consent and (2) Screen recording that still happens but only phones home if an exception occurs and where they get consent at that point.
And what does this mean for core metrics tools like Google Analytics?
That being said I think this is a good compromise and I hope Apple applies this to all vendors.
I think the biggest winner here, if I read this correctly, is AWS and other cloud providers. The most straightforward way for someone like Mouseflow to comply is to separate each customer into its own instance, in a way that gives Mouseflow no access to user data.
In this context, Google is the third party. Plenty of apps use the Google Analytics SDK, or the newer GA Firebase SDK for their analytics.
Of course they send your info to a “third-party,” that party being the advertiser(s).
The advertiser creates an advertisement and passes it to Google with a selector, "We want to show this advertisement to men over the age of 35 in Milwaukee", and a price per click.
When someone who fits the selector arrives, the advertisement enters into Google's auction.
If it wins the auction, it's rendered to the customer.
Google doesn't say "Here's the list of our customers, and who they are, let me know who you want to send ads to".
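That selector-plus-auction flow might look something like this sketch (all names, bids, and fields are invented for illustration; real ad auctions are far more involved):

```python
# Hypothetical sketch: advertisers submit a creative, a selector, and a bid;
# the platform matches the user to selectors and picks the highest bid.
ads = [
    {"creative": "dentist_ad", "selector": {"city": "Milwaukee", "min_age": 35, "gender": "M"}, "bid": 1.20},
    {"creative": "sneaker_ad", "selector": {"city": "Milwaukee", "min_age": 18, "gender": "M"}, "bid": 0.90},
    {"creative": "cruise_ad",  "selector": {"city": "Chicago",   "min_age": 50, "gender": "F"}, "bid": 2.50},
]

def matches(selector, user):
    return (selector["city"] == user["city"]
            and user["age"] >= selector["min_age"]
            and selector["gender"] == user["gender"])

def run_auction(user, ads):
    eligible = [a for a in ads if matches(a["selector"], user)]
    if not eligible:
        return None
    winner = max(eligible, key=lambda a: a["bid"])
    # Only the creative leaves the platform; the user record never does.
    return winner["creative"]

user = {"city": "Milwaukee", "age": 41, "gender": "M"}  # known only to the platform
print(run_auction(user, ads))  # dentist_ad
```

The point of the sketch: the advertiser's side of the interface is the `ads` list, and nothing in the return path hands the `user` dict back to them.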
Google is a one-way mirror. They will suck in as much data as you want to give them about people, but it's virtually impossible to pry any info out of Google at an individual and identifiable level. Only aggregated performance information is exposed to advertisers, along with the ability to mix and match targeting criteria based on the dimensions they expose.
Even with their enterprise marketing products like DoubleClick you can't pry out individual-level data. DoubleClick customers used to be able to export an anonymous ID, so they could use independent third-parties to consolidate conversion and impression data and measure attribution across complex marketing campaigns. But even that isn't possible anymore, due to GDPR concerns.
The closest they now get to "sharing" user data is Ads Data Hub. And by sharing, I mean they expect you to share all of your data from outside of Google, which Google will then connect to their data and allow you the privilege of running aggregated queries. But they actually keep their side of the data firewalled off, and it's not human readable or accessible at the row level, only in rollup queries.
In the ad world, I can assure you they are far more protective of user data than almost anyone else. The size, effectiveness, and dominance of their marketing channels afford them the ability to take that position without it materially impacting advertiser spending. Very few ad platforms take such active measures to insulate user data from advertisers. And for many that do invest resources in such endeavors, it doesn't mean they don't provide user data to advertisers. It just means they don't provide it for free anymore.
Oh, don't worry, only the most data-hungry machine ever built has your data.
It's only mildly reassuring to me relative to how loose I know most providers are with leaking user data. And because, having interacted with it from the advertiser side, I can tell that they not only recognize the long term value in protecting user data, but that they also invest the resources to design their systems and processes with that fundamental premise in mind.
Conversely, I also recognize just how much they do know about me, and just how privileged of a position they're able to take due to their market-dictating scale. The thought of their growth ever stalling terrifies me, since it can give them cause to re-evaluate that fundamental premise if they ever need to shore up their numbers.
My European dev team is now up at almost 1am on Saturday putting in a fix for this.
Unless Apple themselves is going to provide tools to help make better, crash-free software, these are necessary third party tools.
Apple explicitly does not do this, because they know that they'll draw a line and two months later someone will try to get through on a technicality.
They don’t even need to MITM the traffic. Just the fact that an app makes network requests when using a supposedly offline feature should immediately get them rejected.
And iOS should introduce a visible network activity indicator that can’t be manipulated by applications, like they do for location tracking.
I think this is already a thing in China.
> Further, the user should be able to control what domains/hosts the app has access to. The user should be able to have feedback indicating that the application is communicating over the network at what times and with how much data.
This is difficult to do, since it's easy to swamp the user in prompts for every little thing.
I don't see this as a bad thing. It will let me quickly see what apps are built not following best practices.
I would even surmise that a user's patience threshold for that kind of annoyance these days is pretty low.
In principle it's nice to be able to manually allow / reject individual requests, but in practice you can't get anything done if you do. Popups keep popping up all the time, and it's rarely clear what the request is good for. Then you get random failures, because you accidentally blocked an important request, or you just allow anything anyway, because there really is no way to know if that request is good or bad.
And that's from the perspective of a software developer who understands what protocols and ports and addresses are.
A firewall like that would be absolutely useless for non-technical users.
Basically I want Little Snitch built into iOS as a core feature. Hiding it behind “Advanced..” is fine.
A typical setup would work like this: when you launch an "instrumented" app, it generates a UUID. Then whenever a user interacts with the app, it would send messages like "UUID 1234... launched app version 3.14. UUID 1234 clicked the 'home' button. UUID 1234 viewed their profile page. UUID 1234 searched for a video. UUID 1234 played a video. UUID 1234 experienced an OutOfMemory exception in module foo, line 942." These were aggregated together so that you could run reports like "among people who experienced the OutOfMemory exception in module foo, line 942, how many viewed their profile page first?" That allowed developers to very quickly focus on the exact steps required to reproduce a specific problem.
So sure, apps were gathering a lot of information about what you were doing, but it really was genuinely for your benefit. There was no way for customers to run queries like "what video was heywire watching?" or the like. Everything was 100% focused on being able to quickly and accurately identify the cause of crashes. Now, that was just one company and it was several years ago. Maybe every other company was creepy? Maybe Apteligent is, too, now? I don't know. I don't have any insider knowledge into the current state of things. But at the time I personally witnessed it, I would have felt very comfortable at an EFF meeting explaining how every byte of metrics information was being used.
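A toy sketch of that pipeline, per-UUID events in, aggregate-only reports out (event names and data invented):

```python
from collections import Counter

# Raw per-user event stream: (uuid, event) pairs, in order per user.
events = [
    ("u1", "launch"), ("u1", "view_profile"), ("u1", "oom_crash"),
    ("u2", "launch"), ("u2", "play_video"),   ("u2", "oom_crash"),
    ("u3", "launch"), ("u3", "view_profile"),
]

# Aggregate query: "among users who hit the OOM crash, what did
# they do immediately before?" Only counts leave this step.
prior = Counter()
last_event = {}
for uuid, ev in events:
    if ev == "oom_crash" and uuid in last_event:
        prior[last_event[uuid]] += 1
    last_event[uuid] = ev

print(prior)  # counts of the event that immediately preceded each crash
```

Whether that's benign depends entirely on whether the raw `events` table is ever queryable below the aggregate, which is exactly the concern raised in the replies.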
No, you've only talked about what currently happens when everything is working properly. What happens if the company ends up in financial trouble; do they have a Ulysses contract on record that binds their future ability to monetize all of this data? Without legal enforcement, we just have to hope this company will somehow resist the temptation that most other companies are not able to resist.
> what specifically about that data flow bothers you?
> it generates a UUID
That's obviously personally identifying, and it's a header on all of the analytics events you describe. Just because it's synthetic doesn't make it anonymous. Once it's mapped back to other information - which is trivial if you correlate IPs or event timestamps - this type of analytics is only an INNER JOIN away from being merged into someone's pattern-of-life.
The problem isn't what happens when everything works as intended. You need to also prepare for when (not if) your data is merged into other databases, and for what others might do with the data in the future.
Take a set of "UUID 1234... launched app" events for a common app that is regularly launched, e.g. when someone wakes up (or whenever). Correlate those times with other events that also happen at similar times (other app launches, webpages visited, email checked). What are the odds that two unrelated people just happened to open different apps at [..., 2019-02-04T10:11:22, 2019-02-06T10:17:44, 2019-02-07T10:14:52, ...] (+/- maybe 30 seconds)? A unique identifier and a few high-resolution (seconds) timestamps can easily identify someone uniquely once you have enough data points.
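A toy version of that correlation attack, with all timestamps invented:

```python
from datetime import datetime

# App-launch timestamps for a pseudonymous UUID in dataset A...
dataset_a = {"uuid-1234": ["2019-02-04T10:11:22", "2019-02-06T10:17:44", "2019-02-07T10:14:52"]}
# ...and named web-visit timestamps from a second, unrelated dataset B.
dataset_b = {
    "alice@example.com": ["2019-02-04T10:11:30", "2019-02-06T10:17:50", "2019-02-07T10:15:01"],
    "bob@example.com":   ["2019-02-04T08:01:00", "2019-02-06T21:30:00", "2019-02-07T13:00:00"],
}

def parse(ts):
    return datetime.fromisoformat(ts)

def correlated(times_a, times_b, window_s=30):
    # Fraction of A's events that land within `window_s` seconds of some B event.
    hits = sum(1 for a in times_a
               if any(abs((parse(a) - parse(b)).total_seconds()) <= window_s
                      for b in times_b))
    return hits / len(times_a)

for uuid, times in dataset_a.items():
    for person, web_times in dataset_b.items():
        if correlated(times, web_times) > 0.9:
            print(uuid, "is probably", person)  # uuid-1234 is probably alice@example.com
```

Three matching timestamps within a 30-second window is already very unlikely by chance; with weeks of events the pseudonym is a name in all but label.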
I can say that at the time I was there, it was not possible for a developer logged into our system to suss out any fine-grained information about a particular user. They just got aggregated data like "92% of people who experienced this symptom did this other thing right before it happened".
For my benefit without giving me a clear understanding of what was being collected or the option to opt out. Gee, you really shouldn't have.
Yes I know that you still have baseband + binary drivers, but at least then all of the apps are open source, and so is the OS.
Out of curiosity, what iOS features do you use? I ask because one thing you could do is keep that as your phone and also have an iPad, so you can segregate your very private stuff and still use the iPad for iOS features.
One thought for you: I just had my Nexus 5X die, but I got a Sony Xperia XA2 for $150 and flashed it with LineageOS. See if you can use something like that and wean yourself off?
Alternatively, set up Pi-hole for testing.
Microsoft Edge (on Android) was the worst offender, contacting vortex.data.microsoft.com almost constantly, on almost every UI action I made. Also notable was the number of apps contacting analytics servers in the background, when the app was (from an end-user perspective) not even running.
For example Hotjar: I did a review of the product a year ago and I could not believe the creepy surveillance level of this tool.
For me, manually disabling JS or installing content blockers will never have mainstream appeal for regular users who just want to browse the web (and don't know that they may be being recorded).
This should be blocked by default on every browser.
In 2012, I worked at a University with analytic tools that showed a color map indicating the average scroll speed for pages on our website and heat maps indicating how long different users hovered over a section.
This stuff has been around for a long long time.
Optimizely has all of this stuff (sampled), but the fact they can 'sample' something means they have the full data.
At least with native desktop apps I can put that garbage into a VM or container. Load whatever you want. I can then apply my own firewall/containerization/VM rules.
The article even seems to mention it "Even though sensitive data is supposed to be masked, some data — like passport numbers and credit card numbers — was leaking."
When I read the report I was hoping it would have serious consequences for Glassbox and similar services. Good that Apple is taking action so fast. I hope Google kicks in soon too.
Some time ago I built something I call “Network Blackhole”.
The project intercepts HTTP traffic from applications that I installed on either my MacBook or iPhone. Excellent examples of this are Crashlytics, Segment, and Sentry, which are among a list of popular web services that many developers use to report bugs and crashes in their software, and the famous Google Analytics, which I hope I don’t need to explain.
With the help of Little Snitch, a popular network monitor for macOS, I detect when an app tries to connect to one of these services, or similar. Then I execute a tool written in Go like so: “blackhole example.com”, which does the following:
1. Inserts the domain into /etc/hosts
2. Creates an HTTP web server (in Go)
3. Adds a match-all endpoint to the server
4. Creates an SSL certificate with mkcert
5. Creates an Nginx virtual host for the server
In the end, and after 1-2 minutes, I have all the traffic to that domain gracefully redirected to a black hole, reducing the amount of data that I leak to 3rd-party websites.
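For illustration, the match-all sink (steps 2-3) can be sketched in a few lines of Python; the real tool is in Go, and the /etc/hosts, mkcert, and Nginx steps are omitted here:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlackHoleHandler(BaseHTTPRequestHandler):
    """Swallow any request on any path with an empty 200, so the
    instrumented app believes its analytics beacon was delivered."""

    def _swallow(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        self._swallow()

    do_POST = do_PUT = do_DELETE = do_HEAD = do_GET

    def log_message(self, fmt, *args):
        pass  # stay silent: this is a sink, not a logger

# Usage (after /etc/hosts points the analytics domain at 127.0.0.1,
# with Nginx terminating TLS using the mkcert certificate):
#   HTTPServer(("127.0.0.1", 8080), BlackHoleHandler).serve_forever()
```

Returning 200 rather than refusing the connection matters: some SDKs queue and retry failed uploads, so a polite empty success makes them drop the payload instead.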
However, don’t get me wrong, I understand the purpose of these services, I haven’t said they are evil or anything like that. I would probably use them myself if I had to, but I certainly would add an alert to ask for explicit consent from the user to send this information to a service that I won’t even have control over. If one of them leaks my customer’s data, I will be the only one facing the consequences.
I hope I don’t have to add another domain to my network black hole anymore.
Unfortunately, I cannot share the tool because it contains multiple zero-days for apps that make use of Paddle and Devmate to grant and validate software licenses. It also contains zero-days for apps developed in partnership with Panic and MacPaw. I’ve been in contact with some of the developers of these apps to patch their software, and until they all release security updates I cannot share the code with the world.
Also, IMO, if Apple was really serious about privacy they'd remove the permission for apps to use your camera and see all your photos, and instead require apps to use a system camera UI (so the system takes the picture, not the app). Similarly, if an app wants a photo from your library, it should be required to use a system photo selector so that the app can only see the photos you select for it.
As it is, every app that asks for camera permission can use the camera anytime it wants for any reason.
Similarly any app that asks for permission to see your photos can see all of your photos anytime it wants.
Neither of those is compatible with the idea that Apple is a privacy first company.
Note that if Apple enforced those you wouldn't need to grant most apps camera and photo library permissions since they wouldn't get access to the camera data unless you took a picture while in the app or select photos.
I get that it might suck in that gating the camera to system-only means experimental camera apps are out. Maybe they could find a way to add secure hooks, or maybe they need to add a new permission that hopefully security-conscious people would rarely grant: "Can this app access the camera anytime it wants, for any reason, including spying on you?"
Similarly, only using the system photo selector would mean apps can't make fancier photo selectors, but again, you could add a permission for that if it's really important. For example, the Dropbox/Google/Flickr/Facebook "upload all your photos" features would need a "Can this app spy on 100% of the photos on your device?"
AFAIK access to depth data currently requires no permission. Seems again that that's not a privacy-first policy.
Also, Apple is apparently yanking the ability for webpages to access orientation data. Why should apps be any different?
What you can do – at least partially with some apps – is: open Photos, select photo, tap the share button, select the app you want to send the photo to.
I use this workflow to send photos via WhatsApp, because I don't trust WhatsApp to NOT upload all kinds of meta data (or thumbnails, or photos themselves).
Unfortunately, this doesn't work with all apps.
Another approach could be to require a separate permission for “covert” (off-screen) camera usage — i.e. capturing images or video without a camera preview on the screen.
On a more serious note: it’s nice to see that Apple is finally cracking down on these shady analytics SDKs. I’m this close to forgiving the headphone jack thing...
Let’s ban the entire Google analytics framework while we’re at it, including in Safari. Then we’re getting somewhere.
It seems apple is only banning screen recording tools that send data to "third party" servers so hopefully there are open source self-hosted alternatives that we can use instead.
Is this possible?
Explicit consent can only be given if a person fully understands what they're agreeing to. The weight of evidence suggests that once personal identifying information is out in the wild it can't be retracted.
While the banks, police, and media continue to refer to such a concept as identity theft, I don't believe it's possible for anyone to give explicit consent to their data being leaked and used by people with ill-will.
Edit: leaked, or intentionally sold to bad actors. Or the parties collecting the data in the first place are bad actors themselves.
I thought the big thing about having to use the App Store was that Apple kept it locked down tight to prevent this from happening?
Key escrow for instance. Insights into phone use. Monetisation of information feeds forbidden to third parties.
I still prefer signal to iMessage.
Despite seeming scary, this is actually the most benign form of data collection. People have this naive notion that companies have this obsessive desire to track them as an individual. Working at tech companies, this could not be further from the truth. I do not give a shit about you as an individual, I care about you as a collection of attributes that I can correlate with the attributes of the rest of the user base. The only time I care about you as an individual is if you're reaching out to our customer service as an individual with a problem and I want to help diagnose it.
The thing is, screen recording data is remarkably useless for anything else, because it's too high-fidelity to be aggregated. If I want to serve you more personalized ads or manipulate you into purchasing something, there are other tools that are far more appropriate for the purpose.
The only reason Apple is doing this is for PR reasons, to help signal to everyone that they're a privacy conscious business. But they're doing this by leveraging people's misunderstanding of how data collection is done and banking on emotional fears rather than actual damage.
As appreciative as I am of Apple's privacy stance, it really worries me that they're the only large company that seems to care... and when they don't, who will?
All so you can "optimize your A/B testing" lol.
But don't worry, there won't be a Netflix documentary or Congressional subpoena till 2025, LOONNNNG after your startup's exit. Until then, Zuckerberg gets to be the face of this general disdain and luckily gets to honestly have no idea what is going on, because third parties integrated into his apps are the ones doing all the recording, user monitoring, and data sharing.
They’ve kicked out tracking already on Safari. This is the next step. It makes Google’s software look like spyware.