Hacker News
Apple tells app developers to disclose or remove screen recording code (techcrunch.com)
603 points by jbegley on Feb 7, 2019 | 278 comments

I think we are approaching a transitional period where various companies are starting to draw lines in the sand with regard to privacy and user-hostile design choices. It's still a little blurry, but as the lines become clearer users will start to shift. And either by choice or regulation the lines will be drawn. Apple's hand is being forced in some cases, but usually in a direction they are already heading. Google will find the balance much harder due to the nature of their revenue model.

Apple is following up on their own promise to be a privacy-first company. That's their game now and they are playing it respectably. First, they removed Facebook from Internet Accounts in settings, then they blocked Facebook's enterprise certificate and now this. Remember that Apple was one of the first to send randomized MAC addresses when scanning for Wi-Fi hotspots so that advertisers could not track a user in real space.

> First, they removed Facebook from Internet Accounts in settings, then they blocked Facebook's enterprise certificate and now this.

Just to make it clearer, the removal of Facebook in the accounts was not about Facebook alone (though it may have been the cause for the decision). Integrations with Twitter, Vimeo, etc., were also removed together.

Facebook’s enterprise certificate was revoked because it violated Apple’s policy by being used to distribute applications to users who are not “internal” and not employees/contractors. That the app in question was also sleazy (as Facebook usually seems to be) was a coincidence in this case. But it’s also true that that app wouldn’t have passed the App Store review process had it been submitted there.

>Just to make it clearer, the removal of Facebook in the accounts was not about Facebook alone (though it may have been the cause for the decision). Integrations with Twitter, Vimeo, etc., were also removed together.

That one has more to do with Apple releasing frameworks that allowed any of the internet services to tie into iOS themselves.

For instance, share extensions:

>Share extensions give users a convenient way to share content with other entities, such as social sharing websites or upload services.


Apple no longer needed to pick winners and add integration for them into the OS by hand.

>Facebook’s enterprise certificate was revoked because it violated Apple’s policy by using it to distribute applications to users who’re not “internal” and not employees/contractors. That the app in question was also sleazy (as Facebook usually seems to be) was a coincidence in this case. But it’s also true that that app wouldn’t have passed the App Store review process had it been submitted there.

Facebook's spyware in a VPN's clothing had already been kicked off of the App Store last year.

>Onavo, which Facebook bought back in 2013, does two things. As far as regular consumers are concerned, Onavo comports itself like a VPN, offering to “keep you and your data safe” and “blocking potentially harmful websites and securing your personal information.”

>But Onavo’s real utility is pumping a ton of app usage data to its parent company


> Facebook’s enterprise certificate was revoked because it violated Apple’s policy by using it to distribute applications to users who’re not “internal” and not employees/contractors. That the app in question was also sleazy (as Facebook usually seems to be) was a coincidence in this case.

The rules were written to block sleazy apps. That the rules were the instrument by which the certificates were revoked doesn’t decouple the app’s sleaziness from its removal. (Similarly, saying “he went to jail for stealing” is accurate, even though the formal reason for the jailing is the statute banning stealing.)

> The rules were written to block sleazy apps.

No, they weren't; they were written to block distribution of applications without formal App Store review to the general public.

Thank you. It's frustrating to see people completely throw facts out of the window as soon as it comes to hating on their favorite bugbears. Today Facebook and Google, tomorrow something else.

I can see the general strategy there, though: it's easier to be excessively stringent and then improve and relax through iteration and emergent discovery of use cases than the other way around. E.g., accessing the filesystem used to be downright impossible in the Mac App Store, preventing apps like Coda or DaisyDisk from even existing due to stringent sandboxing, but now it's perfectly possible.

>Apple is following up on their own promise to be a privacy-first company.

Only after the media takes notice and it makes Apple look bad. In both this case and the "FacePalm" FaceTime spying incident, Apple knew for months. You shouldn't get brownie points for getting coerced into good behavior.

Is there a source for Apple knowing about the FaceTime incident for months?

I wonder, is there a list, possibly broken down by segment, that shows the relatively privacy-friendly companies in that segment? Personally I would also like to see a check mark (or not) in a few categories for each, for things like "Public statements supporting privacy", "Software/service features support privacy", "Business model supports/is supported by privacy" and "Has resisted/advocated against privacy-encroaching laws or overly broad legal orders".

That's a tall order, but it seems like the thing that could be mostly crowd-sourced to good effect (with a few moderators to ensure accuracy).

Not exactly what you're looking for but EFF has some data:

https://www.eff.org/who-has-your-back-2018
https://www.eff.org/who-has-your-back-2017

Not sure I’d trust this list. All my email addresses submitted to Adobe (and previously Macromedia) have come back with scammy spam of epic proportions compared to any other company I’ve submitted emails to, yet this list has them at 5 stars. Could be they sold the addresses or got hacked, and maybe it no longer happens, but even new emails submitted as recently as 2 years ago suffered the same fate over time.

Not that I can vouch for the list, but it's not a question of whether Adobe got hacked; they are well known for being victims of some very large hack(s?).[1]

Other than that, I hope you're not using cracked versions of any Adobe software or commercial filters, or key generators for either on systems that have any personal information...

1: https://www.theverge.com/2013/11/7/5078560/over-150-million-...

All licensed. Even so, our Adobe account emails end up filled with crap.

That's pretty nice, thanks!

Although it does seem a little odd that YouTube has full marks when there's quite a lot of info about their broken takedown system and appeals process...

> that shows the relatively privacy friendly companies in that segment

Most companies seem to violate their own privacy policies, and it's impossible for us to know that they're doing it without either whistleblowers or regulators.

The policy is the simplest and lightest level of assurance I was expecting. It's basically PR. I'm more interested in cases where they've been tested and put their money where their mouth is, or even whether they are willing to note that they get legal requests, what is generally requested, how they comply, etc.

Most of these companies aren't ever "tested". It's pure luck that we ever find out they're violating our privacy. There's no transparency in a cloud app.

I don't mean tested as in some organization tests them, I mean is there information showing or alluding to a company fighting back for their users (or business model) against government overreach? If there is, that's useful information. It doesn't mean a company that hasn't been in that situation wouldn't act the same, but we can't assert much about that with any level of confidence.

It's not like any of these indicators can be taken entirely at face value anyway, since they're all indicators of past performance and a change in policy at a company could happen at any time. Something is better than nothing though.

https://www.privacytools.io/ comes close to what you'd like

As a user, I am always glad for Apple protecting me from asshole developers.

There is still a lot to be done, but it could be much worse. Some bigger culprits, due to being bigger, still get away with their bullshit.

For example, Google apps use (or used) an embedded web view for sign-in, which also signs you into their web search in Safari.

They share the sign-in in a way that even newly-installed apps can use; e.g. sign into YouTube, install Google Maps, and you will find yourself already signed into Google Maps when you open the app (in my case, for the first time on this device just now.)

This may be convenient, but feels creepy and intrusive as hell.

Facebook, Instagram etc. also continue to be major offenders for things like iPad UX guidelines.

This single sign-on is an intended feature of Keychain on iOS. Apple designed it that way.

It's only shared between apps signed by the same developer team (i.e., sharing a keychain access group).

Not a Google employee, but the example you cite is hardly the design of "asshole developers." Single sign-on is convenient, and I'm glad I can monitor any illicit usage/sign-ons for any of the sites you mentioned in a single place.

Apple protects you from asshole developers, but it also restricts you from just installing what you want in the first place and they take a 30% cut of all revenue while doing so. I hardly think they do it out of the charity of their hearts.

I have no need to tell YouTube my location, but if I signed into YouTube, then install Google Maps and agree to its location sharing prompt, bam, now Google can link my viewing history (and more) with my precise location, because Google Maps automatically signed into my account as soon as I launched the app.

How is this not intrusive?

They don't need to share log-in data to be able to do this. There are plenty of other ways they can identify you.

We track you for your own good/Apple are not an NGO anyway.

Actually, I'd say Apple is swinging toward user choice only where it suits them. The recent change to the default media player for play/pause, away from the last-used app and to Apple Music, is one example. Dropping Do Not Track from Safari is another.

Expedia's privacy policy [0] is a perfect example of why DNT, in its current form, does nothing more than misalign customer expectations around privacy.

"Do-Not-Track Signals and Similar Mechanisms. Some web browsers may transmit "do-not-track" signals to websites with which the browser communicates. Because of differences in how web browsers incorporate and activate this feature, it is not always clear whether users intend for these signals to be transmitted, or whether they even are aware of them. Participants in the leading Internet standards-setting organization that is addressing this issue are in the process of determining what, if anything, websites should do when they receive such signals. We currently do not take action in response to these signals. If and when a final standard is established and accepted, we will reassess how to respond to these signals."

[0] https://www.expedia.com/p/info-other/privacy-policy.htm

"Some cars are now coming straight from the factory with 'do not track' stickers already applied. As such, we cannot know for certain whether an individual driver actually wants to be tracked. What does Do Not Track really mean anyway? We have therefore chosen to attach tracking devices to all vehicles that visit our premises."

"Some letterboxes have 'no junk mail' signs attached, but it's often unclear whether these were even placed by the current occupant. The current resident may be completely unaware that they're missing out on our great deals. Therefore we have chosen to deliver to all addresses regardless of signage."

What an incredible way of thinking. You could rationalize anything this way.

Agree entirely. Take a clear statement. Say it isn't clear what someone may want from that statement, claiming it is confusing when it really is not. Then do what is clearly the opposite of the intention, blaming the supposed confusion. Grrr.

They rationalise everything this way because their bottom line depends on it. They are incentivised not to understand.

No user actually wants to be tracked. Some users might want “relevant” ads, but nobody wants to be tracked.

Not sure how DNT fits into this. No advertiser respected this flag and, ironically, some used it to fingerprint a user's device.
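The irony — a privacy flag itself becoming a tracking signal — is easy to sketch. This is a hypothetical toy; real fingerprinting scripts combine dozens of browser attributes, and the header values below are invented:

```python
import hashlib

def fingerprint(headers):
    """Hash a few request attributes into a device fingerprint.
    Every extra attribute, including DNT, makes the hash more distinctive."""
    material = "|".join(f"{k}={headers[k]}" for k in sorted(headers))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Two hypothetical users identical in every respect except the DNT header:
base = {"User-Agent": "Mozilla/5.0 ...", "Accept-Language": "en-US"}
with_dnt = dict(base, DNT="1")

# Turning DNT on moves you into a different (and typically smaller) bucket.
print(fingerprint(base) != fingerprint(with_dnt))
```

Because relatively few users ever enabled DNT, sending it could make a browser *more* identifiable, not less.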

Yeah, DNT was dead on arrival. I have no clue why the internet powers that be thought that would be effective in any way. I guess it's more of a feel-good action.

It needs legislation behind it

Or maybe someone just needs to be fined for showing a useless GDPR popup even though DNT was already sent. But I suspect that with the recent W3C downgrade it's no longer an industry standard, so nobody can be forced to acknowledge it.


Like "we use cookies, agree/disagree" irritating waste of space legislation that means nothing beyond the borders of wherever it was legislated?

The EU got it wrong with the popup aspect of cookies/GDPR legislation. It's just bad UX.

A DNT-style browser feature with legal weight behind it would have been miles better. Your browser could present a consistent interface for every website that requests consent, and allow you to set global defaults, etc.

Safari already has this type of thing for a lot of other related features that require (or make sense to provide) user consent - but not cookies.

Maybe it's a harder issue to solve with a UI (third-party cookies, subdomain cookies, etc.). Maybe they just haven't gotten around to it (I used to set the global "site I'm visiting" cookie pref, but since they added the "intelligent" cross-site tracking protection that option seems to be gone).

The popup about cookies was the UK, wasn't it? Unless GDPR also added a cookie requirement afterwards, but the mandatory cookie popup was an out-of-touch plague on the internet for quite a while before GDPR was a thing.

The mandatory cookie popup was only necessary when websites created cookies not necessary for the proper function of the site. Of course, every website ended up using them for tracking, so people ended up having to click through cookie notices everywhere.

The GDPR popup of TechCrunch (source of the article) is so infuriating I refuse to visit the site. I've spent 5 minutes on it, and couldn't find a way to opt out of tracking.

They are even doubly irritating when you don't keep cookies: you have to click through the rejections each and every time. Quite quickly a browsing pattern forms to avoid certain sites.

> The recent change to the default media player for play-pause away from the last app and to Apple Music is one example.

What are you referring to? The play/pause button has never been an "Apple Music" button, and in fact the recent behavioral change is to make it even more aggressively go to the current app (e.g. if you start playing a video in YouTube, the play/pause button controls that video instead of iTunes).

It doesn’t matter which app I last used - usually Spotify - when I get in my car the last song played in Apple Music plays. Annoying as hell.

iOS 12.1.2

That’s usually an issue with the head unit or a weird interaction between the phone and the head unit. My old car used to auto play Music when it connected no matter what the source was on the head unit. My new one is better about not auto playing and respecting the last app.

Yes, that's my impression as well. As mentioned in a sibling post, the car where it's misbehaving seems to treat the phone more as a media storage backend than an audio source.

Updated to 12.1.4 today, started listening to an audiobook in the Audible app, paused it, went to home screen & locked phone, got in my car, plugged in USB/lightning cable, started car, audiobook automatically began playing.

I have had it default to Apple Music when doing the same thing as above but with the YouTube app. I think it has something to do with YT not allowing background audio playback unless you sub to Premium. Which reminds me how much I miss Jasmine and ProTube.

This is almost certainly due to the car trying to use the old iPod control protocol to tell your device to play. AFAIK that protocol just talks to the Music app, but it's also been obsolete for years.

That's the case for one of my cars, not the other, but (without really knowing) I have a suspicion that it's actually the car doing this. There are a few cues in how its (now outdated) interface seems to want to act like its own full-fledged media-playing device, with the phone as merely backend storage, while my other car is more content to let the phone just act as an audio device with a few media controls (pause, skip...).

All of this is speculation.

I deleted the Music app and that stopped it.

Upgrading to a modern head unit also works but obviously costs more.

I have only experienced this when swapping out to YouTube, or when the application I was previously playing music in gets closed to free up memory after a long period idle in the background (you can see this when the application starts up from scratch and doesn't resume from its previous position).

Also on iOS 12.1.2. I have never encountered this. I usually play audio in my car with either Pocket Casts or Spotify, and iOS reliably picks up on my last used app. I also have the Apple Music app deleted—maybe that plays into it?

Nope, not related. I have the music app and I use it. My head unit plays whatever I was listening to last. Usually my podcast app.

That’s only true if your last playing audio app isn’t killed in the background in the interim. If it’s gone the next time your head unit asks for music, Apple Music gets used instead. So infuriating.

Same here.

"Do-not-track" is just another data point advertisers use to track you. It's worse than a placebo.

> Dropping Do-not-track from Safari is another.

I’m not sure what you are referring to, since Safari on desktop still has a DNT option in settings, and while mobile Safari doesn’t have a setting specifically labeled DNT, it has one that appears to be the same thing with a friendlier name.

Google's original business model was selling ads against search terms for users who aren't even logged in. They sell ads in a lot of other ways now, and do what they can to keep people logged in. But if they had to go back to it, they would still make lots of money, because a search query is still a really good signal of the user's intent to buy something.

Wouldn’t DuckDuckGo be making a lot of money by now if this were the case?

What makes you think they're not?

Reportedly, DuckDuckGo is now serving over 1 billion searches per month. If they're earning, say, $0.01 per search from ads, well... you do the math. Not bad for a small startup with 55 employees.

That would be pretty high. Google's rates are in the neighborhood of $3 per 1000 impressions and $0.75 per click. Recall that many SERPs don't even show ads.
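For a sense of scale, here is the back-of-the-envelope math under both sets of figures: the $0.01-per-search guess above and the Google-like CPM rate. The ad-coverage fraction is my own added assumption, since many result pages show no ads at all:

```python
searches_per_month = 1_000_000_000  # DDG's reported volume, per the comment above

# Optimistic guess from upthread: $0.01 of ad revenue per search.
revenue_per_search = 0.01
optimistic = searches_per_month * revenue_per_search   # ≈ $10M/month

# More conservative, Google-like figures: $3 CPM ($3 per 1000 impressions),
# assuming (my guess) only half of result pages show an ad at all.
cpm = 3.00
ad_coverage = 0.5
conservative = searches_per_month * ad_coverage / 1000 * cpm  # ≈ $1.5M/month

print(optimistic, conservative)
```

So the two assumptions differ by roughly 7x, which is why "you do the math" cuts both ways here.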

If they had anywhere close to the amount of traffic Google gets, they would.

Look at it this way:

Say tracking is perceived to add some value over content-based ad selection (doesn't matter if it actually does). Say it also provides a bunch of important-looking metrics for middle managers and execs to play with, whether or not any of it means anything (and it certainly does provide those). Say the price premium charged by Google and Facebook and friends is under what companies value this stuff at, in dollar terms. There's a small premium to it, say 10%, some of which goes to the sites displaying the ads.

Boom, practically all advertising is now of the spyware variety. That's all it takes. Competing without spying (and without a huge trove of data or a way to get it fast) is now nigh-impossible.

Now outlaw all the tracking crap. "Oh no, the Web is doomed, no one will be able to pay for anything!" you will hear/read from a disturbingly large number of people. They are dumb. It'll be fine. That 10% premium goes away, yeah. Content-based advertising is back. Having a huge trove of user data is no longer a moat, or indeed much of an advantage at all, in the ad space, so Google would need to scale back and/or learn how to launch a product that can stand on its own without the backing of their spyware empire. IOW, yeah, they'll probably actually be in serious trouble, but there's no good reason for them to be. Facebook might die or have to pivot hard. But that does not mean the end of the web, or anything like it! It just means the ad money goes somewhere else. Probably to a bunch of much smaller (though still maybe quite large) companies.

Anyway TL;DR it's difficult to succeed with non-spying ads in a spying-is-allowed world, but it's 100% for sure a viable business model that can support more or less the same junk we have now if you outlaw that sort of bad behavior. To the extent DDG's succeeding I assume it's their dark patterns and maybe (I am just guessing on this second one) they also do that extorting-companies-into-buying-ads-for-themselves thing like Google does.

I doubt it. Remember that Google's real customer is not its user but the companies running ads. Their ad revenue is directly correlated with the accuracy of its audience targeting. Think of every search result page from Google as real estate. There's only so much space you can cram ads into, so showing ads to an unnecessarily wide audience costs them precious space to make money. Google would lose a significant competitive edge if they stopped using personal tracking.

I believe the parent poster's point is that Google searches are at the end of the purchase funnel, when there's buying intent. As a result the click usually leads to a sale, and you can populate the ad based purely on the query. For instance, I query "flower delivery Tampa FL" and the query alone tells me enough to populate ads.
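That kind of purely contextual targeting is simple enough to sketch as a toy keyword matcher: the ad is chosen from the query text alone, with no user profile involved. All ad copy and keywords below are invented for illustration:

```python
# Map from required query keywords to (made-up) ad copy.
ads = {
    ("flower", "delivery"): "Tampa Blooms - same-day flower delivery",
    ("dentist",): "SmileCare - book a cleaning today",
}

def pick_ad(query):
    """Return the first ad whose keywords all appear in the query, else None."""
    words = set(query.lower().split())
    for keywords, ad in ads.items():
        if set(keywords) <= words:
            return ad
    return None

print(pick_ad("flower delivery Tampa FL"))
```

Real systems obviously add bidding, relevance scoring, and location parsing, but the point stands: a high-intent query carries most of the targeting signal by itself.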

>but as the lines become clearer users will start to shift.

I wish to be wrong, but e.g. Facebook has demonstrated that users don't really care. I'm not sure that anything can change that, although I would love to see it happen.

> I wish to be wrong, but e.g. Facebook has demonstrated that users don't really care.

Facebook's user demographic _is_ shifting, though, and the conversation around the company and product in the media has changed significantly. That might not immediately show large effects, but network effects driving sign-up also work in reverse and could trigger an avalanche of users leaving in the future.

Facebook will find it hardest of all, but at least they don't make an OS.

Google ads still work (command a good price) without the creepy stuff. "In Lisbon, searching for dentists" is enough for good ad targeting. Facebook ads are nearly worthless without the creepy targeting and such.

Apple is in quite a unique position to distinguish themselves, if they choose.

I actually wrote something about Facebook last night: https://medium.com/@adamjaggard/facebook-will-die-of-boredom...

I think Facebook will kill itself separately from everything else going on in the industry with regard to privacy. I think Facebook's value proposition is wearing thin.

Yes things are starting to swing towards privacy. We have GDPR, Facebook's constant fuck ups, and Apple publicly dunking on the ad-tech industry to thank for that.

Recently someone on HN mentioned off-hand that changes to the chrome extension model are coming. Specifically, these changes would break ad-blockers as they currently work.

Point being, there is some movement against this.

When the average app can send dozens of megabytes with no one blinking an eye, lots of stuff can slip through.

I’ve long said that platforms need to enforce limits, hard limits, on all I/O. And in order for something to get an exception, the system should have a big, obvious, inconvenient work-around that places plenty of blame directly on the developer (e.g. “The application Foobar is using an extremely abnormal amount of your battery power, and has used your Internet service more extensively over the past hour than any other app on your device. We recommend uninstalling this program completely, or select one of the throttling choices below:”.)

And frankly, there is no shortage of reasons why we should do this: to prevent abuses like this latest one, to prevent draining batteries, to avoid expensive data plan overages, etc. (and heck, to save the planet, because stupid simple things should not require a pile of natural resources to download to your device).

I’m truly sorry if your overblown JavaScript framework can’t draw 3 lines of text on the screen without transferring an entire 1990 operating system’s worth of code over the network for each paragraph. Yet if devices start enforcing really tiny limits (my vote is, oh, 2 kilobytes of data), I bet your organization will finally figure out what data is really important. Good luck.

I can't say I agree at all with this hyperbolic reaction. How about people just be judicious about what they install and who they trust as always. I'm for privacy as much as the next person but not when it infringes on an open platform to the degree that you suggest, and Apple already overreaches in terms of censorship and control IMO.

> How about people just be judicious about what they install and who they trust as always.

That sounds good, but how well does it work? If you find an app that solves a use case for you, what factors do you use to determine whether you trust this developer? How can the average iPhone (/smartphone) user apply this?

It works great. Even my parents learned what to install and not install on their windows PC. The key being that the safety features are all on by default. But for those of us that need to run software off the beaten path, we can.

I'd love to teach my parents, but I'm still challenged by giving them indicators to determine what should or shouldn't be installed. I actually find it hard to do myself, so maybe you can help. For example: My mom likes to play Sudoku. How should she go about determining whether it's fine to install this app: https://play.google.com/store/apps/details?id=com.easybrain.... ? How can she (or I) determine whether this app is uploading data collected on her phone or doing some other sort of shenanigans?

I think Apple is making the right move. They are responsible and accountable, and users don't understand how anything works. I'm reminded of the various browsers with toolbar addons bloating up the screen.

This is as user-unfriendly as it gets. People want to look at hours of cat pictures and Netflix videos (or Facebook, Instagram, or Snapchat).

Once phones start throwing this kind of crap in people's faces there will be (understandable) outrage and bad sales figures. Technology is supposed to make your life easier, not harder.

You had it just before the last sentence. Technology is supposed to make profit.

Cat pictures don't require an abnormal amount of battery and bandwidth. All streaming video could use a dedicated API method, excluded from the cap.

Browser content would be more difficult to regulate; on the other hand, web apps don't have as many permissions as native ones.

One of my goals is to put the blame in the right place. It isn’t “my phone can’t last all day”, it’s “terrible apps are wasteful and do questionable things”. It isn’t “I need a better data plan for another $10 a month”, it’s “apps are greedy, lazy and, to put it mildly, sub-optimal in their use of data”. And so on. Annoyed developers and ruined sales will NOT affect apps that are already good citizens.

It's a good thing that platform providers are locking down this sort of thing.

The problem is a culture of 'it's ok, everyone else does it, the world runs on it, maybe things need to change, but we need to find an alternative equivalent first.'


It was always wrong. It was never justifiable. Permission was never requested, only assumed.

If you are recording my screen without making it abundantly clear up-front that you are doing so and why, and without allowing me to decline without providing any additional identifying information, you are automatically unworthy of my trust, not only now but forever. The people who thought this was acceptable are unfit to make such decisions, now and forever.

In short: fuck you and your advertising.

Why is this sort of bullshit even allowed to be technologically possible? Because someone profits. Screw their thumbs until it isn't. You want to make money from the people using your software? You get it from them, specifically from them, by providing a worthwhile product. You should be punished for selling them to someone else.

Unless you can teach them the value of that which you ask in exchange for your product, you are committing fraud.

You don't get to benefit from the information you siphon from people who don't understand the value of it. You know exactly what you've done and always did; no second chance is warranted.

Devil's advocate question: do you (or should you) have to agree to being videotaped by security cameras before entering a store? What if you had to do that for every single store?

> In short: fuck you and your advertising.

Nah, I don't want to pay for every single site I visit. Content is not just given out for free. Are there limits to what counts as too much? Sure, and that's the type of discussion we should be having.

> You should be punished for selling them to someone else.

That's not how online advertising works. Rather, it would be the equivalent of someone on your block knowing that your house is a 3-bedroom, 1-bath, but knowing nothing about the people who live inside.

Here's what the Canadian government[1] says about the use of surveillance cameras and notice to the public:

> Q. Should we post signs that there are cameras in operation?

> A. Yes. Most privacy laws require the organization conducting video surveillance to post a clear and understandable notice about the use of cameras on its premises to individuals whose images might be captured by them, before these individuals enter the premises. This gives people the option of not entering the premises if they object to the surveillance. Signs should include a contact in case individuals have questions or if they want access to images related to them.

This is a stronger requirement than the "we may use your personal information to improve our services" language in the EULAs for almost any of these apps.

So, ironically, I think you just helped prove the point you were replying to. Hoovering up ALL the usage information without specific notice or consent is not ok.

[1] https://www.priv.gc.ca/en/privacy-topics/surveillance-and-mo...

> I don't want to pay for every single site I visit

I do - if the content is good, the price reasonable, and the transaction frictionless.

And if that's what it takes to get rid of intrusive advertising and user tracking, sign me up.

Plus, spy-vertising and creepily recording users' mouse movements is 100% not required for Web advertising to be A Thing that Pays For Lots of Stuff. The notion that if we kill the moats that Google and friends have with their vast troves of user data and various spying methods it'll be anything more than a speedbump to the Web generally is ridiculous. Bad for Google and Facebook and so on, yeah. But so what.

You'd also have to pay more than what you're worth at status quo today.

By some estimates your data's worth $240, assuming you're an exactly average user: https://medium.com/wibson/how-much-is-your-data-worth-at-lea...

Since not every user will pay or can afford to pay, and presumably heavier users are worth more, you might have to pay thousands, or tens of thousands (in case you bought a house or something based on targeted ads).

You can sort of simulate this today: imagine if you were to bid past all other ads shown to you for every ad slot on every site, then you could replace them with an empty picture. That might get expensive.

> By some estimates your data's worth $240

Thanks for putting a price point on the amount that people^H^H^H^H^H^H^H advertisers will pay to ruin my freedom. I value it much more than they.

Most ads these days are CPC. As I never click any ads, the “value” is none.

I could easily outbid, as I never actually pay for a lead.

Stores don't actively use or sell my habits in security camera footage to advertisers however. If stores started selling footage of me in security cameras to companies I would want new legislation to require permission before filming. Furthermore, a store is a public place, and I don't expect privacy in the same way that I expect privacy in my own home while on my phone.

Mmm, that's a fair set of points.

I don't think that content should be produced for free, but I argue that advertising-by-invasion-by-default is only acceptable now because not enough people called it out early enough, because they didn't realise the cost.

'It's always worked this way' is not a valid defence. Building a business on something should not later affect the legality of that thing.

In other words, start looking for another way for things to work instead of trying to claim that the status quo is the only way things can work.

Yes, I'm aware that this probably will feel like a step or two backward. Leaving a local maximum always feels like that.

[edit] I forgot to address the security camera issue. That's an awkward conflation of 'physically walking into a store where you can physically pick up and leave with an item which cost physical resources to produce' with 'asking if the person standing outside can let you look at a menu'.

You seem to not understand what this issue is about. It's TechCrunch's fault. They did a terrible job of reporting on this issue.

> “Your app uses analytics software to collect and send user or device data to a third party without the user’s consent. Apps must request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity,”

Every big app collects analytics without such disclosure (and definitely no visual indicator). I honestly can't think of a counterexample.

If Apple's declaration is taken literally, this will have massive fallout on the analytics ecosystem.

> If Apple's declaration is taken literally, this will have massive fallout on the analytics ecosystem.

Good. I don't know when it became OK to log just about everything a user does in your app (or operating system, in the case of Windows 10) "just in case". It's creepy, often unnecessary, and prioritizes the convenience of the developer over the user's right to privacy.

Especially since software UX has not become significantly better as a result.

Exactly. Only "conversion".

I wonder whether monetization has improved instead, or whether the analytics haven't had any effect at all.

You didn't pay attention to the key part "to a third party". Most of the larger companies don't use third party logging software, AFAIK.

That's not true. Most companies offload analytics to a third party such as Google Analytics or Mixpanel.

Indeed, there are only a handful of analytics frameworks that control the bulk of marketshare, Google being the largest. Just banning the frameworks entirely would fix the problem for a vast majority of apps.

Sounds like a great opportunity for someone to sell some old-fashioned on premise software.

On-premise is not old fashioned. It is indeed a requirement in several countries, for many domains including finance, banking, healthcare, telco and such. When you include PII in your analytics service, it has to be taken care of explicitly. There are indeed a few on-premise mobile / web analytics platforms for product analytics purposes that you can deploy on your own servers and retain your users' data with your own rules with no dependence on 3rd parties (disclaimer: I am cofounder of Countly).

Just some minor pedantry - that should be on premises. Although it's pretty commonly misused, technically premises should always be plural when referring to a property/residence/place of business, and premise (as a noun) only means "a previous statement or proposition from which another is inferred or follows as a conclusion."

Already exists, it's called Matomo and it is awesome.

Neither of which show the passwords or credit cards users enter into input boxes, or replay the entire session and every single thing the user does in-app, like the software in question does. Did you read the original article on the software being used here? It's not analytics in the sense like the companies you mentioned are providing. https://techcrunch.com/2019/02/06/iphone-session-replay-scre...

Ohh, I wonder if this is related to why Apple is doing this. It benefits the end user but also punishes app developers who rely on Google.

> It benefits the end user

There's your reason.

How funny that "benefits end user" and "punishes ... Google" are in the same sentence in such a nonchalant way.

Dropping all analytics and crash reporting hurts the product and by extension the user.

Apple aggregates and sends crash reports if users choose to share them with app developers.

Now you may ask yourself why some do not see that as enough to fix bugs in a timely manner.

The second of the two quoted sentences doesn't include that qualification. Does Apple view it as implied by the previous sentence? Quite possibly, but both viewpoints are arguable.

The first party already has your data under their own privacy policy. But users aren’t necessarily agreeing to the privacy policies of third parties when they use an app.

There is a pretty significant difference between recording users' activities at an abstract, aggregate level (e.g, to measure DAU and user retention, or to track how many users use a feature or complete a funnel), and recording the activities of individual users at the highly specific level used by Glassbox. I suspect that Apple's statement is meant to target the latter, not all analytics as a whole.

There's a fair number of vendors that "record" web sessions too. Not via a screenshot, but the difference isn't much to a lay person. They track mouse movements, key up/down, etc. Enough to credibly recreate a "video" of sorts. Apple would have a harder time banning that.

> Apple would have a harder time banning that.

Well, obviously Apple cannot ban what is outside of its walled garden.

I'm not sure about that. They certainly control which browser apis they choose to support.

And it seems like the APIs that bring web applications closer to native-app functionality are exactly the ones they lag on.

I agree, which is why the exact verbiage on Apple's end is important.

Not every big app. Every app.

It can often be difficult to determine the root cause of an issue when you are just given a stack trace. I suspect we will soon see two patterns arise: (1) popups when the app launches to get consent, and (2) screen recording that still happens but only phones home if an exception occurs, with consent requested at that point.

And what does this mean for core metrics like Google Analytics?

I have published several apps on the App Store and none of them make use of any analytics or screen recording or any telemetry, really. So your statement "every app" is categorically wrong as far as I can understand it.

What kind of metrics can you get from Apple? Clearly number of downloads, but how about things like Daily Active Users?

And if your app has any kind of online component, there's probably an HTTP request hitting your server whenever the user launches the app. So even without explicit telemetry in the app, you can easily get decent data from your web server's logs.
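As a rough sketch of that approach (the log format, user IDs, and paths here are invented for illustration), counting distinct user IDs per day in your own access logs gives a DAU figure with no in-app telemetry at all:

```python
from collections import defaultdict

# Hypothetical access-log lines: "timestamp user_id path"
LOG_LINES = [
    "2019-02-07T08:01:11 user42 /api/launch",
    "2019-02-07T09:30:05 user7  /api/launch",
    "2019-02-08T08:02:43 user42 /api/launch",
]

def daily_active_users(lines):
    """Count distinct user IDs per calendar day from app-launch requests."""
    per_day = defaultdict(set)
    for line in lines:
        timestamp, user_id, _path = line.split()
        day = timestamp.split("T")[0]
        per_day[day].add(user_id)
    return {day: len(users) for day, users in per_day.items()}

print(daily_active_users(LOG_LINES))
# {'2019-02-07': 2, '2019-02-08': 1}
```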

Quite a bit, actually: daily active users, retention rates by day, sessions are among the things you can view.

I sincerely doubt Google would send ANYTHING to a third party unless it really had to.

That being said I think this is a good compromise and I hope Apple applies this to all vendors.

I think the biggest winner here if I read this correctly is AWS and other cloud providers. The most straightforward way for someone like mouseflow to comply is to separate each customer to its own instance in a way that mouseflow has no access to user data.

> I sincerely doubt Google would send ANYTHING to a third party unless it really had to.

In this context, Google is the third party. Plenty of apps use the Google Analytics SDK[1], or the newer GA Firebase SDK[2] for their analytics.

[1] https://developers.google.com/analytics/devguides/collection...

[2] https://firebase.google.com/docs/analytics/ios/start

Huh? Google’s entire revenue model is based on advertising.

Of course they send your info to a “third-party,” that party being the advertiser(s).

Google doesn't have to send your data to a third party to get advertising revenue.

The advertiser creates an advertisement and passes it to Google with a selector, "We want to show this advertisement to men over the age of 35 in Milwaukee", and a price per click.

When someone who fits the selector arrives, the advertisement enters into Google's auction.

If it wins the auction, it's rendered to the customer.

Google doesn't say "Here's the list of our customers, and who they are, let me know who you want to send ads to".
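That flow can be sketched roughly like this (selectors, bids, and field names are all invented, and real ad auctions are far more elaborate): the user's attributes stay inside the auction, and the advertiser only ever supplies a selector and a bid:

```python
# Hypothetical ads, each with a targeting selector and a per-click bid.
ADS = [
    {"creative": "watch ad", "selector": {"gender": "m", "min_age": 35, "city": "Milwaukee"}, "bid": 1.20},
    {"creative": "shoes ad", "selector": {"gender": "f", "min_age": 18, "city": "Milwaukee"}, "bid": 2.50},
]

def matches(selector, user):
    """Does this user fit the advertiser's targeting selector?"""
    return (user["gender"] == selector["gender"]
            and user["age"] >= selector["min_age"]
            and user["city"] == selector["city"])

def run_auction(user, ads):
    """Return the highest-bidding eligible creative; user data never leaves."""
    eligible = [ad for ad in ads if matches(ad["selector"], user)]
    return max(eligible, key=lambda ad: ad["bid"])["creative"] if eligible else None

print(run_auction({"gender": "m", "age": 40, "city": "Milwaukee"}, ADS))  # watch ad
```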


Googlers will insist that they don't "send" your info to the advertisers, they only let advertisers buy ads based on that info.

I'm a non-Googler who has spent a lot of time around media analytics, and I'll make the same insistence that that is a true and important statement.

Google is a one way mirror. They will suck in as much data as you want to give them about people, but it's virtually impossible to pry any info out of Google that's at an individual and identifiable level. Only aggregated performance information is exposed to advertisers, and the ability to mix and match targeting criteria based off of those dimensions that they expose.

Even with their enterprise marketing products like DoubleClick you can't pry out individual-level data. DoubleClick customers used to be able to export an anonymous ID, so they could use independent third-parties to consolidate conversion and impression data and measure attribution across complex marketing campaigns. But even that isn't possible anymore, due to GDPR concerns[1].

The closest they now get to "sharing" user data is Ads Data Hub[2]. And by sharing, I mean they expect you to share all of your data from outside of Google, which Google will then connect to their data and allow you the privilege of running aggregated queries. But they actually keep their side of the data firewalled off, and it's not human readable or accessible at the row level, only in rollup queries.

In the ad world, I can assure you they are far more protective of user data than most anyone else. The size, effectiveness, and dominance of their marketing channels afford them the ability to take that position without it materially impacting advertiser spending. Very few advertisers take such active measures to insulate exposure of user data from advertisers. And for many that do invest resources in such endeavors, it doesn't mean that they don't provide user data to advertisers. It just means that they don't provide it for free anymore.

[1] https://adage.com/article/digital/google-s-move-remove-doubl...

[2] https://developers.google.com/ads-data-hub/

Sorry, I should have phrased that differently. I'm not doubting the factual accuracy of your account. I guess I'm not sure why this is supposed to be reassuring?

Oh, don't worry, only the most data-hungry machine ever built has your data.

Ha! I completely understand and agree with that viewpoint.

It's only mildly reassuring to me relative to how loose I know most providers are with leaking user data. And because, having interacted with it from the advertiser side, I can tell that they not only recognize the long term value in protecting user data, but that they also invest the resources to design their systems and processes with that fundamental premise in mind.

Conversely, I also recognize just how much they do know about me, and just how privileged of a position they're able to take due to their market-dictating scale. The thought of their growth ever stalling terrifies me, since it can give them cause to re-evaluate that fundamental premise if they ever need to shore up their numbers.

I see, and stand corrected. Must presume this is as long as they never get desperate (a la Yahoo).

Pretty much. Here's to hoping for a long and profitable future for Google on its current path. Because the alternative is too terrifying to contemplate.

The alternative is inevitable. No company lasts forever.

Well maybe it will help OPEN SOURCE technologies like Matomo spring up and you can, I don't know, OWN YOUR OWN DATA and pay for your own hosting.

We got a 24 hour mandate to upload a tracking free version. Fuck you Apple. "Your lack of preparedness is not my emergency."

My European dev team is now up at almost 1am on Saturday putting in a fix for this.

Unless Apple themselves is going to provide tools to help make better, crash-free software, these are necessary third party tools.

And likewise a big chunk of cybersecurity.

Yea Apple needs to clarify what is allowed and what is not. Keeping track of user clicks is pretty much industry standard.

I believe Apple just used their weight to declare that industry standard is, as we already know, anti-customer and anti-privacy. I suspect they’ve realized that beyond the iPhone and iPad, their biggest advantage is that they have history of respecting user privacy. Leaning into that when companies like Google literally can’t because spying on people is the core business model, is pretty smart.

I’ve seen them in action, these frameworks do way more. They record touches, delays between touches, sensor info, key logs, including mistypes. Watching the playback of the collected data against the backdrop of the app is like watching a remote desktop session.

> Yea Apple needs to clarify what is allowed and what is not.

Apple explicitly does not do this, because they know that they'll draw a line and two months later someone will try to get through on a technicality.

I think many people would be surprised by the amount of analytics data leaving their phone _all the time_. I recently was doing some work where I had my iPhone proxied through mitmproxy on my laptop, and was blown away by just how much data was being sent. Some apps were sending a request to one or more analytics firms every single time I touched a UI control. I would set up a pi-hole and VPN to block this stuff, but I'm sure the app developers will just start tunneling the requests through their own hosts. Maybe some day one of these open source phones will actually be viable.
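For anyone wanting to reproduce this kind of audit, mitmproxy can be scripted with addons; a minimal sketch (the tracker list and script name are made up) that flags requests to known analytics hosts, run with something like `mitmproxy -s log_trackers.py`:

```python
# Hypothetical tracker list; extend as you discover more endpoints.
TRACKER_SUFFIXES = ("google-analytics.com", "mixpanel.com", "segment.io")

def is_tracker(host):
    """True if the host is (or is a subdomain of) a known analytics domain."""
    return any(host == s or host.endswith("." + s) for s in TRACKER_SUFFIXES)

def request(flow):
    # mitmproxy calls this hook for every client request it proxies.
    if is_tracker(flow.request.pretty_host):
        print("analytics call:", flow.request.pretty_url)
```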

I wish the AppStore review team would simply reject any app that generates unnecessary network traffic for no good reason.

They don’t even need to MITM the traffic. Just the fact that an app makes network requests when using a supposedly offline feature should immediately get them rejected.

And iOS should introduce a visible network activity indicator that can’t be manipulated by applications, like they do for location tracking.

I fully 100% agree with this. Apps should require user permission for network access. Further, the user should be able to control what domains/hosts the app has access to. The user should be able to have feedback indicating that the application is communicating over the network at what times and with how much data.

> Apps should require user permission for network access.

I think this is already a thing in China.

> Further, the user should be able to control what domains/hosts the app has access to. The user should be able to have feedback indicating that the application is communicating over the network at what times and with how much data.

This is difficult to do, since it's easy to swamp the user in prompts for every little thing.

> it's easy to swamp the user in prompts for every little thing.

I don't see this as a bad thing. It will let me quickly see what apps are built not following best practices.

Not all people value privacy over convenience as you do.

I would even surmise that a user's patience threshold for that kind of annoyance these days is pretty low.

I don't have patience for that either. I'd just uninstall the app.

Have you ever used an application level firewall like Little Snitch? I have tried using it a couple of years back, and gave up after less than an hour.

In principle it's nice to be able to manually allow / reject individual requests, but in practice you can't get anything done if you do. Popups keep popping up all the time, and it's rarely clear what the request is good for. Then you get random failures, because you accidentally blocked an important request, or you just allow anything anyway, because there really is no way to know if that request is good or bad.

And that's from the perspective of a software developer who understands what protocols and ports and addresses are.

A firewall like that would be absolutely useless for non-technical users.

You're spot on with the network activity indicator. And that network activity indicator should divide foreground/background traffic. Sometimes I want that network traffic, and sometimes I don't. It should be information given to the user.

I would love a Little Snitch for iOS.

I’d go further: I want a screen in Settings for every app, that shows me what it contacts, and how much data it sends, with the ability to block anything I don’t like on a per-app or global basis.

Basically I want Little Snitch built into iOS as a core feature. Hiding it behind “Advanced..” is fine.

I use this app. So far, it blocked 1108 data trackers and 3 locations trackers.

I used to work for Crittercism (now Apteligent), but I don't claim to speak for them in any way. I can state that while I was there, I never saw anything even remotely creepy being done with the data that was sent to them. As a baseline, I'm a card-carrying member of the EFF and my definition of "creepy" is a lot easier to meet than most people's.

A typical setup would work like this: when you launch an "instrumented" app, it generates a UUID. Then whenever a user interacts with the app, it would send messages like "UUID 1234... launched app version 3.14. UUID 1234 clicked the 'home' button. UUID 1234 viewed their profile page. UUID 1234 searched for a video. UUID 1234 played a video. UUID experienced an OutOfMemory exception in module foo, line 942." These were aggregated together so that you could run reports like "among people who experienced the OutOfMemory exception in module foo, line 942, how many viewed their profile page first?" That allowed developers to very quickly focus on the exact steps required to reproduce a specific problem.

So sure, apps were gathering a lot of information about what you were doing, but it really was genuinely for your benefit. There was no way for customers to run queries like "what video was heywire watching?" or the like. Everything was 100% focused on being able to quickly and accurately identify the cause of crashes. Now, that was just one company and it was several years ago. Maybe every other company was creepy? Maybe Apteligent is, too, now? I don't know. I don't have any insider knowledge into the current state of things. But at the time I personally witnessed it, I would have felt very comfortable at an EFF meeting explaining how every byte of metrics information was being used.
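As a sketch of that aggregation model (event names and data are invented), the reporting layer only ever sees cohort-level results computed over the per-UUID event streams:

```python
from collections import defaultdict

# Hypothetical event stream in the shape described above: (uuid, event).
EVENTS = [
    ("1234", "launched app"), ("1234", "viewed profile"), ("1234", "crash foo:942"),
    ("5678", "launched app"), ("5678", "crash foo:942"),
    ("9abc", "launched app"), ("9abc", "played video"),
]

def share_with_prior_event(events, crash, prior):
    """Among users who hit `crash`, the fraction that did `prior` beforehand."""
    history = defaultdict(list)
    for uuid, event in events:
        history[uuid].append(event)
    crashed = [h for h in history.values() if crash in h]
    hit_prior_first = [h for h in crashed if prior in h[:h.index(crash)]]
    return len(hit_prior_first) / len(crashed)

print(share_with_prior_event(EVENTS, "crash foo:942", "viewed profile"))  # 0.5
```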

That steps way the hell and gone past my creepiness threshold. Being able to justify it in your own head as being "for [the users'] benefit" doesn't change the fact that there's a shitload of raw data there that the users wouldn't voluntarily provide if they knew about it.

I like what Apple does, they ask you if you'd like to automatically submit usage data and crash reports, and you can opt out without any loss of functionality. I have no idea what % of users opt in, but as long as some people opt in, it'll be win-win for users and developers.

In this specific case, given what I've told you about the kind of data they collected, how it was used, and how it was surfaced to the consumers of the data, what specifically about that data flow bothers you?

> I've told you about the kind of data they collected, how it was used

No, you've only talked about what currently happens when everything is working properly. What happens if the company ends up in financial trouble; do they have a Ulysses Contract[1][2] on record that binds their future ability to monetize all of this data? Without legal enforcement, we just have to hope this company will somehow resist the temptation that most other companies are not able to resist.

> what specifically about that data flow bothers you?

> it generates a UUID

That's obviously personally identifying, and it's attached to all of the analytics you describe. Just because it's synthetic doesn't make it anonymous. Once it's mapped back to other information - which is trivial if you correlate IPs[3] or event timestamps[4] - this type of analytics is only an INNER JOIN away from being merged into someone's pattern-of-life[5].

The problem isn't what happens when everything works as intended. You need to also prepare for when (not if) your data is merged into other databases, and what others might do with the data in the future.

[1] https://en.wikipedia.org/wiki/Ulysses_pact

[2] https://www.youtube.com/watch?v=zlN6wjeCJYk

[3] https://news.ycombinator.com/item?id=17170468

[4] Take a set of "UUID 1234... launched app" events for a common app that is regularly launched e.g. when someone wakes up (or whenever). Correlate those times to other times that also happen to be launched (or webpages/email visited) at similar times. What are the odds that two unrelated people just happened to open different apps [..., 2019-02-04T10:11:22, 2019-02-06T10:17:44, 2019-02-07T10:14:52, ...] (+/- maybe 30 seconds)? A unique identifier and a few high resolution (seconds) timestamps can easily identify someone uniquely when you have enough data points.

[5] https://en.wikipedia.org/wiki/Pattern-of-life_analysis
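The timing attack in footnote [4] is easy to sketch (the IDs and Unix timestamps below are invented): two pseudonymous identifiers from unrelated datasets can be linked purely by matching event times:

```python
# Launch timestamps (Unix seconds) keyed by each dataset's pseudonymous ID.
APP_A = {"uuid-1234": [1549275082, 1549448264, 1549534492]}
APP_B = {"cookie-9f": [1549275095, 1549448270, 1549534500],  # same person?
         "cookie-7c": [1549100000, 1549200000, 1549300000]}

def linked(times_a, times_b, tolerance=30):
    """True if every event in A has a matching event in B within tolerance."""
    return all(any(abs(a - b) <= tolerance for b in times_b) for a in times_a)

for uuid, times_a in APP_A.items():
    for other_id, times_b in APP_B.items():
        if linked(times_a, times_b):
            print(uuid, "is probably", other_id)  # uuid-1234 is probably cookie-9f
```

With only a handful of second-resolution timestamps, the chance of a false match across unrelated users drops off very quickly.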

Literally every app that uses Firebase can do this, and Firebase is pretty much the standard for scalable apps for the indie dev.

The problem with analytics data is that it can look harmless, but it's easy to convert seemingly trivial data into tracking points. For example, consider this: UUID 1234 always watches video between 16:00 and 17:00 UTC, and then again after 02:00. This gives you a pretty good idea where UUID 1234 lives, as well as their daily schedule (perhaps they have a job during that period? Maybe the fact they don't watch video at 21:00 means they went out to eat that day?)

That's true if you actually collect any identifying information about UUID 1234. If I, as the consumer of the data, can't tell whether they're in Los Angeles or Beijing, then that information doesn't tell you much.

I can say that at the time I was there, it was not possible for a developer logged into our system to suss out any fine-grained information about a particular user. They just got aggregated data like "92% of people who experienced this symptom did this other thing right before it happened".

Wouldn’t IP addresses be routine information collected by apps that collect information over the network? That by itself would give away the location to a good degree of accuracy in most cases.

On mobile no it wouldn't. Often enough it barely even gives away the country. When roaming with Fi in Europe I have a US IP.

> So sure, apps were gathering a lot of information about what you were doing, but it really was genuinely for your benefit.

For my benefit without giving me a clear understanding of what was being collected or the option to opt out. Gee, you really shouldn't have.

I find the user-aggressiveness of some of these techniques to be insane. I have a similar setup to you and have two pi-holes for DNS on my home network propagated to all devices, including a few smart devices. On any given day roughly 45% of my home traffic (3-6 people) is blocked by pi-hole.

Aside from the Librem 5, have you thought of buying an Android phone you can flash, flash an open source ROM (LineageOS), and only use FDroid?

Yes I know that you still have baseband + binary drivers, but at least then all of the apps are open source, and so is the OS.

Yes, definitely. One of these days my freedom loving privacy aware side will win out against the side of me who enjoys the convenience and ecosystem of iOS :). At this point my entire extended family is on iOS, and use several of the iOS only features.

Haha that's fair!

Out of curiosity, which iOS features do you use? I ask because one option is to use that as your phone alongside an iPad, so you can segregate your very private stuff and still use an iPad for iOS features.

Certainly nothing that couldn’t be done on another platform, but Apple has done a great job of locking us in. iMessage (including playing the games via iMessage), FaceTime, the built in location sharing features, sharing app purchases, etc. All easily replaced with alternatives, but the friction to get everyone to switch over is tough. My personal systems almost all run Linux, and I do have an android tablet, but for my main communicator, iOS is just so convenient.

Gotcha. I am "that guy" in my family as almost everyone else has iOS, so I don't share in a lot of that.

One thought for you: I just had my Nexus 5X die, but I got a Sony Xperia XA2 for $150 and flashed it with LineageOS. See if you can use it and wean yourself off?

I should take a look at mine and see what its like. I have a degoogled android phone with fdroid so I don't expect it would be too bad. Is there any way to do it so you see the data before encryption?

I did this recently and almost every app was using certificate pinning, so seeing the actual data was difficult. My solution in the end was to set up a tiny VM with dnsmasq configured to forward all requests and log them as it does so, then just tail the log. There was remarkably little caching going on on the device, so almost every action triggered a DNS query.

Alternatively setup Pi-Hole for testing.

Microsoft Edge (on Android) was the worst offender, contacting vortex.data.microsoft.com almost constantly, on almost every UI action I made. Also notable was the number of apps contacting analytics servers in the background, when the app was (from an end user perspective) not even running.

If the app doesn’t pin a cert then mitmproxy should allow you to decrypt the data

Exactly this. Mitmproxy makes it very simple to download and install the root certificate on your device. Just don’t forget to remove or disable it when you’re done.

Windows collects even less data, yet people mention telemetry in almost every post related to Microsoft.

And when will browsers protect users against activity recording without consent?

For example Hotjar [1], I did a review [2] of the product a year ago and I could not believe the creepy surveillance level of this tool.

For me, manually disabling JS or installing content blockers will never have mainstream appeal for regular users who just want to browse the web (and don't know that they might be being recorded).

This should be blocked by default on every browser.

[1] https://hotjar.com

[2] https://www.youtube.com/watch?v=FDgybTvnhjY

It's hardly new. Back in 2008, I worked for a company that used Tealeaf, a proxy that intercepted all web traffic before it even hit the load balancer, to record every click and request. It was used by the help desk and, when they couldn't figure it out, the Tealeaf session and player were forwarded to the devs. (This was a company under HIPAA, so a bunch of that data was related to personal health records.)

In 2012, I worked at a University with analytic tools that showed a color map indicating the average scroll speed for pages on our website and heat maps indicating how long different users hovered over a section.

This stuff has been around for a long long time.

> the average scroll speed for pages on our website and heat maps indicating how long different users hovered over a section.

Optimizely has all of this stuff (sampled), but the fact they can 'sample' something means they have the full data.

It's creepy indeed. Not only do they collect all your actions (key presses included), but I believe they also send the activity to their servers via HTTP, rendering the SSL on the page that includes their script useless.

If it's a HTTPS page, wouldn't that be blocked due to mixed content though? Or is HTTP requests from a HTTPS-loaded script allowed?

Modern browsers should block all backend/JavaScript HTTP communication if the main request is made over HTTPS, unless you specifically allow it with a Content Security Policy.
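For reference, a page served over HTTPS can also opt in to stricter handling explicitly; these are standard CSP directives, though exact behavior varies by browser. For example, to block any `http://` subresource outright:

```http
Content-Security-Policy: block-all-mixed-content
```

Alternatively, `Content-Security-Policy: upgrade-insecure-requests` tells the browser to rewrite insecure subresource URLs to `https://` before fetching them.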

Better to just disable javascript altogether. Sure, there's no dynamic loading of garbage, but I didn't want that anyway. If your back-end server can't render HTML then you need to build an app.

At least with native desktop apps I can put that garbage into a VM or container. Load whatever you want. I can then apply my own firewall/containerization/VM rules.

According to their documentation it is sent over HTTPS.


I wonder if this move is related to this recent story [1] from theappanalyst, which reveals not just a privacy, but also a security nightmare.

The article even seems to mention it "Even though sensitive data is supposed to be masked, some data — like passport numbers and credit card numbers — was leaking."

[1] https://news.ycombinator.com/item?id=19102036

It's almost certainly a response to that.

It just seems to be really fast. The article appeared on HN 21 hours ago. Unfortunately there is no date on the blog post itself.

When I read the report I was hoping that it will have serious consequences for Glassbox and similar services. Good that Apple is taking action so fast. I hope Google kicks in soon too.

I think there was some associated press coverage for the last couple of days, of which that article was on the tail end of.

There has been very similar news on TechCrunch before, so this is not anything new, and it cannot be a response to this alone - Apple has been working on it for some time.

I appreciate and welcome this requirement from Apple.

Some time ago I built something I call “Network Blackhole”.

The project intercepts HTTP traffic from applications that I installed in either my MacBook or iPhone. An excellent example of this is Crashlytics [1], Segment [2], and Sentry [3], which are among a list of popular web services that many developers use to report bugs and crashes in their software, and the famous Google Analytics, which I hope I don’t need to explain what it is used for.

With the help of Little Snitch [4], a popular network monitor for macOS, I detect when an app tries to connect to one of these services, or similar. Then I execute a tool written in Go like so: “blackhole example.com”, which does the following:

1. Inserts domain into /etc/hosts

2. Create an HTTP web server (in Go)

3. Adds a match-all endpoint to the server

4. Creates an SSL certificate with mkcert [5]

5. Creates an Nginx virtual-host for the server

6. ???

7. Profit

In the end, and after 1-2 minutes, I have all the traffic to that domain gracefully redirected to a black hole, reducing the amount of data that I leak to 3rd-party websites.
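For comparison, the match-all server in steps 2-3 can be sketched in a few lines of Python (the original tool is in Go; the port and details here are illustrative): every path on a sinkholed domain gets an empty 200, so blocked apps fail quietly instead of retrying.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlackholeHandler(BaseHTTPRequestHandler):
    def _sink(self):
        # Match-all endpoint: answer any path with an empty 200.
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    # Route every common method to the same sink.
    do_GET = do_POST = do_PUT = do_DELETE = _sink

    def log_message(self, *args):
        pass  # keep the sinkhole silent

# To run it for real (hypothetical port; point the domain here via /etc/hosts):
#   HTTPServer(("127.0.0.1", 8080), BlackholeHandler).serve_forever()
```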

However, don’t get me wrong, I understand the purpose of these services, I haven’t said they are evil or anything like that. I would probably use them myself if I had to, but I certainly would add an alert to ask for explicit consent from the user to send this information to a service that I won’t even have control over. If one of them leaks my customer’s data, I will be the only one facing the consequences.

I hope I don’t have to add another domain to my network black hole anymore.

[1] https://crashlytics.com/

[2] https://segment.com/

[3] https://sentry.io/

[4] https://help.obdev.at/littlesnitch/

[5] https://github.com/FiloSottile/mkcert

You have two links labelled [4], and you also didn't share your tool, which was the more interesting content of your comment ;)

Thank you, I added the missing link.

Unfortunately, I cannot share the tool because it contains multiple zero-days for apps that make use of Paddle [1] and Devmate [2] to grant and validate software licenses. It also contains zero-days for apps developed in partnership with Panic [3] and MacPaw [4]. I’ve been in contact with some of the developers of these apps to patch their software, and until they all release security updates I cannot share the code with the world.

[1] https://paddle.com/

[2] https://devmate.com/

[3] https://panic.com/

[4] https://macpaw.com/

Seems like a poorly thought-out policy. It should be another permission: "App ABC would like to capture your screen Y/N"

Also, IMO, if Apple were really serious about privacy they'd remove the permission for apps to use your camera and see all your photos, and instead require apps to use a system camera UI (so the system takes the picture, not the app). Similarly, if an app wants a photo from your library, it should be required to use a system photo selector so that the app can only see the photos you select for it.

As it is, every app that asks for camera permission can use the camera anytime it wants for any reason.

Similarly any app that asks for permission to see your photos can see all of your photos anytime it wants.

Neither of those is compatible with the idea that Apple is a privacy first company.

Note that if Apple enforced those, you wouldn't need to grant most apps camera and photo library permissions, since they wouldn't get access to the camera data unless you took a picture while in the app or selected photos.

I get that it might suck: gating the camera to system-only means experimental camera apps are out. Maybe they could find a way to add secure hooks, or maybe they need to add a new permission that security-conscious people would hopefully rarely grant: "Can this app access the camera anytime it wants, for any reason, including spying on you?"

Similarly, only using the system photo selector would mean apps can't make fancier photo selectors, but again, you could add a permission for that if it's really important. For example, the Dropbox/Google/Flickr/Facebook upload-all-your-photos features would need a "Can this app spy on 100% of the photos on your device?"

AFAIK, access to depth data currently requires no permission. Again, that doesn't seem like a privacy-first policy.

Also, Apple is apparently yanking the ability for webpages to access orientation data. Why should apps be any different?

I agree fully. It's a bummer that the "Photo access" permission isn't split further into read and write access.

What you can do – at least partially with some apps – is: open Photos, select photo, tap the share button, select the app you want to send the photo to.

I use this workflow to send photos via WhatsApp, because I don't trust WhatsApp to NOT upload all kinds of meta data (or thumbnails, or photos themselves).

Unfortunately, this doesn't work with all apps.

I would like to see a camera access indicator in the iOS status bar, which appears whenever a camera is active.

Another approach could be to require a separate permission for “covert” (off-screen) camera usage — i.e. capturing images or video without a camera preview on the screen.

I naively thought that was how computers worked when I was younger. It still feels weird that it isn't the case.

Define younger. Computers used to be the wild wild west where you could do anything. Nowadays companies are locking things up left and right - if you don't have technical expertise you mostly can't do anything except corrupt your OS.

Oh no, how will we track user engagement now? I need those engagement numbers for my promotion case by May, thanks Apple! >:(

On a more serious note: it’s nice to see that Apple is finally cracking down on these shady analytics SDKs. I’m this close to forgiving the headphone jack thing...

I, too, am thrilled with this development. I feel happy for supporting Apple; I bought an iPhone after years of Android because of their vocal and increasingly militant stance on privacy.

Let’s ban the entire Google analytics framework while we’re at it, including in Safari. Then we’re getting somewhere.

Server-side analytics, probably.

As a product developer, screen recording has been a game changer for finding UX bugs. There's no real replacement for watching your users use your app in real world scenarios.

It seems apple is only banning screen recording tools that send data to "third party" servers so hopefully there are open source self-hosted alternatives that we can use instead.

OK, then you pay some people to be a sample group who will allow you to do so, instead of keeping it under the hood and praying you aren't found out. Things like this are exactly why every app's network requests should be approved by the user.

Could you give people in-app purchases for free in exchange for permission to record, as a way to work together on improving UX?

From my understanding, Apple are happy for you to do it as long as it's explicitly opt-in (not just hidden in your terms of use) and there's an indication whenever the screen is being recorded.

My company's app screen-records everything (using repro.io). I've tried to talk with my superiors, but they don't care. Is there a way to report the app to Apple?

Does your company have a whistleblowing hotline? If it were me I would start there. If that doesn't bring you any satisfaction you might want to consider getting out.

That advice will probably get him fired. This will look like a fringe political issue to everyone else.

You want to report your company's app? That's unusual. I don't see any winners in that scenario.

The people?

Malicious Compliance is one of the better ways to make waking up on time worth it

As an Android user since the original Samsung Galaxy I only have this to say - I'm literally browsing the iPhone section of the Apple website right now.

> explicit user consent

Is this possible?


Explicit consent can only be given if a person fully understands what they're agreeing to. The weight of evidence suggests that once personal identifying information is out in the wild it can't be retracted.

While the banks, police, and media continue to refer to such a concept as identity theft, I don't believe it's possible for anyone to give explicit consent to their data being leaked and used by people with ill-will.

Edit: leaked, or intentionally sold to bad actors. Or the parties collecting the data in the first place are bad actors themselves.

Why doesn't the Apple app review team catch this prior to deployment?

I thought the big thing about having to use the App Store was that Apple kept it locked down tight to prevent this from happening?

I don't want to disparage that team, as I'm sure they work very hard and do an excellent job. But my overall opinion of that process, having gone through it multiple times, is that it's (a) security theater and (b) only exists to create the illusion of excellent curation. I hear all the time about different rules being applied to apps retroactively.

It's definitely not security theater. But it's also not perfect, and they can only catch the things they actually look for.

Unfortunately, it's often the case that they're not looking for the right things or not looking very closely at all…

Because the app review team only evaluates a compiled binary, it is fairly easy to obfuscate nefarious activity. It could be as simple as serving different javascript from your backend to a [JSContext evaluateScript] call based on a flag you set after the app is approved, for example.

Of course, serving up javascript from your backend to a [JSContext evaluateScript] call is itself a violation of the app store guidelines. Interpreted code not executed as part of a web page rendered by WebKit either has to be bundled in the app itself, manually entered by the user, or has to be in the context of a learning app with a slew of restrictions around the UI (basically, the exception for Swift Playgrounds).

Apple's guidelines around this are pretty clear, but almost everyone that tries to use this to A/B test seems to get through app review. I'd really rather this just be outright banned, but it looks like the current policy is annoyingly lenient.

Maybe. I'm not advocating this technique, merely explaining the possible gaps. Also pretty sure JavaScriptCore doesn't have a blanket ban on evaluating a remote NSString (not that it matters for a blackhat type app), and also, this was just an example, the same decision logic could be implemented by if'ing on xml values or whatever

Isn't it possible for the review team to check for video/screenshot data flowing to appsee.com or uxcam.com, and check whether the app has a privacy policy that mentions this explicitly?

Sure it is. So a flag could hide that.

For the 30% of my revenue they take, that seems like a pretty weak excuse.

I specifically put a disclaimer on my website (sephware.com) that my apps don't even collect, let alone sell, any of your data. Ironically, my app (Autumn) was rejected and not allowed on the App Store, even after I filed an appeal, because it uses Accessibility - despite there being a good handful of apps of the same type that were grandfathered in before that rule, many of them long abandoned and receiving recent poor reviews from users asking for updates. I have aimed for Autumn to be not only one of the more polished and aesthetically pleasing apps on the App Store, but among the most ethically conservative as well.

If your app is using the Accessibility APIs, is it actually an assistive app, meant to help people with disabilities? I've seen apps get rejected for using Accessibility but not actually being accessibility apps.

I'm waiting for the reveal of what Apple does with state actors, and for evidence that there can be anything close to 'sunlight on certificates' or other forms of transparency around what they do. So yes, we win when Apple fights for our privacy. What I need to understand is what we lost in the process, what compromises lie underneath, and how many wins the state(s) at large secured that we aren't winning - they are.

Key escrow, for instance. Insights into phone use. Monetisation of information feeds forbidden to third parties.

I still prefer Signal to iMessage.

This is a really dumb move. It's not like Apple was previously unaware of this, there were entire venture backed companies built entirely around being able to do this and they've been around for years.

Despite seeming scary, this is actually the most benign form of data collection. People have this naive notion that companies have this obsessive desire to track them as an individual. Working at tech companies, this could not be further from the truth. I do not give a shit about you as an individual, I care about you as a collection of attributes that I can correlate with the attributes of the rest of the user base. The only time I care about you as an individual is if you're reaching out to our customer service as an individual with a problem and I want to help diagnose it.

The problem is that screen recording data is remarkably useless for anything else, because it's too high fidelity to be aggregated. If I want to serve you more personalized ads or manipulate you into purchasing something, there are other tools that are far more appropriate for the purpose.

The only reason Apple is doing this is for PR reasons, to help signal to everyone that they're a privacy conscious business. But they're doing this by leveraging people's misunderstanding of how data collection is done and banking on emotional fears rather than actual damage.

Several friends and I recently noticed the red navigation bar indicating screen recording for a couple of seconds after closing Instagram. At the time I wrote it off as nothing, but now I wonder if even apps that big are doing similar things.

I think this needs to be clarified more.

As appreciative as I am of Apple's privacy stance, it really worries me that they're the only large company that seems to care... and when they don't, who will?

They should just ban first, explain later, in this scenario. The app developers know full well what they're doing, and the remedy should be punitive for scumbag outfits that operate like this.

I've used screen recording systems on both web and mobile and found them to be very valuable tools. And I've only ever used them to troubleshoot and improve the product. Never shared the data. That said, I've always been disturbed that it was considered fine to do. I think Apple's move is great. I hope other app stores do the same. I think it should be done for text-based analytics like GA as well, since they get so detailed that it's pretty much the same as screen recording.

Hah, you think that's bad just look at all the sites that use Hotjar. Some entry-level UX person at thousands of companies can see you typing in all your credit card data.

Or FullStory! Way back in 2016, that analytics package could already show you the full screen and every click of someone using a website or native iOS/Android mobile app.

All so you can "optimize your A/B testing" lol.

But don't worry, there won't be a Netflix documentary or Congressional subpoena till 2025, LOONNNNG after your startup's exit. Until then, Zuckerberg gets to be the face of this general disdain, and luckily gets to honestly have no idea what is going on, because the third parties integrated into his apps are the ones doing all the recording, user monitoring and data sharing.

I see this as more of a perf thing than a privacy thing. Since most apps connect to a remote backend, any app developer can create backend tooling that perfectly recreates a user session from server logs, with local actions logged locally and uploaded on the next request. But they're usually too lazy to, and waste my bandwidth by uploading logs directly from my phone to a third party.
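For illustration, session reconstruction from backend logs can be as simple as a filter-and-sort over request records (a sketch only; the field names here are invented, not any real product's schema):

```go
package main

import (
	"fmt"
	"sort"
)

// LogEntry is a hypothetical server-side request log record.
type LogEntry struct {
	SessionID string
	Timestamp int64 // unix millis
	Action    string
}

// reconstructSession rebuilds one user's session timeline purely
// from backend logs: filter by session ID, then order by time.
func reconstructSession(logs []LogEntry, sessionID string) []string {
	var session []LogEntry
	for _, e := range logs {
		if e.SessionID == sessionID {
			session = append(session, e)
		}
	}
	sort.Slice(session, func(i, j int) bool {
		return session[i].Timestamp < session[j].Timestamp
	})
	actions := make([]string, len(session))
	for i, e := range session {
		actions[i] = e.Action
	}
	return actions
}

func main() {
	logs := []LogEntry{
		{"abc", 3, "POST /checkout"},
		{"xyz", 1, "GET /home"},
		{"abc", 1, "GET /home"},
		{"abc", 2, "GET /cart"},
	}
	fmt.Println(reconstructSession(logs, "abc"))
	// prints: [GET /home GET /cart POST /checkout]
}
```

The point is that nothing here touches the device: everything needed already sits in logs the server produced anyway.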

Perhaps Apple was already waiting for an outrage like this. It allows them to be “forced” to shut down third-party logging. Which only hurts competitors. What can Google say? “Well....on Android we’re not a third-party...”?

They've already cracked down on tracking in Safari. This is the next step. It makes Google's software look like spyware.

I think apps that do recording/tracking should be forced to have a message similar to tobacco products. I think users should be able to do whatever they like with their devices, so I'm against closed ecosystems. But users should at least be able to make informed decisions.

Let’s pretend that clicktale or something similar doesn’t exist on the majority of big websites

I don't understand why they're only sending 'notices' to developers. Why can't it be a proper permission like all the others, which users have to explicitly allow when an app tries to do that? That would be proper security.

How would that be technically possible? All the app needs to do is log the user’s taps and swipes and then replay those inputs on a copy of their app.

Do any of the first-party iOS apps (iMessage, Safari, Podcasts, etc.) collect analytics data?

They do.

This is one of the reasons why I won't own a computer I can't root. On my Android phone I'm running AdAway which blackholes ad and tracking hostnames. This is currently impossible on iOS without using your own external DNS server.

I suspect companies will move to owning their own endpoints and having the info forwarded to google/mixpanel on the backend. Though to be honest I’m surprised it hasn’t happened yet.
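That kind of first-party relay is only a few lines of backend code. A hedged sketch in Go (the /collect route and the upstream URL are invented for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"net/http/httptest"
	"strings"
)

// upstream is a placeholder for the third-party collector this
// proxy would relay to; the URL is illustrative only.
const upstream = "https://analytics.example.com/collect"

// collectHandler receives analytics from the app on the company's
// own domain, then relays it server-side, so on-device blockers
// never see a third-party hostname.
func collectHandler(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Forward asynchronously; the client only ever talks to us.
	go func() {
		resp, err := http.Post(upstream, r.Header.Get("Content-Type"), bytes.NewReader(body))
		if err != nil {
			log.Printf("relay failed: %v", err)
			return
		}
		resp.Body.Close()
	}()
	w.WriteHeader(http.StatusAccepted)
}

// probeCollect exercises the handler in-memory and returns the
// status code the client sees.
func probeCollect(payload string) int {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest("POST", "/collect", strings.NewReader(payload))
	collectHandler(rec, req)
	return rec.Code
}

func main() {
	fmt.Println(probeCollect("event=tap")) // prints: 202
}
```

From the device's point of view this is indistinguishable from first-party traffic, which is exactly why hosts-file blocking can't touch it.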

But what about these "recordings" in a web app that Apple has no control over? As far as I can tell, grabbing a screenshot of a canvas element and sending it to a server is still doable without being reprimanded by Apple.

Is this not hypocritical? I assume Apple does similar (analytics)?

Recording people's screens without disclosing that they're doing so? No, not really.

Disclose, like in the small text that nobody reads?

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact