Zoom iOS app sends data to Facebook even if you don’t have a Facebook account (vice.com)
1433 points by softwaredoug on March 26, 2020 | 360 comments

> There is nothing in the privacy policy that addresses [that data is being sent to Facebook]

> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user's device which companies can use to target a user with advertisements

So Zoom is sending the fingerprints of mobile users to Facebook, which helps Facebook better track users across the internet. Not only that, Zoom is not disclosing this (though it isn't like people read the TOS and would be aware of it anyway).
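For concreteness, here is a sketch (field names are hypothetical; only the kinds of values are taken from the article) of the fingerprint-style payload described above, sent when the app opens:

```python
# Hypothetical sketch of the analytics payload the article describes:
# sent on app open, with no Facebook account required.
payload = {
    "event": "app_open",
    "device_model": "iPhone11,2",       # device model
    "timezone": "America/New_York",     # time zone
    "city": "New York",                 # city the user connects from
    "carrier": "T-Mobile",              # phone carrier
    "idfa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # unique advertiser ID
}

# No single field names the user, but together they form a stable
# fingerprint that an ad network can join across apps via the IDFA.
assert "idfa" in payload and payload["event"] == "app_open"
```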

Can we just stop sending data everywhere? If you don't need it, don't gather it.

One thing I'll note here as a potential reason why they do this:

I just recently attempted to set up Facebook adverts for an app I developed. When it came time for me to set the metric up, I obviously chose "App Installs" as my metric to track.

To do this, Facebook told me I needed to install the Facebook SDK in my app to attribute an advert's conversion.

I didn't end up running the ad, but I can see why companies potentially have the SDK embedded in their apps to track ad-spend, hence the phoning-home to Facebook.

Edit: Just so people don’t have to dig to verify:


It is Zoom’s responsibility to list Facebook here:


Not sure why Vice called out the omission from the privacy policy – I’ve never seen one that actually lists all companies out by name. The GDPR mandates a list of subprocessors, though!

GitHub’s privacy policy is exceptional, particularly the section on sub-processors[1], where they list out every company, don’t have any sort of CYA language covering others that might not be listed, and make a commitment to update that page every time the sub-processors or a sub-processor’s function changes.

[1] https://help.github.com/en/github/site-policy/github-subproc...

Corrected link: https://help.github.com/en/github/site-policy/github-subproc... (original has an extra s on the end)

Thank you!

Their privacy policy even explicitly calls that out:

> Zoom, our third-party service providers, and advertising partners (e.g., Google Ads and Google Analytics) automatically collect some information about you when you use our Products, using methods such as cookies and tracking technologies (further described below).

Listing this on their webpage doesn’t solve the spying problem.

“Well, at least they told us about it” is absolutely no solution to “so many of our tools are spying on us”.

It is their right to run their business as they see fit, and it is our right not to use them. It is the deception that we no longer allow, under the GDPR.

Most present societies and thus governments do not believe that people should get to run their businesses however they see fit.

There are a thousand different ways in which companies are not presently allowed to be operated, even if the owner sees fit to do so: worker safety, discrimination, collusion, etc.

I’m not saying these things are good or bad, or that there should be more or less of them. I’m saying that Zoom’s (and Facebook’s) spying is a problem, remains a problem, and is not solved by them putting some text on their webpage.

> It is their right to run their business as they see fit

It absolutely isn't. We want to use services but we do not want to be subjected to surveillance capitalism. Privacy is more important than some business and if it can't operate without being invasive it should fail. If they insist on being hostile and tracking people despite their wishes, people will use the product anyway and they will find a way to break the tracking. They will delete the surveillance code, use network filters, send fake data, whatever it takes to stop the surveillance.

> It is the deception we do not allow (any more) with GDPR.

That law also says users have the right to object to what the service is doing with their data and that they must stop doing it if the objection is valid. Almost all data collection taking place today is objectionable, especially those related to marketing and advertisements.

Collecting data on people is not a god-given right. It is a privilege and it can be revoked. People trusted companies with that power because they thought companies would act in their best interests but they were exploited instead. Now it's time to take it away.

> We want to use services but we do not want to be subjected to surveillance capitalism.

Who is the “we” you are referring to? I think most people care so little about this that they don’t even bother to skim the TOS before using a service.

To add to your point (with which I fully agree) (and I am surprised by the downvotes - I don't care for the karma, but it looks like I didn't write it clearly enough and/or people misunderstood my comment 2-3 levels up).

I am not touching the "add value" bit, I will stick to the ethics. Some businesses are (imho) scum (Facebook, Google, Zoom, every tracker, every data aggregator, etc.)

They may uphold the law or they may ignore the law. Since we should not burn their buildings down in retribution, we can sue them (or whatever the local privacy laws allow), and we can stop giving them money (our free/paid information). But it is up to us. Zoom clearly needs a phat penalty from the EU to get their stuff straight. Then every EU user should bombard both Zoom and FB with questions on their data practices and the "right to be forgotten". Then we should bury them in the sand and move to other service providers.

I am adamant on the issue of privacy and the reason for that is that these scum KNOW they are violating our rights, and the voice in their minds tells them "screw them, £€¥$ goes first".

Caring about this shouldn't be necessary. When people sign up for a service, they shouldn't have to stop everything and wonder about the many, many ways their personal information could be abused. Nor should they have to scrutinize the terms of every single service out there just to know exactly how they're being exploited without being able to do anything about it. This constant paranoia about everything is not a good way to live.

> I just recently attempted to set up Facebook adverts for an app I developed.

Stop giving Facebook your money. Surely there are other places to promote your app.

Sure, but they probably all have a worse ROI.

It's past time for us to get serious and apply HIPAA-style protection to the storage and transmission of PII, without exemptions.

Companies like Facebook will complain loudly that they won't be able to survive, but that is not our problem. If we pass legislation with teeth, they will need to change their business model. That would be the point.

Zoom allegedly has HIPAA-compliant BAAs with users in the health space. If any PHI is making it over to Facebook without a similar agreement from Facebook, Zoom is in for some trouble.

IP address, telephone number, city, and other identifying information are ALL considered PII.

I work (in an adjacent industry) with HIPAA-protected data, which is considered PII by virtue of knowing that Bob Smith is in the system. If they're under a BAA and sending that information to Facebook, they're in violation.

If one of my sub-processors did this my lawyer would be livid. But hey, it's Silicon Valley, don't harsh their buzz man.

How do you even report something this technical to the non-technical folks who oversee HIPAA? Would you have to do a case-study-style write-up?

As if it's a binary distinction, technical and non-technical. Unless they're Amish, I don't see why it can't be reported in plain terms.

There are plenty of technical people overseeing HIPAA.

I am working on adding a Zoom client to a medical device right now :)

People aren't allowed to go through my mailbox and sell that information. I don't see how this is any different.

They are allowed to look at you and take notes and sell them.

Stalking is considered illegal in most states and countries.

Frankly I can't figure out why stalking a single person is illegal but stalking a billion people is considered good business.

I understand stalking involves more than taking notes and selling them, though. The other person has to feel threatened or such. Now maybe you can make the case that you feel threatened by Facebook, and maybe you can sue them individually (good luck), but I doubt you can make the case that most people feel this way.

> The other person has to feel threatened or such.

Nope, they just have to want it to stop :)

Depending on the specifics they may not be. I live in a Condo tower and my mailbox isn't visible from the street, so if you decided to take notes on me as I read my mail you'd be trespassing.

The specific scenario isn't the point - but the fact that a semi-obvious scenario could be incorrect sorta is. Regulations are complex and tech has a terrible history of playing fast and loose with regulations so it's not like an imposition of regulations would be inappropriate or unwarranted - there are good and bad apples, and the bad apples spoil the bunch.

Being in a public area in a private business doesn't afford you that privacy. The trespassing charge would be possible if a security guard asked the person to leave.

HIPAA ha...

Kaiser Permanente will contact Google Analytics and DoubleClick as you navigate their website, even when you are checking test results and contacting your doctor.

A user agent is not PII.

What about a Unique Advertiser Identifier? What about a UAI with a name, phone number, phone model, GPS coordinates, and software version?

Had a briefing with our company lawyer a while back: any information can be considered PII when paired with other information. E.g., that you bought 7 foos is not PII, but that you bought 7 foos on Tuesday might be, if that can then be looked up in the purchase history and you were the only one who bought 7 on Tuesday.

Does it have to be uniquely identifying to be PII? Or is there some minimum threshold for k-anonymity?

I don't know "how unique" it needs to be. I'll ask if I get the opportunity.

It just has to be correlatable, if I understood it correctly, but I don't know if it must be unique or not. To me it sounded like if there's only a small number of possible people it could identify (say 4) then it's potentially PII; however, I have no idea where the line is drawn. Clearly if k is 1, it's PII. If k is 2, it probably is too. If k is 1000, it's probably not. But at what point does it stop being PII? I have no idea!

The legal person basically said "it's complicated; anything can become PII when combined with something else, even if neither on its own is PII". The bottom line is: does some combination of information identify a person? Then it's PII (it's in the name, really!). Unfortunately, that means there is no clear, simple list of things that are or aren't PII; it really depends on each individual case.

Her advice was to think carefully about any data stored about or for users and to avoid storing it if possible; if not possible, to think carefully about whether or not it could identify a user in some way. It's not a very satisfying answer, I know. It also doesn't answer your question :(
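The k-anonymity question can be made concrete with a toy sketch (data invented for illustration, echoing the "7 foos on Tuesday" example earlier in the thread): group records by their quasi-identifier values and take the smallest group size as k.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest number of records sharing any one combination of
    quasi-identifier values; k == 1 means someone is uniquely identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy purchase log: one person bought 7 foos on Tuesday, two bought 2.
purchases = [
    {"item": "foo", "qty": 7, "day": "Tue"},
    {"item": "foo", "qty": 2, "day": "Tue"},
    {"item": "foo", "qty": 2, "day": "Tue"},
]

# Quantity alone, combined with the day, uniquely pins down the 7-foo buyer.
print(k_anonymity(purchases, ["qty", "day"]))  # -> 1
print(k_anonymity(purchases, ["item"]))        # -> 3
```

This is only the mechanical part of the question; as the lawyer's answer suggests, where the legal line sits between k = 2 and k = 1000 is a judgment call, not a formula.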

it's time to stop using the f'ing apps mate

Most users of Zoom aren't choosing it; it is being chosen for them. Both of my children's schools (preschool and elementary) started using Zoom this week, so either they use Zoom or they do not get to participate.

Zoom has a web version.

Except Zoom web version doesn't work: the incoming/outgoing audio is garbled (tested with Chrome, they do not support Firefox). This is in part because they were obviously too good for WebRTC native audio and instead gutted ffmpeg and compiled it to WebAssembly (I wish I was kidding but I'm not: https://webrtchacks.com/zoom-avoids-using-webrtc/).

Moreover, Zoom has a history of RCEs (leaving an active web server after you uninstall Zoom? so that a website can reinstall Zoom without any user interaction? why not! https://medium.com/bugbountywriteup/zoom-zero-day-4-million-...), and anti-privacy behavior: meeting host gets a copy of all private messages sent between participants (there is no notice of this; https://twitter.com/rcalo/status/1237957509324746752); host can monitor if your Zoom window is active (https://twitter.com/zoom_us/status/1241768006327336963); and Zoom has audio fingerprint tracing (so if you get a leaked recording Zoom can blame a particular participant: https://venturebeat.com/2019/01/22/zoom-is-bringing-ultrason...). Running it under strace reveals it is fingerprinting your device as well (idk if that gets sent anywhere but iOS app sends stuff to Facebook...).

Zoom is creepy and should not be used. I keep a separate VM for it, as it clearly can not be trusted.

> This is in part because they were obviously too good for WebRTC native audio and instead gutted ffmpeg and compiled it to WebAssembly (I wish I was kidding but I'm not: https://webrtchacks.com/zoom-avoids-using-webrtc/).

I'm not a Zoom apologist (I am also deeply creeped out by the fetish for covert data exfiltration in a platform that is so widely used in these quarantine days), but, as far as the tech goes, the story you linked seems to say that they do use WebRTC as of September 2019.

Sounds like this would be a good compilation for a complete story instead of individual bits.

Agreed! I've been meaning to write something like that, but a complete story probably needs second-sourcing all the bits, experiments, etc. Regrettably, the trend is clear.

How do you use it? I had to join a Zoom meeting and tried to use the webpage first but it tried to make me install a client. After the MacOS local web server debacle I will never do that. I figured the safest thing was to use the iOS app but wasn’t thrilled at the idea. I assume if there actually is a web client the process to access it is extremely user hostile. Is it a hidden link or something?

Zoom is obviously an extremely scummy company and I’d rather stay away from it entirely. Unfortunately they must be dumping cash into marketing because it’s now the biggest thing in video conferencing. It’s a shame, they now seem to have network effect going for them.

There is some dark pattern where you need to refuse to install the app (or make it seem like the installation didn't work) and it will eventually give you a link to the web client. I'm not sure whether there is a reproducible way to get there.

Zoom links are of the form zoom.us/j/IDENTIFIER. Change the "/j/" to "/wc/join/" to get the web client.

This did work for me but I had to install Chrome, it told me to install a "modern, updated browser" in the most recent Firefox and Safari. I assume they are abusing some anti-user capability in Chrome but I trust it more than any Zoom client.

Here are the instructions: https://support.zoom.us/hc/en-us/articles/115005666383-Show-...

Click "no" on the invite when it asks to open Zoom (instant if you don't have it installed), and underneath there is a link that goes to the web version.

They're not sending your name and address. They're sending the IDFA, the advertising identifier of your device, to Facebook. The fact that Facebook can link that identifier to your identity is on YOU. You logged into Facebook in their app to make that connection.

They still record the data and maintain a profile on me even if I don't have an account and have never used their app.

How is that my fault?

This uses unnecessarily accusatory wording. But it is also both unhelpful and just flat-out wrong: Facebook gets data fed from a lot of sources, and it can start stitching that data into a picture of you without you ever creating a Facebook account.

This is simply false. Facebook creates phantom profiles to track users that don't even have a Facebook account.

Note: the reason HIPAA exists has nothing to do with protecting individuals; it was drafted to protect the insurance companies. It is absolutely not that health data is somehow "private" enough to warrant some special protection for the persons themselves.

I never heard this before. Can you explain how it protects insurance companies?

Insurance companies have incentives to get better data than their competitors, so they can offer less expensive coverage to lower risk people and leave the competing insurance companies with all the higher risk people. Until the competitors do the same thing. Then you're all just offering less expensive coverage to most of your customers and making less money. (That also tends to cause trouble for higher risk patients because insurance companies could more accurately predict ahead of time that they'll incur high costs and then charge them unaffordable premiums.)

If the health data they would otherwise use for that is "private" then that isn't allowed, so providing insurance is riskier, will have fewer competitors, and commands higher premiums.

Wikipedia claims:

It was created primarily to modernize the flow of healthcare information, stipulate how Personally Identifiable Information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, and address limitations on healthcare insurance coverage.

Is the "protected from fraud and theft" part somehow incorrect?


You're now talking about a different section of the same act. There are some separate provisions in there to fight insurance fraud, but that doesn't really have a lot to do with privacy for medical records, except to the extent that having somebody else's medical records might make it easier to commit insurance fraud against their insurance policy.

The quote explicitly says that the act covers “how PII ... should be protected from fraud and theft.” HIPAA is ostensibly about protecting patient privacy and data. It’s certainly possible that the insurance industry went along with it because they figured it would help them keep their patient data proprietary, but that most certainly wasn’t the goal of the legislation.

What do you think "fraud and theft" mean in this context? Sick people aren't great fraud targets, they're frequently unable to work and have already lost what money they had to medical bills. The "fraud" is insurance fraud, for which the PII would be things like your name and policy number (i.e. what's needed to file a fraudulent claim against your policy) rather than your actual medical records. And the parties most interested in having access your medical records are the insurance companies themselves, as already mentioned. There is a fairly large financial incentive for a shady insurance company to use patient medical records to poach low risk patients.

In your proposed scenario, I find it hard to believe the insurance companies wouldn't form a cartel, keep the low prices for the low-risk customers, and price out the high-risk customers. Somehow I don't believe the explanation.

Forming a cartel is a violation of antitrust laws, and it requires that no one form an insurance company that defects from the cartel in order to make higher profits for themselves. Passing a law against the practice requires neither.

Quick source for the uninitiated (quoting the HIPAA Journal):

"...objectives of the Act were to combat waste, fraud and abuse in health insurance..."

I disagree with this — more regulation will make it harder to innovate.

For example, I’ve met several founders who wanted to enable tele-medicine years ago but decided against it because “the lawyers cost more than the engineers”, and walking on eggshells destroys morale and iteration speed.

I’m not arguing to de-regulate heath data — my point is that we should selectively apply regulation.

It’s likely a great thing to regulate self-driving cars. But please keep the lawyers away from my niche online forums, 3rd-party clients for social apps, blogs, video games, calculators, etc.

If a company can't 'innovate' without sharing users' data with third parties or treating it recklessly through lax security (or uploading database dumps to publicly-accessible S3 buckets) then that company doesn't deserve to be in business.

It doesn't take a suite of lawyers to enforce that, either. Health care is gigantic mess of bullshit in the US especially, because of the multiple different 'stakeholders' - customers, insurance companies, brokers, "networks", hospitals, doctors, etc., and every mistake is a gigantic lawsuit waiting to happen. It's a disaster however you cut it.

As for personal data for some arbitrary startup, any argument that "innovation" depends on being able to be careless or cavalier with that data is just ridiculous. Be careful with it. Store it properly. Only collect what you need, and delete the rest. Expunge data you no longer need. Never send it to any third party without asking the user, and provide clear information about where and with whom the data is processed and stored at rest.

There, now you're being careful with user data and you can still "innovate" decent products, as long as your business model isn't user-hostile from the start.

I think you've missed your parent's point.

The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.

If you want to deal with medical data of any kind, you need a lawyer. Full stop. It doesn't matter how good your intentions are, or how many "best practice" blog posts you follow. You need to hire a lawyer, and lawyers are incredibly expensive.

> Be careful with it. Store it properly. Only collect what you need, and delete the rest.

This is great advice, but that's not how laws work. Congress won't pass a law that says "store it properly". They are going to pass a law that describes how you can and cannot store data in 600+ pages of legalese. And no matter how properly you think you're doing things, you have to have a lawyer to know you're actually doing it properly.

Said another way: regulation always adds cost and barriers to entry. These affect the "good" business just as much as the "bad" business.

Not every business has to be viable for a startup. I'd rather a company that can't afford a single lawyer not have access to my personal information. If that means pricing them out of it through regulation, then so be it.

That's a perfectly reasonable position. If you have considered the pros and cons and decided one outweighs the other, that's fine.

My parent was not doing that, and instead flippantly remarked that you should just store data correctly and everything is fine.

My point is that it is important to consider the implications of government action, because they are always numerous.

Then don't use the startup? Not everyone has the same calculus as you. You don't need regulation in order for you to not use a product.

Regulation exists to protect citizens at scale. “Don’t use the business” isn’t how we’ve built society, rightfully so. If you believe the regulation to be onerous, fix it.

One is not entitled to do whatever one wants to generate a profit, at the detriment to uneducated or unsophisticated citizens, or society as a whole.

> If you believe the regulation to be onerous, fix it.

Well, that's what they're doing by not wanting it.

We are trading the personal information of billions of people for the ability of tech startups to iterate quickly, startups that will for the most part decide on a freemium business model revolving around mining and selling private data.

And we may be trading away our freedom of choice in the future, ending up stuck with a monopoly.

Nobody talks about regulated industries with duopolies or monopolies that everyone has to deal with. The tech industry is exotic, ain't it?

Big companies will still find a way to track you. That won't change. You can pull up a list of all the privacy-focused laws released recently and still see Facebook and all their products working fine, but you never hear about someone who wanted to bootstrap an idea and couldn't invest much upfront to deal with a slow, expensive legal system.

We don't need more regulations. We need selective punishments proportional to the damage and the company's presence, not a lame fine that is not proportional to what companies are profiting from it. And if you know anything, Facebook is the one lobbying for privacy these days. They are pushing for some of the requirements they are already compliant with to be put into law.

> The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.

Then the way to do this is to simplify laws and their interpretation. A company shouldn't need a large legal team just to figure out whether it is doing something legal. It sounds ridiculous when you think about it: you have to hire a bunch of lawyers to figure out whether you are a criminal. That clearly means things are too complex. I get that there are places this should apply to, but not small businesses and startups.

You can have regulation that is both easy to understand and effective. There is also letter and spirit of the law. We should never let the letter hinder the spirit.

I completely agree with you. The legal system is entirely out of reach for the average citizen, and this is something we should fix.

However, us wanting things to be a certain way doesn't change how things are. If Congress passed a "Data Protection Act" it would be indecipherable, full of technical illiteracy, and heavily influenced by the richest lobbyists (Facebook and Amazon, anyone?).

This is my objection. I would love for a real data protection act to be legislated. But Congress has its own agenda and ineptitudes. Do you really trust the people who wrote the Patriot act to protect your sensitive information?

That’s bullshit. The federal government is able to produce a lot of useful technical regulation and guidance.

Hell the whole infosec policy framework used everywhere is built off of NIST 800-53.

I’m pretty sure NIST has more engineers than politicians. The same cannot be said of Congress.

Congress would write a law with general objectives, and leave the regulatory work to an exec branch agency. The regulations generally either reference or draw inspiration from NIST.

HHS uses NIST material to guide HIPAA. The IRS is more prescriptive, but everything in IRS 1075 is still based on NIST material.

You have to separate the political puffery from reality. The Federal government is very good at establishing effective regulatory frameworks. They fall down with the long-term maintenance of regulations, as it's often difficult to keep the legal mandate up to date.

If you don't store any data you won't need any lawyers. You don't need to store a single byte of data on your users or customers to provide a service or software using that data.

> If you don't store any data you won't need any lawyers.

Wrong. HIPAA applies to any business that transmits and/or has access to PHI. You don't need to be storing data on your own hard drives to be subject to these laws.

This is exactly my point. You are thinking like an engineer, and Congress is not. You cannot assume anything. You need to hire a lawyer, or you are opening yourself up to serious liability.

I worded that poorly. How about this: if you don't own, manage, solicit, or control any servers having access to PHI or PII, you don't have any risk of being liable.

Put all of that on the client, do your best to protect it, but ultimately make it the client's responsibility.

I still haven't seen any lawsuits or regulation targeting software in that sense, apart from DRM.

There is no distinction between client vs server when it comes to the law. The same organization created and operates both and is liable as a data processor in both situations.

This is again the difference between engineer vs policymaker.

Do you have a source to back that up?

As far as I understand it, Microsoft has no responsibility for PII in e-mails going through the Outlook e-mail client. Maybe the US is different, but at least in Europe, the GDPR is clear that software vendors have no responsibility for data being processed locally when the software is deployed and run by others.

Oracle has no liability for the data stored in their database.

If you have no way of touching the data, and your servers (self-managed or otherwise) aren't touching the data in any form, you have no legal liabilities with respect to it (apart from contractual agreements, of course).

Or am I missing something?

>The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.

I think pricing out the odd well-intentioned business person is a good tradeoff for avoiding the "move-fast and break things" snake-oil salesmen.

>Said another way: regulation always adds cost and barriers to entry.

And saves money and harm when things go bad.

This is innovation in the wrong direction: against our privacy and consent, for the benefit of a company to which I may not want to give this data and which didn't clarify that's what was happening. It's corporate malfeasance, and if you think it should be unrestricted "innovation" then you're probably on the creepy side of the Big Brother-like data-gathering monster.

>I’ve met several founders who wanted to enable tele-medicine years ago but decided against it because “the lawyers cost more than the engineers”, and walking-on-eggshells destroys morale & iteration speed.

Thankfully so - I wouldn't want my telemedicine to rely on, e.g., some random unsecured MongoDB instance.

Why disagree with this? It will actually cause innovation. How? If someone is able to figure out a way to navigate the laws easily, they will then sell their solution as a service. So when an FB, Goog, or MS figures it out, they will add it to their stuff. A group like the EFF could also make a tool to verify compliance, since their existing tools would then just be checking the server instead of checking each thing individually, like Privacy Badger and their other apps do.

It was really easy to innovate the car (look, I made this out of hard, pointy steel; who cares if anyone else dies) until you actually had to make them safe. Do you think society would be better off going back to the old methods? Innovation is for a purpose; a lot of what we see now seems to be innovation for innovation's sake, then sold to someone who cares. And do you really think people won't invest if their current methods don't work? Startup culture wouldn't die, just the system of having people who don't care about privacy not actually think things through ethically first.

Oh, BS. My wife is a medical practice management exec, and I do some IT consulting in the space. There is absolutely no end of telemedicine solutions, and there have been forever (I implemented an early one in the mid-90s). There is absolutely no end of new and innovative health care startups. Those 'founders' you talked to are the problem, not the solution: people who want to bitch about being required to do the right thing and pretend they're doing an 'Atlas Shrugged' by not putting in any effort, rather than innovate around doing the right thing.

The regulations you likely hit when investigating tele-medicine are probably more closely related to the reason nearly no site will let anyone create an account unless they check a box confirming that they're over 13: COPPA[1], a pretty intense set of rules that, IMO, are way waaay overkill.

That said, regulations might make things difficult for companies, but they're there because companies have abused data in the past at the expense of customers. So, I guess, too bad - regulations benefit me. They don't strangle businesses; they impede them - and it's an impediment that can be overcome if the business is useful enough to people.

1. https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Pr...

Necessity is the mother of invention.

If the only way your business can survive is carte-blanche regulations around privacy and security, and you fall over the instant that's threatened, one: maybe your business doesn't deserve to survive, and two: maybe you didn't build a very good business.

Niche online forums and all the examples you list there survived (and indeed thrived) in the days before rampant data collection, I have no doubt they'd evolve and survive once again.

It makes it harder to innovate in anti-social directions, and may catch some useful things in the crossfire. But on balance, slowing the rate of "innovation" for these companies that want you to shovel all data everywhere into their gaping maw? Not a problem for me.

This nonchalant attitude around "guilty until proven innocent" regulation is precisely how we wind up with America's alphabet soup of monopoly building bureaucracy.

The ethical way to preserve privacy is to change minds in a way that changes actions. Law is the threat of violent force, and should be wielded only with deep forethought about the underlying moral and practical realities.

You can develop all of these services, just don't hoard data. With no centralized serverside database you have no liabilities. P2P is a solved problem.

Better implementations of that is exactly the kind of innovation we need now.

>more regulation will make it harder to innovate.

I'm seeing prominent VCs espousing this all over social media the past few weeks. Apparently there are even some advising Jared Kushner.

Don't let a good crisis go to waste.

When it comes to surveillance capitalism, making it harder to innovate is the point. Innovation is not intrinsically good.

Sounds like a false dichotomy: some regulations have downsides, therefore all regulation is bad. I think you can reform the regulations that block health care innovation and still introduce legislation to stop Facebook from doing mass surveillance without opt-in.

Similar to how we have organizations which can certify whether produce is organic or not, we need organizations which can certify whether apps and websites are certified ad tracking free.

Ironically, those orgs have themselves been accused of being pay-to-play -- if you give them enough money they will certify you organic. Since it's industry run, there is no one checking that the organic certifiers are legit.

It's turtles all the way down.

The only legit solution is the government, because they are the only ones without a profit motive in the whole system (although they can be bribed, but that's a whole other can of worms).

Yes, it's more that this happens in the absence of a government initiative. IIRC, what happened with organics is that it started as pay-to-play, and then the USDA came along, took the best practices from the pay-to-play initiatives, and introduced USDA Organic.

> The only legit solution is the government...

Which government? I mean, I'm certainly not going to be very impressed if the (for example) US government releases a statement saying "Yeah guys, app Foo is totally legit and doesn't track you or leak any of your private information".

They may be without a profit motive but individual bureaucrats and legislators certainly have a power motive which is often just as, if not more, insidious.

Which is why campaign finance reform is essential.

"The only legit solution is the government, because they are the only ones without a profit motive in the whole system"

LOL. As you even parenthetically recognize, government doesn't magically solve humanity's capacity for wickedness.

The true "legit" solution is institutional legitimacy, which is fundamentally founded in a devotion to rational integrity. Genuine legitimacy can only be earned by sustained integrity; it's not minted with the legislator's pen.

I agree that you're right to be concerned with the bad incentives of pay for play, but you appear to be missing the unintended consequences of imposing a regulatory monopoly.

We (Appfigures) are scanning apps for SDKs so we know which tracking, analytics, and location SDKs apps are using.

We’re working on a way to make that data accessible to app developers so they can learn from competitors, but having a simple way to tell if an app tracks you that doesn’t require signing up or anything like that could be an interesting idea.

Would you actually check an app before downloading/using it?
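For a rough idea of how SDK scanning like this works: scanners such as Exodus Privacy match class names found in an app's bytecode against a database of known tracker code signatures. Here's a toy sketch of that matching step in Python; the signature list is illustrative, not any scanner's actual database.

```python
# Toy signature-based tracker detection: match class names extracted
# from a decompiled app against known tracker code-signature prefixes.
# These signatures are illustrative examples only.
TRACKER_SIGNATURES = {
    "com.facebook.appevents": "Facebook Analytics",
    "com.facebook.login": "Facebook Login",
    "com.google.android.gms.analytics": "Google Analytics",
    "com.crashlytics": "Crashlytics",
}

def detect_trackers(class_names):
    """Return the set of tracker names whose signature prefix
    matches any class name found in the app."""
    found = set()
    for cls in class_names:
        for prefix, tracker in TRACKER_SIGNATURES.items():
            if cls.startswith(prefix):
                found.add(tracker)
    return found

# Class names as they might appear in an APK's dex files.
app_classes = [
    "com.example.myapp.MainActivity",
    "com.facebook.appevents.AppEventsLogger",
    "com.crashlytics.android.Crashlytics",
]
print(sorted(detect_trackers(app_classes)))
# ['Crashlytics', 'Facebook Analytics']
```

The real work, of course, is in extracting the class list from the app package and keeping the signature database current; the matching itself is the easy part.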

I am very shy of downloading apps because I expect most of them to stalk me or rat me out to Facebook and don't have the time to check for myself (by using Charles Proxy or Burp Suite).

I would definitely appreciate it if there was a service that allows me to quickly check if an app is safe.

You should try my appcheckr app. Well yes, it does generate a bit of network traffic, it's, err, updating.

There is, Exodus Privacy (see my reply on the parent comment)


There is another service that does this for Android apps:


It is also built into the Aurora Store (an alternative Google Play Store client for Android).

The key here would be for you to become a search engine of apps that are "3rd-party tracking free", and then build trust with customers who value that and become a destination for app discovery.

For developers, additional traffic becomes an incentive to comply.

Ideally the certification would be made visible by the app creator when they upload the app to the Apple/Android App Store, so people downloading it would be able to see the designation.

The problem with that is: do you really believe the developer? It needs to be a 3rd party. Ideally the store itself, but there’s no incentive for the big stores to do that.

Personally, I'd want to have the source and run that software myself - how else could I trust that the app developer didn't bribe you or find an easy way to subvert your detection?

Valid point. Getting something you can scan for installed SDKs is pretty darn hard in many cases, so I doubt most people would want to set up and maintain something to do that for a few apps...

Unfortunately, this is one case where being open-source would probably make the scanner easier to game.

This is an interesting point. Either the government would then need businesses to disclose their "rating" (similar to movies) or businesses could opt in to show a seal (like Fairtrade bananas). The problem is, if there aren't enough (popular) sites with the seal, then the value of this declaration is lost.

Yeah. For many people I interact with Facebook, and to a lesser extent Google, are the internet. So they’d never see the seal unless Google put the sites in their top three results and didn’t scrape the relevant info.

Or someone posted a meme with the seal in it on Facebook.

For apps at least, you could add the branding seal to the Appstore metadata that you upload for your app to the Apple/Android App Store.

For websites, agreed, Google would have to display it, or it'd have to come after the initial page load - which means you'd be tracked the first time you visited the site and would only know not to visit a second time.

Perhaps for websites, a browser add-on that checked a certified ad tracking free database registry could be used.
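A sketch of how that add-on lookup might work (the registry entries and certification scheme here are entirely hypothetical): the extension keeps a local copy of the registry and checks each hostname, including its parent domains, before loading the page.

```python
# Hypothetical "certified tracking-free" registry lookup, as a browser
# add-on or local proxy might do it. Registry contents are invented.
CERTIFIED_TRACKING_FREE = {
    "example.org",
    "forum.example.net",
}

def is_certified(hostname):
    """True if the hostname, or any parent domain, is in the registry."""
    parts = hostname.split(".")
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in CERTIFIED_TRACKING_FREE:
            return True
    return False

print(is_certified("blog.example.org"))    # True (parent domain certified)
print(is_certified("tracker.example.com")) # False
```

In practice the add-on would fetch and cache the registry from whoever runs the certification, which is exactly where the trust question discussed above comes back in.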

Interesting - the movie ratings is a good analogy - as far as I know the MPAA handles it[1].

Maybe an existing organization in this space, such as the EFF (eff.org), with name recognition could come up with a certification methodology and branding seal for websites and apps.

[1] https://en.wikipedia.org/wiki/Motion_Picture_Association_of_...

Exodus Privacy will tell you what trackers are present in an Android app. They don't certify apps as "ad tracking free" but leave that decision up to you. The only certification of being ad-free is probably to be open source as well, in which case there's a large selection of apps in F-Droid which meet that requirement.


1) Grab Netguard,

2) Donate to Marcel for key to get granularity

3) Turn off ALL apps, so when they launch they have no wan access.

4) Open each app one by one, noting what links and phoning home the app tries to perform

5) Watch in disgust as you begin to realize all of your efforts to secure your privacy up to this point has been in vain...antacids help.

6) Take control by granularly killing the links/calls in apps you decide to keep. Spotify calls FB, but you now can fix that. This is empowering but only a start.

r/privacy will make you paranoid, but it's a good place to get an idea of what you are up against. I read anonymously, join to post, and after some time delete my content; been doing that for decades now.

Looks promising, though it doesn't seem to have caught the FB connection in Zoom[1] (though given that Zoom has the Internet Access permission I suppose it can really send data anywhere?) They do offer an on-device version, which is available via F-droid[2].

1: https://reports.exodus-privacy.eu.org/en/reports/us.zoom.vid...

2: https://f-droid.org/en/packages/com.oF2pks.classyshark3xodus...

The good thing is that this is illegal in the EU, so they can just be sued for problematic amounts of money.

They can’t be sued. They can be reported to a proper privacy authority, who can choose to do something about it or not.

And as of right now (2 years after the law went into effect) privacy regulators still don't care and nothing suggests things will change. *

I've put lots of time into raising complaints with the ICO (the privacy regulator in the UK) and even for the few complaints that were upheld (they try to find every single possible excuse not to), it still didn't have any effect and I have yet to see a proper investigation into the company or a fine.

* please don't bring up the 50M Google fine about them forcing the creation of an account while setting up an Android device. Not only is it pocket money to them (aka the cost of doing business) but this ignores the much larger problem like Google Analytics stalking everyone's behaviour across the entire Internet.

GDPR Article 79 [1]:

> 1. Without prejudice to any available administrative or non-judicial remedy, including the right to lodge a complaint with a supervisory authority pursuant to Article 77, each data subject shall have the right to an effective judicial remedy where he or she considers that his or her rights under this Regulation have been infringed as a result of the processing of his or her personal data in non-compliance with this Regulation.

> 2. Proceedings against a controller or a processor shall be brought before the courts of the Member State where the controller or processor has an establishment. Alternatively, such proceedings may be brought before the courts of the Member State where the data subject has his or her habitual residence, unless the controller or processor is a public authority of a Member State acting in the exercise of its public powers.

That sounds like a lawsuit to me.

[1] https://gdpr-info.eu/art-79-gdpr/

See article 78. There is an escalation path.

The way it is written makes it seem like 79 is independent of 77 and 78. You can pursue a claim through a supervisory authority (77) and if you are not happy with the result go to court (78), or you can go to court directly against the processor/controller (79), or maybe even do both.

79 specifically says that pursuing a judicial remedy under it is without prejudice to any available administrative or non-judicial remedy, including the right to lodge an Article 77 complaint. If this was meant as something that has to come after not getting satisfaction from an Article 77 complaint, it would make no sense to say it is without prejudice to the right to lodge an Article 77 complaint, since that complaint would already have had to be lodged before getting here.

This also fits with what I saw on assorted EU and international law firm blogs, back when they were all writing articles on what GDPR would mean.

:shrug: I’m not a GDPR lawyer and further most of it doesn’t have precedent yet so we aren’t sure how its going to fall out.

I did work on a GDPR compliance effort in an industry where it is a big deal (real-time bidding ads), and our lawyers, while hedging, were not at all worried about random lawsuits. Their opinion was that we would easily be able to submit to the courts that the complaints needed to go through the authorities first.

I did see one legal firm blog that said they expected a lot of lawsuits from individuals, but most blogs and reports I saw seemed to think there won't be many.

If there are any, they probably won't be a big deal for the defendants. An individual lawsuit is limited to compensation for the damages suffered. In most cases, that just won't be very much, and so probably won't be worth the time and effort to pursue.

Only supervisory authorities can impose punitive measures such as fines based on revenue. Those are the only things likely to actually make a difference.

Also I get the impression that European regulators are more responsive than US regulators. Europeans report things to regulators first and expect them to be taken care of. Contrast that to Americans who are far more likely to view the regulators as ineffective and turn first to a lawsuit.

Is it illegal in California?

under CCPA, yes.

Anyone know the best way to get started with suing them? Or should I assume that's already being done?

If you are in the EU contact the government agency which handles GDPR complaints.

Get a lawyer. Open the yellow pages and start calling.

Can we stop sharing data with facebook and google or literally ANY third party unless absolutely necessary? And if it is absolutely necessary, let users opt out and tell them what feature they lose?????

Zoom is quickly gaining a reputation for doing the wrong thing anytime they have a choice between right and wrong.

I have been using Android for the last few years (with NoRoot Firewall installed) and before that I was using iPhone (jailbroken with FirewallIP installed).

95% of the applications installed (on both Android & iPhone) "talk"/send a ping to Facebook when they open. That includes every airline app, Spotify, anything you can imagine. The only "clean" apps I have found are Amazon, eBay, Dropbox, Signal, Telegram, and Skype.

Anyone using Android, do yourselves the favor: install (free) NoRoot Firewall. Once you "Start" the firewall, check the "Logs" and "Pending" tabs and you will be surprised at what your apps are doing, especially if you leave them running in the background. I also use it to block trackers, ads, FB, etc.

> Can we just stop sending data everywhere? If you don't need it, don't gather it.

But...but... consumer data is a gold mine! We have to sell it! Who will think of the shareholders?

>Can we just stop sending data everywhere? If you don't need it, don't gather it.

We're not gonna get there with asking nicely. We need legislation, with teeth (read, fines worth 6 months or 1 year global turnover)

Mobile app developers need to gather install/purchase data so they can optimize user acquisition spend. Most are not using PII so I don’t see any problem.

My opinion about this is that many apps and websites don't have a real product offering. FB, Dropbox, Twitter and others are more of a service offering than a product offering and hence everything is rooted around services and tracking services and, finally, just plain old tracking.

I don't see how selling a service is different from selling a product. If clients want it, they pay for it.

The real point that privacy advocates REFUSE to talk about is that a FREE service wins 99%+ market share over a PAID service. People do not want to pay for Facebook (greed), and Facebook does not want to make people pay (the market-share-first Silicon Valley mindset).

Well money for its thousand engineers has to be found ¯\_(ツ)_/¯

Now that the actors have collectively chosen the combo of free services in exchange for private data, a small minority wants free service in exchange for nothing. How does that work - freemium, government subsidies?

So just to give you an example of what would be a product. Spotify is half way between service and product.

What would make it further closer to a product would be (my suggestion):

1. No free version

2. And while you are at it, don't pool views. If you listen only to "Baby" by Bieber and only that once (or many times) in that month, then Bieber et al. gets your full $10. Edit: minus fees.

They still do services, but now they are not "organised around services".

The kindle app contacts facebook too. All KINDS of webpages and apps contact facebook. It's a mess that I only figure legislation can help correct.

You think Zoom is doing it for fun? This is a revenue source right?

Why do comments suggesting that data collection is paid for by Facebook/google get downvoted?

Serious question. This wasn’t my comment but I think it’s true and I’ve said the same previously and was downvoted too. Is it because it’s obvious and well known? Did I miss the memo too?

If Facebook is encouraging the capture and transmission of this data and paying for it, does this mean that Facebook has indemnified Zoom?

Those comments are downvoted because there is no evidence that anyone is paying for data from Zoom. IMO, speculation without support should be downvoted.

Companies ultimately respond to monetary incentives. So directly or indirectly, they are giving your data to Facebook because it gets them paid.

That's... the point of a company? If this argument boils down to "Zoom to FB is bad because it gets Zoom paid directly or indirectly and FB=bad", then I think the same argument could apply to almost any business that advertises on FB (a large number of businesses).

If this isn't your argument, I apologize for misunderstanding. Could you elaborate a bit?

There is a difference between advertising on Facebook, and sending the data from your users to Facebook in order to advertise for cheaper.

But yes, any business that sends your data to Facebook for advertising purposes is bad and should not be used.

There seems to be plenty of circumstantial evidence. My hope was someone might know for sure, and speak out.

I mean for a group of people supposedly interested in innovation it seems a bit odd to be downvoted for a “what if...” question.

> Why do comments suggesting that data collection is paid for by Facebook/google get downvoted?

It's not that I disagree with the facts here. Obviously they're doing so for some sort of monetary incentive. That doesn't make it any less evil. I don't care if it's legal. Companies that depend on the sale of user data to be profitable are acting unethically and (imo) unsustainable as those profits may not be available to them in a future with better regulation.

I’m not suggesting it’s OK. I’m saying that given how widespread this is I’d like to understand why. Why did Wacom collect data? Why is zoom? It doesn’t make sense to me unless there is some external driver. If FB or G are paying for these privacy violations then I think that’s really bad too!

Considering this is the default for the Facebook SDK, it’s not obvious to me at all that they’re doing it for a monetary incentive (not that that makes this acceptable).

The GP is easy to interpret as justifying or excusing the behavior. Something being profitable doesn't suddenly make it OK.

If didn’t say or imply it was ok.

I ASKED if it was a revenue stream.

And I didn’t say or imply I downvoted you. Do you think people downvote for fun? It’s not like your question was rhetorical right

(Sorry I’m going to stop being obnoxious now, hope you understand why it’s easy to read your comment as such!)

I’ve long since given up trying to figure out why HN turns for or against certain things. The vote system here is one of the clearest examples of “let’s build a bubble” that I can think of.

There is no way Zoom is just sending secret data to FB out of the evilness (because it’s not goodness!) of their hearts for fun. So someone is trading value somewhere.

Obviously someone at Zoom thinks they get value out if it. But that's quite different from "this is a revenue source", which would suggest Facebook paying money for it. Your typical company will happily put a tracking SDK in so some marketers get some more graphs to look at in Facebook Analytics.

How do you know if it isn’t a revenue stream?

I guess I’m thinking that if word got out that Zoom (a publicly traded company) was doing this, it would be bad for business. So doing it just optionally, for fun, seems like all risk, and an appropriate reward would be a financial incentive.

Even if it isn't a revenue stream it can still have value to them, it doesn't have to be for "fun". This may simply be one of their ways of acquiring metrics.

As for it not appearing in their TOS, the "risk", it isn't unusual for the left hand to not know what the right is doing. Those drafting the legal text may not have known that this particular analytics system was being used.

There are many arguments for why this was overlooked, and why it exists, without it having to be direct revenue.


> If you don't need it, don't gather it.

What if they need it... to make more money?


Even if you don't log in. The Facebook SDK sends data back.

Hook your device up to an intercepting proxy and start up a few apps. 99% of them do this.

I really wish Apple would put an end to this.
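For anyone who wants to try the intercepting-proxy approach, a minimal mitmproxy addon can flag every request an app makes to a Facebook-owned host. The domain list below is a partial, illustrative set, and the script filename is made up.

```python
# Minimal mitmproxy addon sketch: load with `mitmproxy -s fb_spotter.py`
# while routing a phone's traffic through the proxy, and it logs any
# request whose host is (a subdomain of) a listed Facebook domain.
FACEBOOK_DOMAINS = ("facebook.com", "graph.facebook.com", "fbcdn.net")

def is_facebook_host(host):
    """True if host equals, or is a subdomain of, a listed domain."""
    return any(host == d or host.endswith("." + d) for d in FACEBOOK_DOMAINS)

def request(flow):
    # mitmproxy calls this hook for every intercepted HTTP request.
    host = flow.request.pretty_host
    if is_facebook_host(host):
        print(f"[FB] {flow.request.method} {flow.request.pretty_url}")

if __name__ == "__main__":
    # Quick sanity check of the matcher when run standalone.
    print(is_facebook_host("graph.facebook.com"))  # True
    print(is_facebook_host("api.example.com"))     # False
```

Tools like Charles Proxy or Burp Suite (mentioned above) do the same job interactively; a scripted matcher just makes the Facebook calls stand out in the noise.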

> I really wish Apple would put an end to this.

This is what really gives the lie to the whole walled garden thing. Its selling point is supposed to be Apple preventing things like this, but here we are in reality and they don't. Meanwhile they do, e.g., prevent Signal from replacing Apple's default app for SMS, which has no purpose other than to create barriers for cross-platform competitors to the default apps.

It's incredible that blocking changing the default app on iOS isn't illegal. It's obviously insanely anti-competitive.

It's like people forget "the bar" for anti-trust action used to be bundling the default app (web browser) with the OS (windows)

You're forgetting the 90% market share part.

Apple has more than enough market share, especially in the US, to dictate what the market does. If it doesn't technically have a monopoly in raw numbers that doesn't mean it should be allowed to get away with some of this shit.

Apple doesn't have >90% market share for iOS app stores? Who is their competition? You can't run an Android app on iOS. There are no iOS apps on Amazon or Google Play. That makes it a separate market.

100% market share of your own market doesn't really count.

Saying "your own market" doesn't mean anything. Every monopolist has 100% of their own market. On the other hand, Clorox has 100% of the "Clorox bleach" market.

The question is whether your products and your supposed competitor's products are really the same market. In other words, are your customers the same individuals? Are the products substitutes for one another?

For Clorox bleach and other bleach they are. All bleach is the same, they're perfect substitutes, if the store is out of Clorox bleach and you buy some other bleach you'll never even know the difference.

For iOS app stores and Android app stores, they're completely different markets. The customers for iOS apps are people with iOS devices and the customers for Android apps are people with Android devices -- almost completely disjoint sets of people. If the iOS app store is down, it's infeasible for the average user to get an app from Google Play instead -- they would have to spend hundreds of dollars to buy an Android phone, then replace all of their other apps, just to substitute one app. It would be like saying AT&T didn't have a monopoly in 1970 because you could change carriers by moving to Canada. They're completely different markets.

There are horizontal monopolies and vertical monopolies, there is no law saying that a monopoly needs to have 90% market share.

Didn't they just get a huge fine for this behaviour?


Help change the law!

Well, the issue isn't just that Apple doesn't put an end to it; it's that Apple doesn't let users toggle this sort of thing off. I think most HNers would say the best solution is for Apple to require apps to get permission to do this sort of thing and let the user decide if they want to have data sent back to FB. There needs to be transparency and a chance to opt in or, at the least, to opt out.

Terrible take. They use it to apply many security and privacy policies! Just not the one we're talking about now.

Difficult to figure out how to actually do this, especially so without a crazy UX.

They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?

> They use it to apply many security and privacy policies!

Have you read the guidelines? Many words requiring you to use and not discourage users from using Apple's in-app purchasing system (which they get a large cut of), prohibiting you from trying to compete with the App Store or similar, prohibiting app-alternatives they don't control (like remote desktop into a cloud server), requiring "Sign in with Apple" if you use another third party sign in service and that sort of thing.

There is a privacy section, but the dirty secret is that they have very little power to enforce it against premeditated abuses. Companies add a feature to their app that gives them a pretext for uploading your data to their servers, and then there is no way for the user or Apple to verify what happens to it from there or determine actual compliance with the privacy policy.

So the policies with a compliance enforcement mechanism are the ones that benefit Apple and the ones that are supposed to benefit users in practice don't have one.

> Difficult to figure out how to actually do this, especially so without a crazy UX.

Actually not so hard in that specific case. They could run the app and not sign in with a Facebook account, and if it tries to contact Facebook servers anyway, reject it.

> They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?

They prohibit it on purpose. Signal isn't allowed to send and receive SMS on iOS:


> Apple does not allow other apps to replace the default SMS/messaging app.

The "Firefox" on iOS isn't even actually Firefox, it's required to use Apple's browser engine.

The App Store having additional restrictions doesn't have anything to do with the privacy aspect of the walled gardens being a "lie". You can go on a tirade about the app store's limitations if you want, but that's not relevant.

Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to? How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?

> that's not relevant.

It's the true motive for the "walled garden" -- it explains why it continues to exist even though the stated reasons why it exists don't pan out in practice.

> Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to?

Why is it allowed to connect to any services for no reason? If the app makes a network connection the developer should have to justify it by something other than enabling collection of user data.

> How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?

I feel confident that Apple has the resources to determine whether the servers every application using the Facebook SDK is contacting belong to Facebook.

Interesting. That clearly violates GDPR and CCPA if it extended to all digital options rather than just websites.

This was presented at the 36C3 last year: https://media.ccc.de/v/36c3-10693-no_body_s_business_but_min...

I assume a lot of people just “slap” the SDK in there and call it a day and it starts sending data.

I’m surprised Zoom is happy for Facebook to know exactly who its customers are. This is information that could be used against the company at some point, for example if FB made a video conferencing play.

Zoom uses Facebook login.

Zoom also buys ads on Facebook, so integrates the SDK for attribution.

A lot of companies had data sharing deals with Facebook, I assume they get something out of it themselves.

I don't think many people want to mix their professional with their personal (Facebook). Doesn't Facebook have this "Facebook at Work" thing that no one uses?

This! It isn't just Zoom. It's a known "feature" of the Facebook SDK.

I would wager some or most developers do not realize that when they add a login with Facebook button, they are also sending analytics data for _all_ users to Facebook. It is, imho, a very dark pattern.

Can Apple be held liable by the CCPA as they are a reviewer with a gating process for iOS apps, allowing the Facebook SDK to be included?

That would be quite a stretch of the CCPA. The businesses that are doing the collecting have to comply. In this case, Apple isn't collecting that info.

Likely. I'm simply curious whether questioning Apple about it is enough for them to nuke the Facebook SDK from further app inclusion at the review process. Privacy is their whole marketing schtick, no? [1] If it's a core value, stand up for it.

[1] https://www.apple.com/privacy/ ("Privacy is a fundamental human right. At Apple, it’s also one of our core values. Your devices are important to so many parts of your life. What you share from those experiences, and who you share it with, should be up to you. We design Apple products to protect your privacy and give you control over your information. It’s not always easy. But that’s the kind of innovation we believe in.")

It’s a facade. Apple has its own advertising ID and it’s on by default even though it’s quite clearly (imho) in violation of the GDPR.


This is equivalent to including Google Analytics or any 3P analytics platform.

Has anyone tried the Lockdown app? https://9to5mac.com/2019/07/24/lockdown-ios-firewall-open-so...

It looks promising, and has been posted to HN a few times, but nobody has commented on it. https://news.ycombinator.com/item?id=20519456

There's also Guardian Firewall, which is built (or at least was started) by a well-known iOS jailbreaker


Examples on Android for 'an intercepting proxy' are No Root Firewall, and even better, the paid/donate version of Netguard, which allows you to permanently kill these errant calls. Additionally, you can disable Google Services Framework with it too, if you'd like to try running your Android without Google tethered and watching, while you enjoy sweet, well-designed FOSS apps and services.

Can't apps start up the facebook SDK after someone has clicked the facebook login button? If someone has already logged in with facebook, set a flag in NSUserDefaults, and start the sdk then.

I'm pretty sure they can (though another comment states the opposite and might be right), and I'm definitely sure they can even implement the whole Facebook login through standard OAuth without using any of Facebook's code.

They choose not to because 1) it's easier and 2) Facebook gives them a "cut" of the revenue in the form of free analytics and insights into how their Facebook ads are performing.

The companies decided that the value they get out of Facebook ads & analytics are worth more than their customers' privacy. Until well-enforced regulations come into play (so not the GDPR) nothing will change.
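To make the "standard OAuth without Facebook's code" point concrete: Facebook login is ordinary OAuth 2.0, so an app can construct the authorize URL itself and handle the redirect server-side without ever loading the SDK. A rough Python sketch follows; the app ID, redirect URI, and state value are placeholders, and the exact endpoint path (including any API-version segment) should be checked against Facebook's current docs.

```python
# Sketch of Facebook login as plain OAuth 2.0: build the authorize URL
# yourself instead of embedding the Facebook SDK. Placeholder values
# throughout; verify the endpoint against current Facebook docs.
from urllib.parse import urlencode

def facebook_auth_url(app_id, redirect_uri, state):
    params = urlencode({
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "state": state,              # CSRF token you generate and later verify
        "response_type": "code",     # authorization-code flow
    })
    return f"https://www.facebook.com/dialog/oauth?{params}"

url = facebook_auth_url("123456", "https://myapp.example/callback", "xyz")
print(url)
```

The user is sent to that URL, logs in on Facebook's own pages, and your server exchanges the returned code for a token. Nothing in this flow requires Facebook code running inside your app, which is the whole point.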

Tons of websites do this as well

If they did, it would raise a ruckus.

There's a lot of developers that rely on these dependencies, and just blocking them would cause a major backlash.

I'm wondering why this post was so unpopular. It wasn't meant to be offensive or judgmental, in any way at all.

It's because Apple is one of the largest companies in the world and can survive a "backlash" much to the chagrin of small developers already subject to their mercurial policies. So the idea that that's what's stopping them doesn't really mesh with the reality that if it was, it would be stopping them from doing half the things they already do.

Banning 90% of the apps on the store until they remove the Facebook login SDK would cause a backlash much bigger than from a group of small developers.

It's not 90% of the apps on the store, and if it was then they should definitely ban it because there is no way 90% of the apps even need login accounts. The flashlight app still has no legitimate reason to access your contacts and location, much less Facebook.

Moreover, prohibiting this wouldn't actually remove the apps for more than five minutes because what would immediately follow is a version of the SDK that doesn't send any data to Facebook when you're not actually using a Facebook account.

This is a good point. I have been an Apple developer for 34 years.

If you know what that means, it means that I am a scarred, grizzled vet, with an eyepatch and a trick knee.

But do you have a gray beard and suspenders?

Nope. I do have grey hair, though.

Anyone who has been an Apple developer for more than a couple of decades, has had the experience of having the rug pulled out from under them by Apple.

That's one reason that I'm not hurrying to adopt SwiftUI. I really like it, and hope that it makes it (I despise Auto Layout), but I have also seen other promising tech smothered in the crib (OpenDoc? QuickDraw GX?).

Some people on Hacker News hate Facebook so much, that they downvote factual information if it doesn't fit with their views.

Apple is busy deleting localStorage data from PWA users...

For those who did not read about it: https://andregarzia.com/2020/03/private-client-side-only-pwa...

I would love if Apple started treating app analytics like they do my GPS location or my camera permissions. Basically, if apps want to send analytics, they must go through an iOS API.

Then as a user, I can inspect what apps are sending and how frequently. I should be able to block requests or set myself as anonymous. Or allow apps for certain amounts of time etc.

Thank you st3fan. I will reference your comment in the future.

Apple isn't interested in that as well as most of the corporations out there. Data is one of the most valuable assets and this is easy cash flow.

Is this automatically part of any app using react/react native?

No. React and React Native are totally separate from the Facebook SDK.

As an app developer, I think I've done the "Facebook SDK integration" task over 10 times at the very least. I don't think I'm the only one. It's unrealistic to expect a mobile app not to offer users the option to log in through Facebook.

And yet, we don't need to integrate Facebook's binary blobs to use this SDK's main features. How about we implement an open version of the Facebook SDK that uses their APIs but doesn't do anything we don't want it to?

I'm certain you can implement Facebook login through standard OAuth and not have to rely on any of their code.
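As a rough sketch of what that looks like (the app ID, redirect URI, and Graph API version below are placeholder assumptions, not anything taken from a real app): "Log in with Facebook" is just a standard OAuth 2.0 authorization-code redirect, no SDK needed.

```javascript
// Sketch of Facebook login via plain OAuth 2.0, with no FB SDK in the app.
// The app ID and redirect URI are placeholders; pick the Graph API version
// your app is approved for.
function buildFacebookAuthUrl({ appId, redirectUri, state }) {
  const params = new URLSearchParams({
    client_id: appId,
    redirect_uri: redirectUri,
    state, // CSRF token you generate now and verify on the redirect back
    response_type: "code",
    scope: "public_profile,email",
  });
  // Send the user here; Facebook redirects back with ?code=...&state=...
  return `https://www.facebook.com/v6.0/dialog/oauth?${params}`;
}

// Your server then exchanges the code for a token with a plain HTTPS GET to
// https://graph.facebook.com/v6.0/oauth/access_token, still with no SDK
// involved, and nothing phones home to Facebook on app launch.
```

Because the SDK never runs, Facebook only hears from users who actually press the login button.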

To clarify, having just worked with the Facebook SDK library in my company's codebase, I don't think it is possible to set up the SDK without this happening. Disclaimer: I do not know what the Facebook SDK does after you call its launch methods, but I am pretty certain they are required for at least some versions of the SDK.

If you are a Zoom user who is not using a Facebook account, I believe the only info Facebook is getting is that the Zoom app was launched and nothing about the user itself. Unfortunately the side-effect of using the FBSDK is that Facebook can track your app's usage for all users.

I believe this is true of all apps with a "Log in with Facebook" button. FWIW, it does not appear that other OAuth providers do this (including Google).

FWIW, it's possible to use OAuth login without importing the SDK.

I did it on my last company's apps and webapps when we had to optimise for performance, and removed some dependencies.

Of course, now that I'm gone the SDK is back because one of the developers was bullish on using the SDKs at all costs (the webapp, for example, now loads FB, Google and Linkedin SDKs on launch).

This is a problem that we developers are creating.

This is a good idea. I will propose we do this

FWIW it looks like zoom did the same.

I imagine in an alternative universe that Snowden's book is about his time at Facebook.

Can't they fingerprint the device? The fact that Zoom was launched on a specific device is still a lot more information than I would be comfortable giving up if I don't use Facebook at all.

It looks like they are fingerprinting the device. Which means that if you have a Facebook account but don't install the phone app, your phone can still be connected with your account and then you can be tracked across the web. There are also, of course, the Facebook shadow accounts: information connected with fingerprints, but not associated with any Facebook account.

If they are it's against Apple's rules. Not sure what they are using to fingerprint anyways, given Apple has blocked access to the older system values that were being used.

> Not sure what they are using to fingerprint anyways

>> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, __and a unique advertiser identifier__ created by the user's device which companies can use to target a user with advertisements

There are two identifiers: Identifier for Advertisers (IDFA) and Identifier for Vendors (IDFV).

IDFA is the same across all apps on a device. However, it can be reset by the user or disabled (in which case it returns all 0s). Also, apps have to disclose (to Apple) that they use the IDFA - not sure if that's visible to the user in the App Store anywhere.

IDFV is unique per vendor - that is, each app has a different ID, but two apps from the same developer will have the same ID. I believe this is also reset when resetting the device.

The FBSDK doesn't require developers to enable the IDFA, so the unique identifier in the phone home request is either the IDFV (effectively unique) or just a UUID that the FBSDK generates and stores on launch.
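As an illustration of the IDFA's opt-out behavior (a sketch, not the FBSDK's actual code): a disabled IDFA comes back as all zeros, so anything consuming it has to special-case that value.

```javascript
// Sketch: when Limit Ad Tracking is on, iOS hands apps an all-zero IDFA.
// A tracker receiving it learns nothing, so SDKs typically detect it.
const ZEROED_IDFA = "00000000-0000-0000-0000-000000000000";

function isAdTrackingLimited(idfa) {
  return idfa === ZEROED_IDFA;
}
```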

IIRC Apple made all device identifiers different between apps so it isn’t possible to do that.

Reminder: The NextDNS iOS app allows you to monitor and block these types of requests from all of your apps, via their DNS logging/filtering. (You can also configure the retention on the DNS logging, so as to not cause more toxic waste data.)

I can't recommend it enough. Until/unless we get something like Little Snitch for the phone (are you listening, Apple?!), this is the next best thing.

NextDNS is great; I set it up on all my devices a while back when there was a post on here about it. Uninstalled a few apps just from seeing the number of requests they were sending, even when I didn't use those apps frequently.

As mentioned elsewhere in this discussion, using the FB SDK will result in apps sending requests to FB. Found a banking app I use was doing this...

Are there any guides for running your own setup with similar filtering functionality? Not keen to run all my traffic through some unknown VPN.

NextDNS doesn't carry your traffic, just your DNS queries (and those are entirely encrypted via DoH, unlike normal DNS), but if you're concerned about it, something like Pi-hole does much the same thing and is self-contained/self-hosted.

I deployed a VPN + Pi-hole on a micro EC2 instance for use from my iOS devices. First I installed and configured Pi-hole, then used https://github.com/jawj/IKEv2-setup to set up the VPN. Took about 30 minutes. Works great!

I have Pi-hole as well. How can I verify that FB SDK requests are blocked by it?

Why do you trust AWS more with your traffic than an ISP?

Because AWS is going to have to answer to pissed off Enterprise customers if there was ever a story to come out that they're handling customer data inappropriately.

For me the value is more about having ad blocking at the dns level and the vpn is just a way to get that on iOS/Android devices where I don’t control dns servers. When out and about on 4G, pages load a lot faster with all the garbage blocked.

If it's not open source how do I confirm that for myself?

Agreed. I’d pay for a service that was a. open source and b. configurable to work with my own machines.

THIS, thank you! Just installed it and found tons of queries to Uber (haven't used Uber in many months), so I finally uninstalled it!

If you use any apps that have an 'uber here' integration (united, hyatt, opentable, stubhub, etc.) you'll probably still see that traffic.

Blokada has a similar feature set, with support for bundled advert/spyware/social media block lists as well as your own.

On the NextDNS website:

> "Try it now for free. No sign up required."

> I click the button

> "Sign In. Don't have an account? Sign up."

That's odd, I just tried it and that didn't happen for me at all.

How does its blocking compare to Blokada?

If you have a Facebook account and are curious what other apps and websites are sending data about you to Facebook, check out this link:


(click the area with the various app & website icons to expand into a more detailed view)

I was pretty surprised the first time I came across that list, there are a lot of apps on there that I never did a Facebook login with. For example right now I see that a map app I downloaded when I was travelling last year but only opened once or twice has sent 395 "interactions", the latest of which was 3 days ago. Actually, I should probably delete that now haha. Also, I'm using Firefox with the Facebook container, Privacy Badger, and uBlock Origin, and there are still many websites listed.

So I do not have facebook installed on my phone but I do have instagram and whatsapp.

A large number of phone apps seem to appear in that list. I guess WhatsApp/Instagram create a fingerprint of my device and then use that for tracking?

I believe that is in fact the case. I removed all Facebook owned apps from my phone a few weeks ago and I stopped seeing reports show up there. Experimentally, it seems like an uninstall disassociates the ID from your facebook account.

That doesn’t mean facebook stopped getting those reports, only that they are no longer associating them with my account.

They are both owned by Facebook, so yes

You can disable it under: More Options -> Manage Future Activity: https://www.facebook.com/off_facebook_activity/future_activi...

Well everything that imports the Facebook SDK or allows sign in with Facebook does this so as long as an app has that blue button on the screen, you shouldn't be surprised that it will phone home to Facebook once the app is opened and initialised.

Too bad it isn't practical to have a system-wide blacklist of selected hosts on iOS. Maybe you can with a jailbreak, but that can break some apps too.

There are some “VPN” apps that can stop connections system-wide. I’m not sure about custom block lists, but take a look at the free Lockdown app (it’s FOSS). It does all processing on-device. There’s also a paid app (which for me is an expensive subscription) called Guardian Firewall, which uses its servers to process requests.

If you 'supervise' your iPhone you can also configure an adblocker with a proxy auto-config. Less fiddly than a VPN but harder to customise! Supervising requires a wipe too. https://github.com/essandess/easylist-pac-privoxy
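For a sense of what a proxy auto-config file actually is, here is a minimal hand-written sketch with a hypothetical blocklist (the generated easylist-pac output is far larger, but the shape is the same): a PAC file is just a JavaScript function the system calls per request.

```javascript
// Minimal hand-rolled PAC file sketch (hypothetical blocklist, not the
// easylist-pac output). Matched hosts are routed into a black-hole proxy;
// everything else goes DIRECT. dnsDomainIs() is a PAC built-in.
var BLOCKED = ["facebook.com", "graph.facebook.com", "doubleclick.net"];

function FindProxyForURL(url, host) {
  for (var i = 0; i < BLOCKED.length; i++) {
    if (host === BLOCKED[i] || dnsDomainIs(host, "." + BLOCKED[i])) {
      return "PROXY 127.0.0.1:9"; // discard port, the connection goes nowhere
    }
  }
  return "DIRECT";
}
```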
