> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user's device which companies can use to target a user with advertisements
So Zoom is sending the fingerprints of mobile users to Facebook, which helps Facebook better track users across the internet. Not only that, but Zoom is not disclosing this (though it isn't as if people read the TOS and would be aware of it anyway).
Can we just stop sending data everywhere? If you don't need it, don't gather it.
I just recently attempted to set up Facebook adverts for an app I developed. When it came time for me to set the metric up I obviously chose "App Installs" as my metric to track.
To do this, Facebook told me I needed to install the Facebook SDK in my app to attribute an advert's conversion.
I didn't end up running the ad, but I can see why companies potentially have the SDK embedded in their apps to track ad-spend, hence the phoning-home to Facebook.
Edit: Just so people don’t have to dig to verify:
> Zoom, our third-party service providers, and advertising partners (e.g., Google Ads and Google Analytics) automatically collect some information about you when you use our Products, using methods such as cookies and tracking technologies (further described below).
“Well, at least they told us about it” is absolutely no solution to “so many of our tools are spying on us”.
There are a thousand different ways in which companies are not presently allowed to be operated, even if the owner sees fit to do so: worker safety, discrimination, collusion, et c.
I’m not saying these things are good or bad, or that there should be more or less of them. I’m saying that Zoom’s (and Facebook’s) spying is a problem, remains a problem, and is not solved by them putting some text on their webpage.
It absolutely isn't. We want to use services, but we do not want to be subjected to surveillance capitalism. Privacy is more important than any one business, and if a business can't operate without being invasive it should fail. If companies insist on being hostile and tracking people despite their wishes, people will use the product anyway and find a way to break the tracking. They will delete the surveillance code, use network filters, send fake data: whatever it takes to stop the surveillance.
> It is the deception we do not allow (any more) with GDPR.
That law also says users have the right to object to what the service is doing with their data and that they must stop doing it if the objection is valid. Almost all data collection taking place today is objectionable, especially those related to marketing and advertisements.
Collecting data on people is not a god-given right. It is a privilege and it can be revoked. People trusted companies with that power because they thought companies would act in their best interests but they were exploited instead. Now it's time to take it away.
Who is the “we” you are referring to? I think most people care so little about this that they don’t even bother to skim the TOS before using a service.
I am not touching the "add value" bit, I will stick to the ethics. Some businesses are (imho) scum (Facebook, Google, Zoom, every tracker, every data aggregator, etc.)
They may uphold the law or they may ignore it. Since we should not burn their buildings down in retribution, we can sue them (or whatever the local privacy laws allow), and we can stop giving them money (our free/paid information). But it is up to us. Zoom clearly needs a (sic) phat penalty from the EU to get their stuff straight. Then every EU user should bombard both Zoom and FB with questions on their data practices and the "right to be forgotten". Then we should bury them in the sand and move to other service providers.
I am adamant on the issue of privacy and the reason for that is that these scum KNOW they are violating our rights, and the voice in their minds tells them "screw them, £€¥$ goes first".
Stop giving Facebook your money. Surely there are other places to promote your app.
Companies like Facebook will complain loudly that they won't be able to survive, but that is not our problem. If we pass legislation with teeth, they will need to change their business model. That would be the point.
I work with (adjacent industry) HIPAA protected data, which is considered PII by virtue of knowing Bob Smith is in the system. If they're under a BAA and sending that information to Facebook they're in violation.
If one of my sub-processors did this my lawyer would be livid. But hey, it's Silicon Valley, don't harsh their buzz man.
Frankly I can't figure out why stalking a single person is illegal but stalking a billion people is considered good business.
Nope, they just have to want it to stop :)
The specific scenario isn't the point - but the fact that a semi-obvious scenario could be incorrect sorta is. Regulations are complex and tech has a terrible history of playing fast and loose with regulations so it's not like an imposition of regulations would be inappropriate or unwarranted - there are good and bad apples, and the bad apples spoil the bunch.
Kaiser Permanente will contact Google Analytics and DoubleClick as you navigate their website, even when you're checking test results and contacting your doctor.
It just has to be correlatable, if I understood it correctly, but I don't know whether it has to be unique or not. To me it sounded like if there's only a small number of possible people it could identify (say 4), then it's potentially PII; however, I have no idea where the line is drawn. Clearly if k is 1, it's PII. If k is 2, it probably is too. If k is 1000, it's probably not. But at what point does it stop being PII? I have no idea!
The legal person basically said "it's complicated; anything can become PII when combined with something else, even if neither is PII on its own". The bottom line is: does some combination of information identify a person? Then it's PII (it's in the name, really!). Unfortunately that means there is no clear, simple list of things that are or aren't PII; it really depends on each individual case.
Her advice was to think carefully about any data stored about or for users and to avoid storing it if possible; if not possible, think carefully about whether or not it could identify a user in some way. It's not a very satisfying answer, I know. It also doesn't answer your question :(
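The "at what k does it stop being PII" intuition above can be sketched concretely. This is an illustrative toy (the records, field choices, and threshold are all made up), showing how combinations of individually non-identifying fields form groups, and how small groups flag re-identification risk:

```python
from collections import Counter

# Hypothetical records: ZIP code, birth year, and device model.
# None of these is PII on its own, but combinations may identify someone.
records = [
    ("94103", 1985, "iPhone 11"),
    ("94103", 1985, "iPhone 11"),
    ("94103", 1990, "Pixel 4"),
    ("10001", 1985, "iPhone 11"),
]

def k_anonymity(rows):
    """k = size of the smallest group sharing the same field combination."""
    return min(Counter(rows).values())

def risky_rows(rows, k=2):
    """Rows whose field combination matches fewer than k records."""
    counts = Counter(rows)
    return [r for r in rows if counts[r] < k]

print(k_anonymity(records))      # 1: someone here is uniquely identifiable
print(len(risky_rows(records)))  # 2 rows sit in groups smaller than k=2
```

With k = 1 a row pins down exactly one individual (clearly PII by the reasoning above); where the legal line sits for larger k is, as the comment says, anyone's guess.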
Moreover, Zoom has a history of RCEs (leaving an active web server after you uninstall Zoom? so that a website can reinstall Zoom without any user interaction? why not! https://medium.com/bugbountywriteup/zoom-zero-day-4-million-...), and anti-privacy behavior: meeting host gets a copy of all private messages sent between participants (there is no notice of this; https://twitter.com/rcalo/status/1237957509324746752); host can monitor if your Zoom window is active (https://twitter.com/zoom_us/status/1241768006327336963); and Zoom has audio fingerprint tracing (so if you get a leaked recording Zoom can blame a particular participant: https://venturebeat.com/2019/01/22/zoom-is-bringing-ultrason...). Running it under strace reveals it is fingerprinting your device as well (idk if that gets sent anywhere but iOS app sends stuff to Facebook...).
Zoom is creepy and should not be used. I keep a separate VM for it, as it clearly cannot be trusted.
I'm not a Zoom apologist (I am also deeply creeped out by the fetish for covert data exfiltration in a platform that is so widely used in these quarantine days), but as far as the tech goes, the story you linked seems to say that they do use WebRTC as of September 2019.
Zoom is obviously an extremely scummy company and I’d rather stay away from it entirely. Unfortunately they must be dumping cash into marketing, because it’s now the biggest thing in video conferencing. It’s a shame; they now seem to have the network effect going for them.
Click "no" on the invite when it asks to open Zoom (instant if you don't have it installed) and underneath there is a link that goes to the web version.
How is that my fault?
If the health data they would otherwise use for that is "private" then that isn't allowed, so providing insurance is riskier, will have fewer competitors, and commands higher premiums.
It was created primarily to modernize the flow of healthcare information, stipulate how Personally Identifiable Information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, and address limitations on healthcare insurance coverage.
Is the protected from fraud and theft part somehow incorrect?
"...objectives of the Act were to combat waste, fraud and abuse in health insurance..."
For example, I’ve met several founders who wanted to enable tele-medicine years ago but decided against it because “the lawyers cost more than the engineers”, and walking-on-eggshells destroys morale & iteration speed.
I’m not arguing to de-regulate health data; my point is that we should apply regulation selectively.
It’s likely a great thing to regulate self-driving cars. But please keep the lawyers away from my niche online forums, 3rd-party clients for social apps, blogs, video games, calculators etc...
It doesn't take a suite of lawyers to enforce that, either. Health care is gigantic mess of bullshit in the US especially, because of the multiple different 'stakeholders' - customers, insurance companies, brokers, "networks", hospitals, doctors, etc., and every mistake is a gigantic lawsuit waiting to happen. It's a disaster however you cut it.
As for personal data for some arbitrary startup, any argument that "innovation" depends on being able to be careless or cavalier with that data is just ridiculous. Be careful with it. Store it properly. Only collect what you need, and delete the rest. Expunge data you no longer need. Never send it to any third party without asking the user, and provide clear information about where and with whom the data is processed and stored at rest.
There, now you're being careful with user data and you can still "innovate" decent products, as long as your business model isn't user-hostile from the start.
The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.
If you want to deal with medical data of any kind, you need a lawyer. Full stop. It doesn't matter how good your intentions are, or how many "best practice" blog posts you follow. You need to hire a lawyer, and lawyers are incredibly expensive.
> Be careful with it. Store it properly. Only collect what you need, and delete the rest.
This is great advice, but that's not how laws work. Congress won't pass a law that says "store it properly". They are going to pass a law that describes how you can and cannot store data in 600+ pages of legalese. And no matter how properly you think you're doing things, you have to have a lawyer to know you're actually doing it properly.
Said another way: regulation always adds cost and barriers to entry. These affect the "good" business just as much as the "bad" business.
My parent was not doing that, and instead flippantly remarked that you should just store data correctly and everything is fine.
My point is that it is important to consider the implications of government action, because they are always numerous.
One is not entitled to do whatever one wants to generate a profit, at the detriment to uneducated or unsophisticated citizens, or society as a whole.
Well, that's what they're doing by not wanting it.
Nobody talks about regulated industries with duopolies or monopolies that everyone has to deal with. The tech industry is exotic, ain't it?
Big companies will still find a way to track you; that won't change. You can pull up a list of all the privacy-focused laws passed recently and still see Facebook and all their products working fine, but you never hear about someone who wanted to bootstrap an idea and couldn't invest much upfront to deal with a slow, expensive legal system.
We don't need more regulations. We need more selective punishments, proportional to the damage and the company's presence, not a lame fine that bears no relation to what companies are profiting. And if you know anything, Facebook is the one lobbying for privacy these days. They are pushing for some of the requirements they are already compliant with to be put into law.
Then the way to do this is to simplify laws and their understanding. A company shouldn't need a large legal team just to figure out if they are doing something legal or not. It kinda sounds ridiculous when you think about it. That you have to hire a bunch of lawyers to figure out if you are a criminal or not. That clearly means things are too complex. I get that there are places this should apply to, but not small businesses and startups.
You can have regulation that is both easy to understand and effective. There is also letter and spirit of the law. We should never let the letter hinder the spirit.
However, us wanting things to be a certain way doesn't change how things are. If Congress passed a "Data Protection Act" it would be indecipherable, full of technical illiteracy, and heavily influenced by the richest lobbyists (Facebook and Amazon, anyone?).
This is my objection. I would love for a real data protection act to be legislated. But Congress has its own agenda and ineptitudes. Do you really trust the people who wrote the Patriot act to protect your sensitive information?
Hell the whole infosec policy framework used everywhere is built off of NIST 800-53.
HHS uses NIST stuff to guide HIPAA. The IRS is more prescriptive, but everything in IRS 1075 is still based on NIST stuff.
You have to separate the political puffery from reality. The Federal government is very good at establishing effective regulatory frameworks. They fall down with the long-term maintenance of regulations, as it's often difficult to keep the legal mandate up to date.
Wrong. HIPAA applies to any business that transmits and/or has access to PHI. You don't need to be storing data on your own hard drives to be subject to these laws.
This is exactly my point. You are thinking like an engineer, and Congress is not. You cannot assume anything. You need to hire a lawyer, or you are opening yourself up to serious liability.
Put all of that on the client, do your best to protect it but ultimately make it the clients responsibility.
I still haven't seen any lawsuits or regulation targeting software in that sense, apart from DRM.
This is again the difference between engineer vs policymaker.
As far as I understand it, Microsoft has no responsibility for PII in e-mails going through the Outlook e-mail client. Maybe the US is different, but at least in Europe, the GDPR is clear that software vendors have no responsibility for data processed locally when the software is deployed and run by others.
Oracle has no liability for the data stored in their database.
If you have no way of touching the data, your servers (self-managed or otherwise) aren't touching data in any form, you have no legal liabilities wrt data (apart from agreements of course).
Or am I missing something?
I think pricing out the odd well-intentioned business person is a good tradeoff for avoiding the "move-fast and break things" snake-oil salesmen.
>Said another way: regulation always adds cost and barriers to entry.
And saves money and harm when things go bad.
Thankfully so - I wouldn't want my telemedicine to rely on eg. some random unsecured Mongodb instance.
That said, regulations might make it difficult for companies, but they're there because companies have abused data in the past at the expense of customers. So, I guess, it's too bad - regulations benefit me, they don't strangle businesses, they impede it - and it's an impediment that can be overcome if the business is useful enough to people.
If the only way your business can survive is carte-blanche regulations around privacy and security, and you fall over the instant that's threatened, one: maybe your business doesn't deserve to survive, and two: maybe you didn't build a very good business.
Niche online forums and all the examples you list there survived (and indeed thrived) in the days before rampant data collection, I have no doubt they'd evolve and survive once again.
The ethical way to preserve privacy is to change minds in a way that changes actions. Law is the threat of violent force, and should be wielded only with deep forethought about the underlying moral and practical realities.
Better implementations of that is exactly the kind of innovation we need now.
I'm seeing prominent VCs espousing this all over social media the past few weeks. Apparently there are even some advising Jared Kushner.
Don't let a good crisis go to waste.
It's turtles all the way down.
The only legit solution is the government, because they are the only ones without a profit motive in the whole system (although they can be bribed, but that's a whole other can of worms).
Which government? I mean, I'm certainly not going to be very impressed if the (for example) US government releases a statement saying "Yeah guys, app Foo is totally legit and doesn't track you or leak any of your private information".
LOL. As you even parenthetically recognize, government doesn't magically solve humanity's capacity for wickedness.
The true "legit" solution is institutional legitimacy, which is fundamentally founded in a devotion to rational integrity. Genuine legitimacy can only be earned by sustained integrity; it's not minted with the legislator's pen.
I agree that you're right to be concerned with the bad incentives of pay for play, but you appear to be missing the unintended consequences of imposing a regulatory monopoly.
We’re working on a way to make that data accessible to app developers so they can learn from competitors, but having a simple way to tell if an app tracks you that doesn’t require signing up or anything like that could be an interesting idea.
Would you actually check an app before downloading/using it?
I would definitely appreciate it if there was a service that allows me to quickly check if an app is safe.
It is also built into the Aurora Store (an alternative Google Play Store client for Android).
For developers, additional traffic becomes an incentive to comply.
Or someone posted a meme with the seal in it on Facebook.
For websites, agreed; Google would have to display it, or it'd have to come after the initial page load, which means you'd be tracked the first time you visited the site and you'd know not to visit a second time.
Perhaps for websites, a browser add-on that checked a certified ad tracking free database registry could be used.
Maybe an existing organization in this space with name recognition, such as eff.org, could come up with a certification methodology and branding seal for websites and apps.
2) Donate to Marcel for key to get granularity
3) Turn off ALL apps, so when they launch they have no WAN access.
4) Open each app one by one, noting what links the app tries to follow and where it phones home.
5) Watch in disgust as you begin to realize all of your efforts to secure your privacy up to this point have been in vain... antacids help.
6) Take control by granularly killing the links/calls in apps you decide to keep. Spotify calls FB, but you now can fix that. This is empowering but only a start.
r/privacy will make you paranoid, but it's a good place to get an idea of what you are up against. I read anonymously, join to post, and after some time delete my content; I've been doing that for decades now.
I've put lots of time into raising complaints with the ICO (the privacy regulator in the UK) and even for the few complaints that were upheld (they try to find every single possible excuse not to), it still didn't have any effect and I have yet to see a proper investigation into the company or a fine.
* please don't bring up the 50M Google fine about them forcing the creation of an account while setting up an Android device. Not only is it pocket money to them (aka the cost of doing business) but this ignores the much larger problem like Google Analytics stalking everyone's behaviour across the entire Internet.
> 1. Without prejudice to any available administrative or non-judicial remedy, including the right to lodge a complaint with a supervisory authority pursuant to Article 77, each data subject shall have the right to an effective judicial remedy where he or she considers that his or her rights under this Regulation have been infringed as a result of the processing of his or her personal data in non-compliance with this Regulation.
> 2. Proceedings against a controller or a processor shall be brought before the courts of the Member State where the controller or processor has an establishment. Alternatively, such proceedings may be brought before the courts of the Member State where the data subject has his or her habitual residence, unless the controller or processor is a public authority of a Member State acting in the exercise of its public powers.
That sounds like a lawsuit to me.
Article 79 specifically says that pursuing a judicial remedy under it is without prejudice to any available administrative or non-judicial remedy, including the right to lodge an Article 77 complaint. If it were meant as something that has to come after not getting satisfaction from an Article 77 complaint, it would make no sense to say it is without prejudice to the right to lodge an Article 77 complaint, since that complaint would already have had to be lodged before getting here.
This also fits with what I saw on assorted EU and international law firm blogs, back when they were all writing articles on what GDPR would mean.
I did work on a GDPR compliance effort in an industry where it is a big deal (real-time bidding ads), and our lawyers, while hedging, were not at all worried about random lawsuits. Their opinion was that we would easily be able to submit to the courts that the complaints needed to go through the authorities first.
If there are any, they probably won't be a big deal for the defendants. An individual lawsuit is limited to compensation for the damages suffered. In most cases, that just won't be very much, and so probably won't be worth the time and effort to pursue.
Only supervisory authorities can impose punitive measures such as fines based on revenue. Those are the only things likely to actually make a difference.
Also I get the impression that European regulators are more responsive than US regulators. Europeans report things to regulators first and expect them to be taken care of. Contrast that to Americans who are far more likely to view the regulators as ineffective and turn first to a lawsuit.
Zoom is quickly gaining a reputation for doing the wrong thing anytime they have a choice between right and wrong.
95% of the applications installed (on both Android and iPhone) "talk" to/send a ping to Facebook when they open. That includes all the airlines, Spotify, anything you can imagine. The only "clean" apps I have found are Amazon, eBay, Dropbox, Signal, Telegram, and Skype.
Anyone using Android, do yourselves the favor: install the (free) NoRoot Firewall. Once you "Start" the firewall, check the "Logs" and "Pending" tabs and you will be surprised at what your apps are doing, especially if you leave them running in the background. I also use it to block trackers, ads, FB, etc.
But...but... consumer data is a gold mine! We have to sell it! Who will think of the shareholders?
We're not gonna get there by asking nicely. We need legislation with teeth (read: fines worth 6 months' or 1 year's global turnover).
The real point that privacy advocates REFUSE to talk about is that a FREE service wins 99%+ market share over a PAID service. So people do not want to pay for Facebook (greed), and Facebook does not want to make people pay (the market-share-first Silicon Valley mindset).
Well money for its thousand engineers has to be found ¯\_(ツ)_/¯
Now that the actors have collectively chosen the combo of free services in exchange for private data, a small minority wants free service in exchange for nothing. How would that work: freemium, government subsidies?
What would bring it closer to being a product would be (my suggestion):
1. No free version
2. And while you are at it, don't pool views. If you listen only to "Baby" by Bieber and only that once (or many times) in that month, then Bieber et al. gets your full $10. Edit: minus fees.
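A toy calculation (with hypothetical numbers) makes the difference between the current pooled model and the per-subscriber model suggested above concrete:

```python
from collections import Counter

# Hypothetical: two subscribers, each paying $10/month.
# A plays artist X once; B plays artist Y 99 times.
plays = {"A": {"X": 1}, "B": {"Y": 99}}
FEE = 10.0

def pro_rata(plays, fee=FEE):
    """Pooled model: all fees go into one pot, split by share of total plays."""
    pool = fee * len(plays)
    totals = Counter()
    for p in plays.values():
        totals.update(p)
    all_plays = sum(totals.values())
    return {artist: pool * n / all_plays for artist, n in totals.items()}

def user_centric(plays, fee=FEE):
    """Suggested model: each subscriber's fee goes only to artists they played."""
    payouts = Counter()
    for p in plays.values():
        user_total = sum(p.values())
        for artist, n in p.items():
            payouts[artist] += fee * n / user_total
    return dict(payouts)

print(pro_rata(plays))      # X gets 20 cents of A's $10; Y gets $19.80
print(user_centric(plays))  # X gets A's full $10; Y gets B's full $10
```

Under pooling, the subscriber who listened to one song once effectively pays Bieber's biggest fans' favorite artists; under the per-subscriber split, their whole fee (minus fees, per the edit) follows what they actually played.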
They still do services, but now they are not "organised around services".
Serious question. This wasn’t my comment but I think it’s true and I’ve said the same previously and was downvoted too. Is it because it’s obvious and well known? Did I miss the memo too?
If Facebook is encouraging the capture and transmission of this data and paying for it, does this mean that Facebook has indemnified Zoom?
If this isn't your argument, I apologize for misunderstanding. Could you elaborate a bit?
But yes, any business that sends your data to Facebook for advertising purposes is bad and should not be used.
I mean for a group of people supposedly interested in innovation it seems a bit odd to be downvoted for a “what if...” question.
It's not that I disagree with the facts here. Obviously they're doing so for some sort of monetary incentive. That doesn't make it any less evil. I don't care if it's legal. Companies that depend on the sale of user data to be profitable are acting unethically and (imo) unsustainable as those profits may not be available to them in a future with better regulation.
I ASKED if it was a revenue stream.
(Sorry I’m going to stop being obnoxious now, hope you understand why it’s easy to read your comment as such!)
There is no way Zoom is just sending secret data to FB out of the evilness (because it’s not goodness!) of their hearts for fun. So someone is trading value somewhere.
I guess I’m thinking that if it came out that Zoom (a publicly traded company) was doing this, it would be bad for business, so doing it just optionally, for fun, seems like all risk; an appropriate reward would be a financial incentive.
As for it not appearing in their TOS, the "risk", it isn't unusual for the left hand to not know what the right is doing. Those drafting the legal text may not have known that this particular analytics system was being used.
There are many arguments for why this was overlooked, and why it exists, without it having to be direct revenue.
What if they need it... to make more money?
Even if you don't log in, the Facebook SDK sends data back.
Hook your device up to an intercepting proxy and start up a few apps. 99% of them do this.
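Once you have a capture from an intercepting proxy such as mitmproxy, flagging the phone-home traffic is mostly string matching on hostnames. A minimal sketch (the suffix list and the captured hostnames are illustrative, not an authoritative tracker database):

```python
# Illustrative tracker domain suffixes -- not an exhaustive or official list.
TRACKER_SUFFIXES = ("facebook.com", "fbcdn.net",
                    "doubleclick.net", "google-analytics.com")

def is_tracker(host):
    """True if host is one of the tracker domains or a subdomain of one."""
    return any(host == s or host.endswith("." + s) for s in TRACKER_SUFFIXES)

# Hostnames as they might appear in a proxy capture log (hypothetical app).
captured = ["api.example-app.com", "graph.facebook.com",
            "ssl.google-analytics.com", "cdn.example-app.com"]

print([h for h in captured if is_tracker(h)])
# ['graph.facebook.com', 'ssl.google-analytics.com']
```

Suffix matching on the registered domain (rather than substring search) avoids false positives like `notfacebook.com.evil.example` slipping past, though apps can still evade this with first-party proxying.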
I really wish Apple would put an end to this.
This is what really gives lie to the whole walled garden thing. Its selling point is supposed to be in Apple preventing things like this, but here we are in reality and they don't. Meanwhile they do e.g. prevent Signal from replacing Apple's default app for SMS, which has no purpose other than to create barriers for cross-platform competitors to the default apps.
The question is whether your products and your supposed competitor's products are really the same market. In other words, are your customers the same individuals? Are the products substitutes for one another?
For Clorox bleach and other bleach they are. All bleach is the same, they're perfect substitutes, if the store is out of Clorox bleach and you buy some other bleach you'll never even know the difference.
For iOS app stores and Android app stores, they're completely different markets. The customers for iOS apps are people with iOS devices and the customers for Android apps are people with Android devices -- almost completely disjoint sets of people. If the iOS app store is down, it's infeasible for the average user to get an app from Google Play instead -- they would have to spend hundreds of dollars to buy an Android phone, then replace all of their other apps, just to substitute one app. It would be like saying AT&T didn't have a monopoly in 1970 because you could change carriers by moving to Canada. They're completely different markets.
Difficult to figure out how to actually do this, especially so without a crazy UX.
They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?
Have you read the guidelines? Many words requiring you to use and not discourage users from using Apple's in-app purchasing system (which they get a large cut of), prohibiting you from trying to compete with the App Store or similar, prohibiting app-alternatives they don't control (like remote desktop into a cloud server), requiring "Sign in with Apple" if you use another third party sign in service and that sort of thing.
So the policies with a compliance enforcement mechanism are the ones that benefit Apple and the ones that are supposed to benefit users in practice don't have one.
> Difficult to figure out how to actually do this, especially so without a crazy UX.
Actually not so hard in that specific case. They could run the app and not sign in with a Facebook account, and if it tries to contact Facebook servers anyway, reject it.
> They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?
They prohibit it on purpose. Signal isn't allowed to send and receive SMS on iOS:
> Apple does not allow other apps to replace the default SMS/messaging app.
The "Firefox" on iOS isn't even actually Firefox, it's required to use Apple's browser engine.
Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to? How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?
It's the true motive for the "walled garden" -- it explains why it continues to exist even though the stated reasons why it exists don't pan out in practice.
> Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to?
Why is it allowed to connect to any services for no reason? If the app makes a network connection the developer should have to justify it by something other than enabling collection of user data.
> How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?
I feel confident that Apple has the resources to determine whether the servers every application using the Facebook SDK is contacting belong to Facebook.
I assume a lot of people just “slap” the SDK in there and call it a day and it starts sending data.
Zoom also buys ads on Facebook, so integrates the SDK for attribution.
 https://www.apple.com/privacy/ ("Privacy is a fundamental human right. At Apple, it’s also one of our core values. Your devices are important to so many parts of your life. What you share from those experiences, and who you share it with, should be up to you. We design Apple products to protect your privacy and give you control over your information. It’s not always easy. But that’s the kind of innovation we believe in.")
It looks promising, and has been posted to HN a few times, but nobody has commented on it. https://news.ycombinator.com/item?id=20519456
They choose not to because 1) it's easier and 2) Facebook gives them a "cut" of the revenue in the form of free analytics and insights into how their Facebook ads are performing.
The companies decided that the value they get out of Facebook ads & analytics are worth more than their customers' privacy. Until well-enforced regulations come into play (so not the GDPR) nothing will change.
There's a lot of developers that rely on these dependencies, and just blocking them would cause a major backlash.
Moreover, prohibiting this wouldn't actually remove the apps for more than five minutes, because what would immediately follow is a version of the SDK that doesn't send any data to Facebook when you're not actually using a Facebook account.
If you know what that means, it means that I am a scarred, grizzled vet, with an eyepatch and a trick knee.
Anyone who has been an Apple developer for more than a couple of decades has had the experience of having the rug pulled out from under them by Apple.
That's one reason that I'm not hurrying to adopt SwiftUI. I really like it, and hope that it makes it (I despise Auto-Layout), but I have also seen other promising tech smothered in the crib (OpenDoc? QuickDrawGX?).
For those who did not read about it: https://andregarzia.com/2020/03/private-client-side-only-pwa...
Then as a user, I can inspect what apps are sending and how frequently. I should be able to block requests or set myself as anonymous. Or allow apps for certain amounts of time etc.
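The kind of inspection described above doesn't exist as an iOS feature, but the idea is simple enough to sketch. This is a toy illustration with made-up data (no real iOS API is involved): given a log of per-app network requests, summarize destinations so the user can see who talks to whom, and filter hosts against an example blocklist.

```python
from collections import Counter

# Example blocklist; entries are illustrative, not exhaustive.
BLOCKED_SUFFIXES = ("facebook.com", "graph.facebook.com")

def summarize(requests):
    """Count requests per (app, host) pair so a user can inspect frequency."""
    return Counter((app, host) for app, host in requests)

def allowed(host, blocked=BLOCKED_SUFFIXES):
    """Return False if the host matches a blocked domain or any subdomain of it."""
    return not any(host == s or host.endswith("." + s) for s in blocked)

# Hypothetical request log: (app name, destination host)
log = [("Zoom", "graph.facebook.com"), ("Maps", "example.com"),
       ("Zoom", "zoom.us")]
print(summarize(log))
print([h for _, h in log if allowed(h)])  # hosts that would get through
```

A real implementation would need OS cooperation to see per-app traffic at all, which is exactly the missing piece on iOS.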
And yet, we don't need to integrate Facebook's binary blobs to use this SDK's main features. How about we implement the open version of Facebook SDK that uses their APIs but doesn't do anything that we don't want it to?
If you are a Zoom user who is not using a Facebook account, I believe the only info Facebook is getting is that the Zoom app was launched and nothing about the user itself. Unfortunately the side-effect of using the FBSDK is that Facebook can track your app's usage for all users.
I believe this is true of all apps with a "Log in with Facebook" button. FWIW, it does not appear that other OAuth providers do this (including Google).
I did it on my last company's apps and webapps when we had to optimise for performance, and removed some dependencies.
Of course, now that I'm gone the SDK is back because one of the developers was bullish on using the SDKs at all costs (the webapp, for example, now loads FB, Google and Linkedin SDKs on launch).
This is a problem that we developers are creating.
FWIW it looks like zoom did the same.
>> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, __and a unique advertiser identifier__ created by the user's device which companies can use to target a user with advertisements
IDFA is the same across all apps on a device. However, it can be reset by the user or disabled (in which case it returns all 0s). Also, apps have to disclose (to Apple) that they use the IDFA - not sure if that's visible to the user in the App Store anywhere.
IDFV is unique per vendor - that is, each app has a different ID, but two apps from the same developer will have the same ID. I believe this is also reset when resetting the device.
The FBSDK doesn't require developers to enable the IDFA, so the unique identifier in the phone home request is either the IDFV (effectively unique) or just a UUID that the FBSDK generates and stores on launch.
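The fallback order described above can be sketched in a few lines. To be clear, this is a guess at the logic, not the actual FBSDK source; the function name and the storage stand-in are hypothetical. A limited-ad-tracking IDFA comes back as all zeros, so the sketch treats that as "unavailable":

```python
import uuid

# An IDFA with tracking disabled is returned as all zeros.
ZEROED_IDFA = "00000000-0000-0000-0000-000000000000"

_store = {}  # stands in for persistent app storage across launches

def advertising_id(idfa, idfv):
    """Hypothetical fallback: use the IDFA if present and not zeroed,
    else the vendor ID (IDFV), else a UUID generated once and stored."""
    if idfa and idfa != ZEROED_IDFA:
        return idfa
    if idfv:
        return idfv
    # setdefault generates the UUID only on first call, then reuses it
    return _store.setdefault("anon_id", str(uuid.uuid4()))

# A user who disabled the IDFA falls back to the vendor ID:
print(advertising_id(ZEROED_IDFA, "ABCD-EF"))  # prints "ABCD-EF"
```

Whichever branch is taken, the app ends up with a stable identifier to attach to its phone-home requests, which is the point being made above.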
I can't recommend it enough. Until/unless we get something like Little Snitch for the phone (are you listening, Apple?!), this is the next best thing.
As mentioned elsewhere in this discussion, using the FB SDK will result in apps sending requests to FB. I found that a banking app I use was doing this...
> "Try it now for free. No sign up required."
> I click the button
> "Sign In. Don't have an account? Sign up."
(click the area with the various app & website icons to expand into a more detailed view)
I was pretty surprised the first time I came across that list, there are a lot of apps on there that I never did a Facebook login with. For example right now I see that a map app I downloaded when I was travelling last year but only opened once or twice has sent 395 "interactions", the latest of which was 3 days ago. Actually, I should probably delete that now haha. Also, I'm using Firefox with the Facebook container, Privacy Badger, and uBlock Origin, and there are still many websites listed.
A large number of phone apps seem to appear in that list. I guess Whatsapp/Instagram creates a fingerprint of my device and then uses that for tracking?
That doesn’t mean Facebook stopped getting those reports, only that they are no longer associating them with my account.
Too bad it isn't practical to have a system-wide blacklist of selected hosts on iOS. Maybe you can, but it requires a jailbreak, and that too can break some apps.
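For what it's worth, the mechanism being wished for here is essentially what a hosts-file override does on a desktop OS: tracker domains resolve to an unroutable address before any real DNS lookup happens. A minimal sketch of that resolution step (the entries and the stand-in resolver are examples, not a real DNS client):

```python
# Example override table, in the spirit of /etc/hosts-based blocklists.
HOSTS_OVERRIDE = {
    "graph.facebook.com": "0.0.0.0",
    "connect.facebook.net": "0.0.0.0",
}

def resolve(host, real_dns=lambda h: "93.184.216.34"):
    """Check the override table first; fall through to real DNS otherwise.
    (real_dns here is a stand-in returning a fixed example address.)"""
    return HOSTS_OVERRIDE.get(host) or real_dns(host)

print(resolve("graph.facebook.com"))  # 0.0.0.0 -- the connection goes nowhere
print(resolve("example.com"))         # falls through to the normal lookup
```

Because the lookup happens system-wide, every app is covered at once, which is exactly what per-app sandboxing on iOS prevents without a jailbreak or a device-wide DNS/VPN profile.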