> There is nothing in the privacy policy that addresses [that data is being sent to Facebook]
> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user's device which companies can use to target a user with advertisements
So Zoom is sending the fingerprints of mobile users to Facebook, which helps Facebook better track users across the internet. On top of that, Zoom doesn't disclose this (though it isn't as if people read the TOS and would be aware of it anyway).
Can we just stop sending data everywhere? If you don't need it, don't gather it.
One thing I'll note here is a potential reason why they do this.
I just recently attempted to set up Facebook adverts for an app I developed. When it came time for me to set the metric up I obviously chose "App Installs" as my metric to track.
To do this, Facebook told me I needed to install the Facebook SDK in my app to attribute an advert's conversion.
I didn't end up running the ad, but I can see why companies potentially have the SDK embedded in their apps to track ad-spend, hence the phoning-home to Facebook.
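For context, that "install the SDK" step is only a few lines of app code, which is part of why so many apps end up pinging Facebook on every launch. A rough sketch of a typical integration, assuming the standard FBSDKCoreKit pod (exact method names vary between SDK versions):

```swift
import UIKit
import FBSDKCoreKit

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Handing launch options to the Facebook SDK is enough for it to log an
        // "app launched" event back to Facebook, whether or not the user ever
        // touches a Facebook feature.
        ApplicationDelegate.shared.application(application,
                                               didFinishLaunchingWithOptions: launchOptions)
        // Explicit activation event, used for install / ad-conversion attribution.
        AppEvents.shared.activateApp()
        return true
    }
}
```

In other words, the launch-time phone-home is the default behaviour of the attribution setup, not something a developer has to go out of their way to add.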
Not sure why Vice called out the omission from the privacy policy – I’ve never seen one that actually lists all companies out by name. The GDPR mandates a list of subprocessors, though!
GitHub’s privacy policy is exceptional. Particularly the section on sub-processors[1] where they list out every company, don’t have any sort of CYA language that covers others that might not be listed, and make a commitment to update that page every time the sub-processors or the sub-processor’s function changes.
Their privacy policy even explicitly calls that out:
> Zoom, our third-party service providers, and advertising partners (e.g., Google Ads and Google Analytics) automatically collect some information about you when you use our Products, using methods such as cookies and tracking technologies (further described below).
It is their right to run their business as they see fit, and it is our right to not use them. It is the deception we do not allow (any more) with GDPR.
Most present societies and thus governments do not believe that people should get to run their businesses however they see fit.
There are a thousand different ways in which companies are not presently allowed to be operated, even if the owner sees fit to do so: worker safety, discrimination, collusion, etc.
I’m not saying these things are good or bad, or that there should be more or less of them. I’m saying that Zoom’s (and Facebook’s) spying is a problem, remains a problem, and is not solved by them putting some text on their webpage.
> It is their right to run their business as they see fit
It absolutely isn't. We want to use services but we do not want to be subjected to surveillance capitalism. Privacy is more important than some business and if it can't operate without being invasive it should fail. If they insist on being hostile and tracking people despite their wishes, people will use the product anyway and they will find a way to break the tracking. They will delete the surveillance code, use network filters, send fake data, whatever it takes to stop the surveillance.
> It is the deception we do not allow (any more) with GDPR.
That law also says users have the right to object to what the service is doing with their data and that they must stop doing it if the objection is valid. Almost all data collection taking place today is objectionable, especially those related to marketing and advertisements.
Collecting data on people is not a god-given right. It is a privilege and it can be revoked. People trusted companies with that power because they thought companies would act in their best interests but they were exploited instead. Now it's time to take it away.
> We want to use services but we do not want to be subjected to surveillance capitalism.
Who is the “we” you are referring to? I think most people care so little about this that they don’t even bother to skim the TOS before using a service.
To add to your point (with which I fully agree): I am surprised by the downvotes. I don't care about the karma, but it seems I didn't write it clearly enough and/or people misunderstood my comment 2-3 levels up.
I am not touching the "add value" bit, I will stick to the ethics. Some businesses are (imho) scum (Facebook, Google, Zoom, every tracker, every data aggregator, etc.)
They may uphold the law or they may ignore the law. Since we should not burn their buildings down in retribution, we can sue them (or whatever the local privacy laws state), and we can stop giving them money (our free/paid information). But it is up to us. Zoom clearly needs a (sic) phat penalty from the EU to get their stuff straight. Then every EU user should bombard both Zoom and FB with questions on their data practices and the "right to be forgotten". Then we should bury them in the sand and move to other service providers.
I am adamant on the issue of privacy and the reason for that is that these scum KNOW they are violating our rights, and the voice in their minds tells them "screw them, £€¥$ goes first".
Caring about this shouldn't be necessary. When people sign up for a service, they shouldn't have to stop everything and wonder about the many, many ways their personal information could be abused. Nor should they have to scrutinize the terms of every single service out there just to know exactly how they're being exploited without being able to do anything about it. This constant paranoia about everything is not a good way to live.
It's past time for us to get serious and apply HIPAA-style protection to the storage and transmission of PII, without exemptions.
Companies like Facebook will complain loudly that they won't be able to survive, but that is not our problem. If we pass legislation with teeth, they will need to change their business model. That would be the point.
Zoom allegedly has HIPAA-compliant BAAs with users in the health space. If any PHI is making it over to Facebook without a similar agreement from Facebook, Zoom is in for some trouble.
IP address, telephone number, city and other identifying information is ALL considered PII.
I work with (adjacent industry) HIPAA protected data, which is considered PII by virtue of knowing Bob Smith is in the system. If they're under a BAA and sending that information to Facebook they're in violation.
If one of my sub-processors did this my lawyer would be livid. But hey, it's Silicon Valley, don't harsh their buzz man.
I understand stalking involves more than taking notes and selling them, though. The other person has to feel threatened or such. Now maybe you can make the case that you feel threatened by Facebook, and maybe you can sue them individually (good luck), but I doubt you can make the case that most people feel this way.
Depending on the specifics they may not be. I live in a Condo tower and my mailbox isn't visible from the street, so if you decided to take notes on me as I read my mail you'd be trespassing.
The specific scenario isn't the point - but the fact that a semi-obvious scenario could be incorrect sorta is. Regulations are complex and tech has a terrible history of playing fast and loose with regulations so it's not like an imposition of regulations would be inappropriate or unwarranted - there are good and bad apples, and the bad apples spoil the bunch.
Being in a public area in a private business doesn't afford you that privacy. The trespassing charge would be possible if a security guard asked the person to leave.
Kaiser Permanente will contact Google Analytics and DoubleClick as you navigate their website, even when checking test results and contacting your doctor.
Had a briefing with our company lawyer a while back, and any information can be considered PII when paired with other information. E.g., that you bought 7 foos is not PII, but that you bought 7 foos on Tuesday might be, if that can then be looked up in the purchase history and you were the only one who bought 7 on Tuesday.
I don't know "how unique" it needs to be. I'll ask if I get the opportunity.
It just has to be correlatable if I understood it correctly, but I don't know if it needs to be unique or not. To me it sounded like if there's only a small number of possible people it could identify (say 4) then it's potentially PII, but I have no idea where the line is drawn. Clearly if k is 1, it's PII. If k is 2, it probably is too. If k is 1000, it's probably not. But at what point does it stop being PII? I have no idea!
The legal person basically said "it's complicated; anything can become PII when combined with something else, even if neither on their own is PII". The bottom line is: does some combination of information identify a person? Then it's PII (it's in the name, really!). But unfortunately that means there is no clear, simple list of things that are or aren't PII; it really depends on each individual case.
Her advice was to think carefully about any data stored about or for users and to avoid storing it if possible, and if not possible, think carefully about whether or not it could identify a user in some way. It's not a very satisfying answer, I know. It also doesn't answer your question :(
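For what it's worth, the "how many people could this combination identify" framing the lawyer was gesturing at is what the k-anonymity literature formalizes: group records by the quasi-identifying fields and look at the smallest group. A toy sketch (illustrative only, not legal guidance; the data and field names are made up):

```swift
// Group purchase records by their quasi-identifiers (quantity, weekday) and
// find the smallest group size k. A small k means the combination is close to
// identifying a single person.
struct Purchase: Hashable {
    let quantity: Int
    let weekday: String
}

let records: [(customer: String, purchase: Purchase)] = [
    ("alice", Purchase(quantity: 7, weekday: "Tuesday")),
    ("bob",   Purchase(quantity: 2, weekday: "Tuesday")),
    ("carol", Purchase(quantity: 2, weekday: "Tuesday")),
    ("dave",  Purchase(quantity: 7, weekday: "Friday")),
]

let groupSizes = Dictionary(grouping: records, by: { $0.purchase })
    .mapValues { Set($0.map { $0.customer }).count }

let k = groupSizes.values.min() ?? 0
print("smallest group size k =", k)  // k = 1: "7 foos on Tuesday" points at exactly one customer
```

Where the legal line sits for a given k is exactly the "it's complicated" part, but a computation like this is at least how you'd measure it.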
Most users of Zoom aren't choosing it–it is being chosen for them. Both of my children's schools (preschool and elementary) started using Zoom this week, so it is either use Zoom or they do not get to participate.
Except Zoom web version doesn't work: the incoming/outgoing audio is garbled (tested with Chrome, they do not support Firefox). This is in part because they were obviously too good for WebRTC native audio and instead gutted ffmpeg and compiled it to WebAssembly (I wish I was kidding but I'm not: https://webrtchacks.com/zoom-avoids-using-webrtc/).
> This is in part because they were obviously too good for WebRTC native audio and instead gutted ffmpeg and compiled it to WebAssembly (I wish I was kidding but I'm not: https://webrtchacks.com/zoom-avoids-using-webrtc/).
I'm not a Zoom apologist; I am also deeply creeped out by the fetish for covert data exfiltration in a platform that is so widely used in these quarantine days. But as far as the tech goes, the story you linked seems to say that they do use WebRTC as of September 2019.
Agreed! Been meaning to write something like that but a complete story probably needs second-sourcing all the bits, experiments, etc. Regrettably, the trend is clear.
How do you use it? I had to join a Zoom meeting and tried to use the webpage first but it tried to make me install a client. After the MacOS local web server debacle I will never do that. I figured the safest thing was to use the iOS app but wasn’t thrilled at the idea. I assume if there actually is a web client the process to access it is extremely user hostile. Is it a hidden link or something?
Zoom is obviously an extremely scummy company and I’d rather stay away from it entirely. Unfortunately they must be dumping cash into marketing because it’s now the biggest thing in video conferencing. It’s a shame, they now seem to have network effect going for them.
There is some dark pattern where you need to refuse to install the app (or make it seem like the installation didn't work), and it will eventually give you a link to the web client. I'm not sure whether there is a reproducible way to get there.
This did work for me but I had to install Chrome, it told me to install a "modern, updated browser" in the most recent Firefox and Safari. I assume they are abusing some anti-user capability in Chrome but I trust it more than any Zoom client.
Click "no" on the invite when it asks to open Zoom (instant if you don't have it installed) and underneath there is a link that goes to the web version.
They're not sending your name and address. They're sending the IDFA, the advertising identifier of your device, to Facebook. The fact that Facebook can link that device ID to your identity is on YOU. You logged into Facebook in their app to make that connection.
This uses unnecessarily accusatory wording. But it is also both unhelpful and just flat out wrong: Facebook gets data fed from a lot of sources, and it can start stitching that data together into a picture of you without you ever creating a Facebook account.
Note that the reason HIPAA exists has nothing to do with protecting individuals; it was drafted to protect the insurance companies. It is absolutely not that health data is somehow "private" enough to warrant some special protection for the persons themselves.
Insurance companies have incentives to get better data than their competitors, so they can offer less expensive coverage to lower risk people and leave the competing insurance companies with all the higher risk people. Until the competitors do the same thing. Then you're all just offering less expensive coverage to most of your customers and making less money. (That also tends to cause trouble for higher risk patients because insurance companies could more accurately predict ahead of time that they'll incur high costs and then charge them unaffordable premiums.)
If the health data they would otherwise use for that is "private" then that isn't allowed, so providing insurance is riskier, will have fewer competitors, and commands higher premiums.
It was created primarily to modernize the flow of healthcare information, stipulate how Personally Identifiable Information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, and address limitations on healthcare insurance coverage.
Is the protected from fraud and theft part somehow incorrect?
You're now talking about a different section of the same act. There are some separate provisions in there to fight insurance fraud, but that doesn't really have a lot to do with privacy for medical records, except to the extent that having somebody else's medical records might make it easier to commit insurance fraud against their insurance policy.
The quote explicitly says that the act covers “how PII ... should be protected from fraud and theft.” HIPAA is ostensibly about protecting patient privacy and data. It’s certainly possible that the insurance industry went along with it because they figured it would help them keep their patient data proprietary, but that most certainly wasn’t the goal of the legislation.
What do you think "fraud and theft" mean in this context? Sick people aren't great fraud targets, they're frequently unable to work and have already lost what money they had to medical bills. The "fraud" is insurance fraud, for which the PII would be things like your name and policy number (i.e. what's needed to file a fraudulent claim against your policy) rather than your actual medical records. And the parties most interested in having access your medical records are the insurance companies themselves, as already mentioned. There is a fairly large financial incentive for a shady insurance company to use patient medical records to poach low risk patients.
In your proposed scenario, I find it hard to believe the insurance companies won't form a cartel, keep prices where they are for the low-risk customers, and price out the high-risk customers. Somehow I don't believe the explanation.
Forming a cartel is a violation of antitrust laws, and sustaining one requires that no one forms an insurance company that defects from the cartel in order to make higher profits for themselves. Passing a law against the practice requires neither.
I disagree with this — more regulation will make it harder to innovate.
For example, I’ve met several founders who wanted to enable tele-medicine years ago but decided against it because “the lawyers cost more than the engineers”, and walking-on-eggshells destroys morale & iteration speed.
I’m not arguing to de-regulate heath data — my point is that we should selectively apply regulation.
It’s likely a great thing to regulate self-driving cars. But please keep the lawyers away from my niche online forums, 3rd-party clients for social apps, blogs, video games, calculators etc...
If a company can't 'innovate' without sharing users' data with third parties or treating it recklessly through lax security (or uploading database dumps to publicly-accessible S3 buckets) then that company doesn't deserve to be in business.
It doesn't take a suite of lawyers to enforce that, either. Health care is gigantic mess of bullshit in the US especially, because of the multiple different 'stakeholders' - customers, insurance companies, brokers, "networks", hospitals, doctors, etc., and every mistake is a gigantic lawsuit waiting to happen. It's a disaster however you cut it.
As for personal data for some arbitrary startup, any argument that "innovation" depends on being able to be careless or cavalier with that data is just ridiculous. Be careful with it. Store it properly. Only collect what you need, and delete the rest. Expunge data you no longer need. Never send it to any third party without asking the user, and provide clear information about where and with whom the data is processed and stored at rest.
There, now you're being careful with user data and you can still "innovate" decent products, as long as your business model isn't user-hostile from the start.
The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.
If you want to deal with medical data of any kind, you need a lawyer. Full stop. It doesn't matter how good your intentions are, or how many "best practice" blog posts you follow. You need to hire a lawyer, and lawyers are incredibly expensive.
> Be careful with it. Store it properly. Only collect what you need, and delete the rest.
This is great advice, but that's not how laws work. Congress won't pass a law that says "store it properly". They are going to pass a law that describes how you can and cannot store data in 600+ pages of legalese. And no matter how properly you think you're doing things, you have to have a lawyer to know you're actually doing it properly.
Said another way: regulation always adds cost and barriers to entry. These affect the "good" business just as much as the "bad" business.
Not every business has to be viable for a startup. I'd rather a company that can't afford a single lawyer not have access to my personal information. If that means pricing them out of it through regulation, then so be it.
Regulation exists to protect citizens at scale. “Don’t use the business” isn’t how we’ve built society, rightfully so. If you believe the regulation to be onerous, fix it.
One is not entitled to do whatever one wants to generate a profit, at the detriment to uneducated or unsophisticated citizens, or society as a whole.
We are trading the personal information of billions of people for the ability for tech startups to iterate quickly, who will for the most part decide on a freemium business model revolving around mining and selling private data
And may be trading away our ability of choice in the future and being stuck with a monopoly.
Nobody talks about regulated industries with duopolies or monopolies that everyone has to deal with. The tech industry is exotic, ain't it?
Big companies will still find a way to track you. That won't change. You can pull up a list of all the privacy-focused laws released recently and still see Facebook and all their products working fine, but you never hear about the person who wanted to bootstrap an idea and couldn't invest much upfront to deal with a slow, expensive legal system.
We don't need more regulations. We need selective punishments proportional to the damage and the company's presence, not a lame fine that is nowhere near proportional to what companies are profiting from it. And if you know anything, Facebook is the one lobbying for privacy these days. They are pushing for some of the requirements they are already compliant with to be put into law.
> The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.
Then the way to do this is to simplify laws and their understanding. A company shouldn't need a large legal team just to figure out if they are doing something legal or not. It kinda sounds ridiculous when you think about it. That you have to hire a bunch of lawyers to figure out if you are a criminal or not. That clearly means things are too complex. I get that there are places this should apply to, but not small businesses and startups.
You can have regulation that is both easy to understand and effective. There is also letter and spirit of the law. We should never let the letter hinder the spirit.
I completely agree with you. The legal system is entirely out of reach for the average citizen, and this is something we should fix.
However, us wanting things to be a certain way doesn't change how things are. If Congress passed a "Data Protection Act" it would be indecipherable, full of technical illiteracy, and heavily influenced by the richest lobbyists (Facebook and Amazon, anyone?).
This is my objection. I would love for a real data protection act to be legislated. But Congress has its own agenda and ineptitudes. Do you really trust the people who wrote the Patriot act to protect your sensitive information?
Congress would write a law with general objectives, and leave the regulatory work to an exec branch agency. The regulations generally either reference or draw inspiration from NIST.
HHS uses NIST stuff to guide HIPAA. The IRS is more prescriptive, but everything in IRS 1075 is still based on NIST stuff.
You have to separate the political puffery from reality. The Federal government is very good at establishing effective regulatory frameworks. They fall down with the long-term maintenance of regulations, as it's often difficult to keep the legal mandate up to date.
If you don't store any data you won't need any lawyers. You don't need to store a single byte of data on your users or customers to provide a service or software using that data.
> If you don't store any data you won't need any lawyers.
Wrong. HIPAA applies to any business that transmits and/or has access to PHI. You don't need to be storing data on your own hard drives to be subject to these laws.
This is exactly my point. You are thinking like an engineer, and Congress is not. You cannot assume anything. You need to hire a lawyer, or you are opening yourself up to serious liability.
I worded that poorly. How about this: If you don't own, manage, solicit or control any servers having access to PHI or PII you don't have any risk of being liable.
Put all of that on the client, do your best to protect it, but ultimately make it the client's responsibility.
I still haven't seen any lawsuits or regulation targeting software in that sense, apart from DRM.
There is no distinction between client vs server when it comes to the law. The same organization created and operates both and is liable as a data processor in both situations.
This is again the difference between engineer vs policymaker.
As far as I understand it, Microsoft has no responsibility for PII in e-mails going through the Outlook e-mail client. Maybe the US is different, but at least in Europe, the GDPR is clear that software vendors have no responsibility for data being processed locally when the software is deployed and run by others.
Oracle has no liability for the data stored in their database.
If you have no way of touching the data, your servers (self-managed or otherwise) aren't touching data in any form, you have no legal liabilities wrt data (apart from agreements of course).
>The problem they point out is that well intentioned businesspeople who want to provide you a useful service and store your data correctly are priced out.
I think pricing out the odd well-intentioned business person is a good tradeoff for avoiding the "move-fast and break things" snake-oil salesmen.
>Said another way: regulation always adds cost and barriers to entry.
This is innovation in the wrong direction... against our privacy and consent, for the benefit of a company I may not want to give this data to and which didn't clarify that this was happening. It's corporate malfeasance, and if you think it should be unrestricted "innovation" then you're probably on the creepy side of the Big Brother-like data-gathering monster.
>I’ve met several founders who wanted to enable tele-medicine years ago but decided against it because “the lawyers cost more than the engineers”, and walking-on-eggshells destroys morale & iteration speed.
Thankfully so - I wouldn't want my telemedicine to rely on eg. some random unsecured Mongodb instance.
Why disagree with this? It will actually cause innovation. How? If someone is able to figure out a way to navigate the laws easily, they can then sell their solution as a service.
So when FB, Google, or MS figures it out, they will add it to their offerings. Also, a group like the EFF could make a tool to verify compliance, since it would mean their existing tools would just be checking the server instead of checking each thing individually, like Privacy Badger and their other apps do.
It was really easy to innovate the car (look, I made this out of hard, pointy steel, who cares if anyone else dies) until you actually had to make them safe. Do you think society would be better off going back to the old methods? Innovation is for a purpose; a lot of the stuff we see now seems to be innovation for innovation's sake, then sold to someone who cares.
Also, do you really think people won't invest if their current methods don't work? Startup culture wouldn't die. We'd just lose the system where people who don't care about privacy never have to think things through ethically first.
Oh, BS. My wife is a medical practice management exec, and I do some IT consulting in the space. There is absolutely no end of telemedicine solutions and there have been forever (I implemented an early one in the mid-90s). There is absolutely no end of new and innovative health care startups. Those 'founders' you talked to are the problem, not the solution: people who want to bitch about being required to do the right thing and pretend they're doing an 'Atlas Shrugged' by not putting in any effort, rather than innovating around doing the right thing.
The regulations that you likely hit when investigating tele-medicine previously are likely more closely related to why nearly no site will let anyone create an account unless they check a box confirming that they're over 13 and COPPA[1] is a pretty intense set of rules that, IMO, are way waaay overkill.
That said, regulations might make it difficult for companies, but they're there because companies have abused data in the past at the expense of customers. So, I guess, it's too bad. Regulations benefit me; they don't strangle businesses, they impede them, and it's an impediment that can be overcome if the business is useful enough to people.
If the only way your business can survive is carte blanche around privacy and security, and you fall over the instant that's threatened, then one: maybe your business doesn't deserve to survive, and two: maybe you didn't build a very good business.
Niche online forums and all the examples you list there survived (and indeed thrived) in the days before rampant data collection, I have no doubt they'd evolve and survive once again.
It makes it harder to innovate in anti-social directions, and may catch some useful things in the crossfire. But on balance, slowing the rate of "innovation" for these companies that want you to shovel all data everywhere into their gaping maw? Not a problem for me.
This nonchalant attitude around "guilty until proven innocent" regulation is precisely how we wind up with America's alphabet soup of monopoly building bureaucracy.
The ethical way to preserve privacy is to change minds in a way that changes actions. Law is the threat of violent force, and should be wielded only with deep forethought about the underlying moral and practical realities.
You can develop all of these services, just don't hoard data. With no centralized serverside database you have no liabilities. P2P is a solved problem.
Better implementations of that are exactly the kind of innovation we need now.
Sounds like a false dichotomy: some regulation has some downsides, therefore all regulation is bad. I think you can reform regulations to enable health care innovation and introduce legislation to stop Facebook from doing mass surveillance without opt-in.
Similar to how we have organizations which can certify whether produce is organic or not, we need organizations which can certify whether apps and websites are certified ad tracking free.
Ironically, those orgs have themselves been accused of being pay-to-play -- if you give them enough money they will certify you organic. Since it's industry run, there is no one checking that the organic certifiers are legit.
It's turtles all the way down.
The only legit solution is the government, because they are the only ones without a profit motive in the whole system (although they can be bribed, but that's a whole other can of worms).
Yes, it's more about the absence of a government initiative. IIRC, what happened with organics is that it started as pay-to-play, and then the USDA came along, took the best practices from the pay-to-play initiatives, and introduced USDA Organic.
Which government? I mean, I'm certainly not going to be very impressed if the (for example) US government releases a statement saying "Yeah guys, app Foo is totally legit and doesn't track you or leak any of your private information".
They may be without a profit motive but individual bureaucrats and legislators certainly have a power motive which is often just as, if not more, insidious.
"The only legit solution is the government, because they are the only ones without a profit motive in the whole system"
LOL. As you even parenthetically recognize, government doesn't magically solve humanity's capacity for wickedness.
The true "legit" solution is institutional legitimacy, which is fundamentally founded in a devotion to rational integrity. Genuine legitimacy can only be earned by sustained integrity; it's not minted with the legislator's pen.
I agree that you're right to be concerned with the bad incentives of pay for play, but you appear to be missing the unintended consequences of imposing a regulatory monopoly.
We (Appfigures) are scanning apps for SDKs, so we know which trackers, analytics SDKs, and location trackers apps are using.
We’re working on a way to make that data accessible to app developers so they can learn from competitors, but having a simple way to tell if an app tracks you that doesn’t require signing up or anything like that could be an interesting idea.
Would you actually check an app before downloading/using it?
I am very shy of downloading apps because I expect most of them to stalk me or rat me out to Facebook and don't have the time to check for myself (by using Charles Proxy or Burp Suite).
I would definitely appreciate it if there was a service that allows me to quickly check if an app is safe.
The key here would be for you to become a search engine of apps that are "3rd-party tracking free", and then build trust with customers who value that and become a destination for app discovery.
For developers, additional traffic becomes an incentive to comply.
Ideally the certification would be made visible by the app creator when they upload the app to the Apple/Android app store, so people downloading it would be able to see the designation.
The problem with that is that do you really believe the developer? It needs to be a 3rd party. Ideally the store itself, but there’s no incentive for the big stores to do that.
Personally, I'd want to have the source and run that software myself - how else could I trust that the app developer didn't bribe you or find an easy way to subvert your detection?
Valid point. Getting something you can scan for installed SDKs is pretty darn hard in many cases, so I doubt most people would want to set up and maintain something to do that for a few apps...
This is an interesting point. Either the government would then need businesses to disclose their "rating" (similar to movies) or businesses could opt in to show a seal (like Fairtrade bananas). The problem is, if there aren't enough (popular) sites with the seal, then the value of this declaration is lost.
Yeah. For many people I interact with Facebook, and to a lesser extent Google, are the internet. So they’d never see the seal unless Google put the sites in their top three results and didn’t scrape the relevant info.
Or someone posted a meme with the seal in it on Facebook.
For apps at least, you could add the branding seal to the Appstore metadata that you upload for your app to the Apple/Android App Store.
For websites, agreed, Google would have to display it, or it'd have to appear after the initial page load, which means you'd be tracked the first time you visited the site and you'd know not to visit a second time.
Perhaps for websites, a browser add-on that checked a certified ad tracking free database registry could be used.
Interesting - the movie ratings is a good analogy - as far as I know the MPAA handles it[1].
Maybe an existing organization in this space, such as the eff.org with its name recognition, could come up with a certification methodology and branding seal for websites and apps.
Exodus Privacy will tell you what trackers are present in an Android app. They don't certify apps as "ad tracking free" but leave that decision up to you. The only certification of being ad-free is probably to be open source as well, in which case there's a large selection of apps in F-Droid which meet that requirement.
3) Turn off ALL apps, so when they launch they have no WAN access.
4) Open each app one by one, noting what links and phoning home the app tries to perform
5) Watch in disgust as you begin to realize all of your efforts to secure your privacy up to this point have been in vain... antacids help.
6) Take control by granularly killing the links/calls in apps you decide to keep. Spotify calls FB, but you now can fix that. This is empowering but only a start.
r/privacy will make you paranoid, but it's a good place to get an idea of what you are up against. I read anonymously, join to post, and after some time delete my content; I've been doing that for decades now.
Looks promising, though it doesn't seem to have caught the FB connection in Zoom[1] (though given that Zoom has the Internet Access permission I suppose it can really send data anywhere?) They do offer an on-device version, which is available via F-droid[2].
And as of right now (2 years after the law went into effect) privacy regulators still don't care and nothing suggests things will change. *
I've put lots of time into raising complaints with the ICO (the privacy regulator in the UK) and even for the few complaints that were upheld (they try to find every single possible excuse not to), it still didn't have any effect and I have yet to see a proper investigation into the company or a fine.
* please don't bring up the 50M Google fine about them forcing the creation of an account while setting up an Android device. Not only is it pocket money to them (aka the cost of doing business) but this ignores the much larger problem like Google Analytics stalking everyone's behaviour across the entire Internet.
> 1. Without prejudice to any available administrative or non-judicial remedy, including the right to lodge a complaint with a supervisory authority pursuant to Article 77, each data subject shall have the right to an effective judicial remedy where he or she considers that his or her rights under this Regulation have been infringed as a result of the processing of his or her personal data in non-compliance with this Regulation.
> 2. Proceedings against a controller or a processor shall be brought before the courts of the Member State where the controller or processor has an establishment. Alternatively, such proceedings may be brought before the courts of the Member State where the data subject has his or her habitual residence, unless the controller or processor is a public authority of a Member State acting in the exercise of its public powers.
The way it is written makes it seem like 79 is independent of 77 and 78. You can pursue a claim through a supervisory authority (77) and if you are not happy with the result go to court (78), or you can go to court directly against the processor/controller (79), or maybe even do both.
Article 79 specifically says that pursuing a judicial remedy under it is without prejudice to any available administrative or non-judicial remedy, including the right to lodge an Article 77 complaint. If this were meant as something that has to come after not getting satisfaction from an Article 77 complaint, it would make no sense to say it is without prejudice to the right to lodge an Article 77 complaint, since that complaint would already have had to be lodged before getting here.
This also fits with what I saw on assorted EU and international law firm blogs, back when they were all writing articles on what GDPR would mean.
:shrug: I'm not a GDPR lawyer, and furthermore most of it doesn't have precedent yet, so we aren't sure how it's going to fall out.
I did work on a GDPR compliance effort in an industry where it is a big deal (real-time bidding ads), and our lawyers, while hedging, were not at all worried about random lawsuits. Their opinion was that we would easily be able to submit to the courts that the complaints needed to go through the authorities first.
I did see one legal firm blog that said they expected a lot of lawsuits from individuals, but most blogs and reports I saw seemed to think there won't be a lot of them.
If there are any, they probably won't be a big deal for the defendants. An individual lawsuit is limited to compensation for the damages suffered. In most cases, that just won't be very much, and so probably won't be worth the time and effort to pursue.
Only supervisory authorities can impose punitive measures such as fines based on revenue. Those are the only things likely to actually make a difference.
Also I get the impression that European regulators are more responsive than US regulators. Europeans report things to regulators first and expect them to be taken care of. Contrast that to Americans who are far more likely to view the regulators as ineffective and turn first to a lawsuit.
Can we stop sharing data with facebook and google or literally ANY third party unless absolutely necessary? And if it is absolutely necessary, let users opt out and tell them what feature they lose?????
Zoom is quickly gaining a reputation for doing the wrong thing anytime they have a choice between right and wrong.
I have been using Android for the last few years (with NoRoot Firewall installed) and before that I was using iPhone (jailbroken with FirewallIP installed).
95% of the applications installed (on both Android & iPhone) "talk" to/send a ping to Facebook when they open. That includes all the airline apps, Spotify, anything you can imagine. The only "clean" apps I have found are Amazon, eBay, Dropbox, Signal, Telegram, and Skype.
Anyone using Android, do yourselves the favor and install the (free) NoRoot Firewall. Once you "Start" the firewall, check the tabs "Logs" and "Pending" and you will be surprised at what your apps are doing, especially if you leave them running in the background. I also use it to block trackers, ads, FB, etc.
Mobile app developers need to gather install/purchase data so they can optimize user acquisition spend. Most are not using PII so I don’t see any problem.
My opinion about this is that many apps and websites don't have a real product offering. FB, Dropbox, Twitter and others are more of a service offering than a product offering and hence everything is rooted around services and tracking services and, finally, just plain old tracking.
I don't see how selling a service is different from selling a product. If clients want it => they pay for it.
The real point that privacy advocates REFUSE to talk about is that a FREE service wins over 99% market share versus a PAID service. So people do not want to pay for Facebook (greed), and Facebook does not want to make people pay (market-share-first Silicon Valley mindset).
Well, the money for its thousands of engineers has to be found ¯\_(ツ)_/¯
Now that the actors have collectively chosen the combo of free services in exchange for private data, a small minority wants free service in exchange for nothing. How would that work? Freemium? Government subsidies?
So, just to give you an example of what would be a product: Spotify is halfway between a service and a product.
What would make it further closer to a product would be (my suggestion):
1. No free version
2. And while you are at it, don't pool views. If you listen only to "Baby" by Bieber and only that once (or many times) in that month, then Bieber et al. gets your full $10. Edit: minus fees.
They still do services, but now they are not "organised around services".
Why do comments suggesting that data collection is paid for by Facebook/google get downvoted?
Serious question. This wasn’t my comment but I think it’s true and I’ve said the same previously and was downvoted too. Is it because it’s obvious and well known? Did I miss the memo too?
If Facebook is encouraging the capture and transmission of this data and paying for it, does this mean that Facebook has indemnified Zoom?
Those comments are downvoted because there is no evidence that anyone is paying for data from Zoom. IMO, speculation without support should be downvoted.
That's... the point of a company? If this argument boils down to "Zoom to FB is bad because it gets Zoom paid directly or indirectly and FB=bad", then I think the same argument could apply to almost any business that advertises on FB (a large number of businesses).
If this isn't your argument, I apologize for misunderstanding. Could you elaborate a bit?
> Why do comments suggesting that data collection is paid for by Facebook/google get downvoted?
It's not that I disagree with the facts here. Obviously they're doing it for some sort of monetary incentive. That doesn't make it any less evil. I don't care if it's legal. Companies that depend on the sale of user data to be profitable are acting unethically and (imo) unsustainably, as those profits may not be available to them in a future with better regulation.
I’m not suggesting it’s OK. I’m saying that given how widespread this is I’d like to understand why. Why did Wacom collect data? Why is zoom? It doesn’t make sense to me unless there is some external driver. If FB or G are paying for these privacy violations then I think that’s really bad too!
Considering this is the default for the Facebook SDK, it’s not obvious to me at all that they’re doing it for a monetary incentive (not that that makes this acceptable).
I've long since given up trying to figure out why HN goes for and against certain things. The vote system here is one of the clearest examples of "let's build a bubble" that I can think of.
There is no way Zoom is just sending secret data to FB out of the evilness (because it’s not goodness!) of their hearts for fun. So someone is trading value somewhere.
Obviously someone at Zoom thinks they get value out if it. But that's quite different from "this is a revenue source", which would suggest Facebook paying money for it. Your typical company will happily put a tracking SDK in so some marketers get some more graphs to look at in Facebook Analytics.
I guess I'm thinking that if it came out that Zoom (a publicly traded company) was doing this, it would be bad for business, so doing it just optionally, for fun, seems like all risk, and an appropriate reward would be a financial incentive.
Even if it isn't a revenue stream it can still have value to them, it doesn't have to be for "fun". This may simply be one of their ways of acquiring metrics.
As for it not appearing in their TOS, the "risk", it isn't unusual for the left hand to not know what the right is doing. Those drafting the legal text may not have known that this particular analytics system was being used.
There are many arguments for why this was overlooked, and why it exists, without it having to be direct revenue.
This is what really gives the lie to the whole walled garden thing. Its selling point is supposed to be Apple preventing things like this, but here we are in reality and they don't. Meanwhile they do, e.g., prevent Signal from replacing Apple's default app for SMS, which has no purpose other than to create barriers for cross-platform competitors to the default apps.
Apple has more than enough market share, especially in the US, to dictate what the market does. If it doesn't technically have a monopoly in raw numbers that doesn't mean it should be allowed to get away with some of this shit.
Apple doesn't have >90% market share for iOS app stores? Who is their competition? You can't run an Android app on iOS. There are no iOS apps on Amazon or Google Play. That makes it a separate market.
Saying "your own market" doesn't mean anything. Every monopolist has 100% of their own market. On the other hand, Clorox has 100% of the "Clorox bleach" market.
The question is whether your products and your supposed competitor's products are really the same market. In other words, are your customers the same individuals? Are the products substitutes for one another?
For Clorox bleach and other bleach they are. All bleach is the same, they're perfect substitutes, if the store is out of Clorox bleach and you buy some other bleach you'll never even know the difference.
For iOS app stores and Android app stores, they're completely different markets. The customers for iOS apps are people with iOS devices and the customers for Android apps are people with Android devices -- almost completely disjoint sets of people. If the iOS app store is down, it's infeasible for the average user to get an app from Google Play instead -- they would have to spend hundreds of dollars to buy an Android phone, then replace all of their other apps, just to substitute one app. It would be like saying AT&T didn't have a monopoly in 1970 because you could change carriers by moving to Canada. They're completely different markets.
Well, the issue isn't just that Apple doesn't put an end to it, it's that Apple doesn't let users toggle this sort of thing off. I think most HNers would say the best solution is for Apple to require apps to get permission to do this sort of thing, and let the user decide if they want to have data sent back to FB. There needs to be transparency and a chance to opt in, or at the least to opt out.
Terrible take. They use it to apply many security and privacy policies! Just not the one we're talking about now.
Difficult to figure out how to actually do this, especially so without a crazy UX.
They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?
> They use it to apply many security and privacy policies!
Have you read the guidelines? Many words requiring you to use and not discourage users from using Apple's in-app purchasing system (which they get a large cut of), prohibiting you from trying to compete with the App Store or similar, prohibiting app-alternatives they don't control (like remote desktop into a cloud server), requiring "Sign in with Apple" if you use another third party sign in service and that sort of thing.
There is a privacy section, but the dirty secret is that they have very little power to enforce it against premeditated abuses. Companies add a feature to their app that gives them a pretext for uploading your data to their servers, and then there is no way for the user or Apple to verify what happens to it from there or determine actual compliance with the privacy policy.
So the policies with a compliance enforcement mechanism are the ones that benefit Apple and the ones that are supposed to benefit users in practice don't have one.
> Difficult to figure out how to actually do this, especially so without a crazy UX.
Actually not so hard in that specific case. They could run the app and not sign in with a Facebook account, and if it tries to contact Facebook servers anyway, reject it.
> They should figure out the default apps thing. Though I don't know what you'd need for SMS, there's not much system integration there besides Siri (which I think supports plugins) and maybe sms: links?
They prohibit it on purpose. Signal isn't allowed to send and receive SMS on iOS:
The App Store having additional restrictions doesn't have anything to do with the privacy aspect of the walled gardens being a "lie". You can go on a tirade about the app store's limitations if you want, but that's not relevant.
Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to? How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?
It's the true motive for the "walled garden" -- it explains why it continues to exist even though the stated reasons why it exists don't pan out in practice.
> Your proposed solution would not work, obviously, because how do you define what services an app is allowed to connect to?
Why is it allowed to connect to any services for no reason? If the app makes a network connection the developer should have to justify it by something other than enabling collection of user data.
> How do you know it's connecting to Facebook's servers? Just hope they always use facebook.com?
I feel confident that Apple has the resources to determine whether the servers every application using the Facebook SDK is contacting belong to Facebook.
I’m surprised Zoom is happy for Facebook to know exactly who its customers are. This is information that could be used against the company at some point, for example if FB made a video conferencing play.
I don't think many people want to mix their professional life with their personal one (Facebook). Doesn't Facebook have this "Facebook at Work" thing that no one uses?
I would wager some or most developers do not realize that when they add a login with Facebook button, they are also sending analytics data for _all_ users to Facebook. It is, imho, a very dark pattern.
That would be quite a stretch of the CCPA. The businesses that are doing the collecting are the ones that have to comply. In this case, Apple isn't collecting that info.
Likely, simply curious if questioning Apple about it is enough for them to nuke the Facebook SDK from further app inclusion at the review process. Privacy is their whole marketing schtick, no? [1] If it's a core value, stand up for it.
[1] https://www.apple.com/privacy/ ("Privacy is a fundamental human right. At Apple, it’s also one of our core values. Your devices are important to so many parts of your life. What you share from those experiences, and who you share it with, should be up to you. We design Apple products to protect your privacy and give you control over your information. It’s not always easy. But that’s the kind of innovation we believe in.")
Examples on Android for 'an intercepting proxy' are No Root Firewall, and even better, the paid/donate version of Netguard, which allows you to permanently kill these errant calls. Additionally, you can disable Google Services Framework with it too, if you'd like to try running your Android without Google tethered and watching, while you enjoy sweet, well-designed FOSS apps and services.
Can't apps start up the Facebook SDK after someone has clicked the Facebook login button? If someone has already logged in with Facebook, set a flag in NSUserDefaults, and start the SDK then.
I'm pretty sure they can (though another comment states the opposite and might be right), and I'm definitely sure they can even implement the whole Facebook login through standard OAuth without using any of Facebook's code.
They choose not to because 1) it's easier and 2) Facebook gives them a "cut" of the revenue in the form of free analytics and insights into how their Facebook ads are performing.
The companies decided that the value they get out of Facebook ads & analytics are worth more than their customers' privacy. Until well-enforced regulations come into play (so not the GDPR) nothing will change.
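To make the deferred-initialization idea from a couple of comments up concrete, here is a minimal sketch: gate the Facebook SDK behind a stored opt-in flag, so it never starts (and never pings Facebook) for users who haven't chosen Facebook login. This assumes the standard FBSDKCoreKit ApplicationDelegate API; method names vary between SDK versions, and some versions auto-initialize unless that's disabled, which is the caveat raised a few comments below.

```swift
import UIKit
import FBSDKCoreKit

enum FacebookGate {
    private static let consentKey = "didOptIntoFacebookLogin"

    static var userHasOptedIn: Bool {
        UserDefaults.standard.bool(forKey: consentKey)
    }

    // Called from AppDelegate: only hand launch options to the Facebook SDK if
    // the user previously chose Facebook login. No opt-in, no SDK start-up.
    static func startIfOptedIn(_ application: UIApplication,
                               launchOptions: [UIApplication.LaunchOptionsKey: Any]?) {
        guard userHasOptedIn else { return }
        _ = ApplicationDelegate.shared.application(application,
                                                   didFinishLaunchingWithOptions: launchOptions)
    }

    // Called from the "Log in with Facebook" button handler: record consent,
    // then start the SDK for the first time.
    static func optInAndStart(_ application: UIApplication) {
        UserDefaults.standard.set(true, forKey: consentKey)
        _ = ApplicationDelegate.shared.application(application,
                                                   didFinishLaunchingWithOptions: nil)
    }
}
```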
It's because Apple is one of the largest companies in the world and can survive a "backlash" much to the chagrin of small developers already subject to their mercurial policies. So the idea that that's what's stopping them doesn't really mesh with the reality that if it was, it would be stopping them from doing half the things they already do.
Banning 90% of the apps on the store until they remove the Facebook login SDK would cause a backlash much bigger than from a group of small developers.
It's not 90% of the apps on the store, and if it was then they should definitely ban it because there is no way 90% of the apps even need login accounts. The flashlight app still has no legitimate reason to access your contacts and location, much less Facebook.
Moreover, prohibiting this wouldn't actually remove the apps for more than five minutes because what would immediately follow is a version of the SDK that doesn't send any data to Facebook when you're not actually using a Facebook account.
Anyone who has been an Apple developer for more than a couple of decades, has had the experience of having the rug pulled out from under them by Apple.
That's one reason that I'm not hurrying to adopt SwiftUI. I really like it, and hope that it makes it (I despise Auto-Layout), but I have also seen other promising tech smothered in the crib (OpenDoc? QuickDrawGX?).
I would love if Apple started treating app analytics like they do my GPS location or my camera permissions. Basically, if apps want to send analytics, they must go through an iOS API.
Then as a user, I can inspect what apps are sending and how frequently. I should be able to block requests or set myself as anonymous. Or allow apps for certain amounts of time etc.
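Purely as a thought experiment (this is not a real Apple framework, just what the proposal above might look like if analytics were a mediated, permission-gated channel like location or the camera):

```swift
import Foundation

// Hypothetical API sketch: the OS would own the analytics pipe, prompt the user
// per destination, and keep an inspectable log so the user could block,
// rate-limit, or anonymize traffic after the fact.
struct AnalyticsEvent {
    let name: String
    let payload: [String: String]
}

protocol SystemAnalyticsChannel {
    /// Ask the user (via a system prompt) whether this app may report analytics
    /// to the given destination at all.
    func requestAuthorization(for destination: URL) async -> Bool

    /// Send an event through the OS, which records destination and payload
    /// in a user-visible log before (or instead of) forwarding it.
    func send(_ event: AnalyticsEvent, to destination: URL) async throws
}
```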
As an app developer, I think that I've done "Facebook SDK integration" task over 10 times at the very least. I don't think I'm the only one. It's unrealistic to expect a mobile app not to offer a user the option to login through Facebook.
And yet, we don't need to integrate Facebook's binary blobs to use this SDK's main features. How about we implement the open version of Facebook SDK that uses their APIs but doesn't do anything that we don't want it to?
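For anyone curious what the SDK-free route looks like: Facebook login can be driven through its documented manual OAuth dialog plus the system's ASWebAuthenticationSession, with no Facebook binary in the app. A rough sketch; the app ID, API version, scopes, and redirect scheme below are placeholders for whatever is registered in the Facebook developer console:

```swift
import UIKit
import AuthenticationServices

final class FacebookOAuthLogin: NSObject, ASWebAuthenticationPresentationContextProviding {
    private var session: ASWebAuthenticationSession?
    private var window: UIWindow?

    func start(from window: UIWindow, appID: String, completion: @escaping (String?) -> Void) {
        self.window = window

        // Facebook's manual-login dialog endpoint (version here is a placeholder).
        var components = URLComponents(string: "https://www.facebook.com/v16.0/dialog/oauth")!
        components.queryItems = [
            URLQueryItem(name: "client_id", value: appID),
            URLQueryItem(name: "redirect_uri", value: "fb\(appID)://authorize"),
            URLQueryItem(name: "response_type", value: "token"),
            URLQueryItem(name: "scope", value: "public_profile,email"),
        ]

        session = ASWebAuthenticationSession(url: components.url!,
                                             callbackURLScheme: "fb\(appID)") { callbackURL, _ in
            // The access token comes back in the redirect URL's fragment.
            let fragment = callbackURL?.fragment ?? ""
            let token = fragment
                .split(separator: "&")
                .first { $0.hasPrefix("access_token=") }?
                .dropFirst("access_token=".count)
            completion(token.map(String.init))
        }
        session?.presentationContextProvider = self
        session?.start()
    }

    func presentationAnchor(for session: ASWebAuthenticationSession) -> ASPresentationAnchor {
        window ?? ASPresentationAnchor()
    }
}
```

No analytics, no launch ping: Facebook only hears from the app when the user actually taps the login button.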
To clarify, having just worked with the Facebook SDK library for my company's codebase, I don't think it is possible to set up the SDK without this happening. Disclaimer: I do not know what the Facebook SDK does after you call its launch methods, but I am pretty certain they are required for at least some versions of the SDK.
If you are a Zoom user who is not using a Facebook account, I believe the only info Facebook is getting is that the Zoom app was launched and nothing about the user itself. Unfortunately the side-effect of using the FBSDK is that Facebook can track your app's usage for all users.
I believe this is true of all apps with a "Log in with Facebook" button. FWIW, it does not appear that other OAuth providers' SDKs do this (including Google's).
FWIW, it's possible to use OAuth login without importing the SDK.
I did it on my last company's apps and webapps when we had to optimise for performance, and removed some dependencies.
Of course, now that I'm gone the SDK is back because one of the developers was bullish on using the SDKs at all costs (the webapp, for example, now loads FB, Google and Linkedin SDKs on launch).
This is a problem that we developers are creating.
Can't they fingerprint the device? The fact that Zoom was launched on a specific device is still a lot more information than I would be comfortable giving up if I don't use Facebook at all.
It looks like they are fingerprinting the device. Which means that if you have a Facebook account but don't install the phone app, your phone can still be connected with your account and then you can be tracked across the web. There are also, of course, the Facebook shadow accounts: information connected with fingerprints, but not associated with a Facebook account.
If they are it's against Apple's rules. Not sure what they are using to fingerprint anyways, given Apple has blocked access to the older system values that were being used.
> Not sure what they are using to fingerprint anyways
>> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, __and a unique advertiser identifier__ created by the user's device which companies can use to target a user with advertisements
There are two identifiers: Identifier for Advertisers (IDFA) and Identifier for Vendors (IDFV).
IDFA is the same across all apps on a device. However, it can be reset by the user or disabled (in which case it returns all 0s). Also, apps have to disclose (to Apple) that they use the IDFA - not sure if that's visible to the user in the App Store anywhere.
IDFV is unique per vendor - that is, each app has a different ID, but two apps from the same developer will have the same ID. I believe this is also reset when resetting the device.
The FBSDK doesn't require developers to enable the IDFA, so the unique identifier in the phone home request is either the IDFV (effectively unique) or just a UUID that the FBSDK generates and stores on launch.
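For reference, these are the Apple APIs behind the two identifiers being discussed (pre-App Tracking Transparency, which is what applied at the time):

```swift
import AdSupport
import UIKit

// IDFA: one ID shared by every app on the device; user-resettable, and it
// comes back as all zeros when Limit Ad Tracking is on.
let idfa = ASIdentifierManager.shared().advertisingIdentifier
let adTrackingAllowed = ASIdentifierManager.shared().isAdvertisingTrackingEnabled

// IDFV: one ID per vendor, i.e. shared only between apps from the same developer.
let idfv = UIDevice.current.identifierForVendor

print("IDFA: \(idfa) (tracking allowed: \(adTrackingAllowed))")
print("IDFV: \(idfv?.uuidString ?? "unavailable")")
```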
Reminder: The NextDNS iOS app allows you to monitor and block these types of requests from all of your apps, via their DNS logging/filtering. (You can also configure the retention on the DNS logging, so as to not cause more toxic waste data.)
I can't recommend it enough. Until/unless we get something like Little Snitch for the phone (are you listening, Apple?!), this is the next best thing.
NextDNS is great. I set it up on all my devices a while back when there was a post on here about it. Uninstalled a few apps just from seeing the number of requests they were sending, even when I didn't use those apps frequently.
As mentioned in this discussion, using the FB SDK will result in apps sending requests to FB. Found out a banking app I use was doing this...
NextDNS doesn't carry your traffic, just your DNS queries (and those are entirely encrypted via DoH, unlike normal DNS), but if you're concerned about it, something like Pi-hole does much the same thing and is self-contained/self-hosted.
I deployed a VPN plus Pi-hole on a micro EC2 instance for use from my iOS devices. First I installed and configured Pi-hole, then used https://github.com/jawj/IKEv2-setup to set up the VPN. Took about 30 minutes. Works great!
Because AWS is going to have to answer to pissed-off enterprise customers if a story ever comes out that they're handling customer data inappropriately.
For me the value is more about having ad blocking at the DNS level, and the VPN is just a way to get that on iOS/Android devices where I don’t control the DNS servers. When out and about on 4G, pages load a lot faster with all the garbage blocked.
(click the area with the various app & website icons to expand into a more detailed view)
I was pretty surprised the first time I came across that list, there are a lot of apps on there that I never did a Facebook login with. For example right now I see that a map app I downloaded when I was travelling last year but only opened once or twice has sent 395 "interactions", the latest of which was 3 days ago. Actually, I should probably delete that now haha. Also, I'm using Firefox with the Facebook container, Privacy Badger, and uBlock Origin, and there are still many websites listed.
So I do not have Facebook installed on my phone, but I do have Instagram and WhatsApp.
A large number of the apps on my phone seem to appear in that list. I guess WhatsApp/Instagram creates a fingerprint of my device and then uses that for tracking?
I believe that is in fact the case. I removed all Facebook owned apps from my phone a few weeks ago and I stopped seeing reports show up there. Experimentally, it seems like an uninstall disassociates the ID from your facebook account.
That doesn’t mean facebook stopped getting those reports, only that they are no longer associating them with my account.
Well, everything that imports the Facebook SDK or offers "Sign in with Facebook" does this, so as long as an app has that blue button on the screen, you shouldn't be surprised that it phones home to Facebook once the app is opened and initialised.
Too bad it isn't practical to have a system-wide blacklist of selected hosts on iOS. Maybe you can with a jailbreak, but that can break some apps.
There are some “VPN” apps that can stop connections system-wide. I’m not sure about custom block lists, but take a look at the free Lockdown app (it’s FOSS). It does all processing on-device. There’s also a paid app (which for me is an expensive subscription) called Guardian Firewall, which uses its servers to process requests.
If you 'supervise' your iPhone you can also configure an adblocker with a proxy auto-config. Less fiddly than a VPN but harder to customise! Supervising requires a wipe too.
https://github.com/essandess/easylist-pac-privoxy
It’s generally not a good idea to clearly “wink wink” indicate how to abuse an endpoint, since that abuse can be easily interpreted under various criminal laws as malicious and worthy of prosecution. You could protect yourself against such accusations with more neutral language, starting with rewording the “litter” sentence.
A lot of apps are doing it without the developers even knowing about it (ask me how I know). You just integrate their SDK for social login or something else and it will start sending data to the mothership.
This is one of the reasons why you are supposed to audit your dependencies and understand what they do. There is no excuse for an app developer to ship their app and not know what it does.
Exactly. It pains me to see so many developers just going with the flow and not exercising critical thinking to decide what they actually need before bringing in a dependency.
In my experience developers that integrate the FB SDK into their apps just copy-paste whatever code snippet Facebook tells them to do, which is always maximum data capture, without thinking of any of the implications. There's usually a way to limit data leakage while using the minimum FB functionality you want, such as only using FB for login without sending every damn app event to Facebook.
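As a sketch of what that opt-out looks like (property names have shifted between SDK versions, so treat these as the v5/v6-era spelling; the same switches also exist as the Info.plist keys FacebookAutoLogAppEventsEnabled and FacebookAdvertiserIDCollectionEnabled):

```swift
import FBSDKCoreKit

// Keep Facebook Login working while turning off the telemetry the SDK
// enables by default. Call as early as possible, ideally before the
// SDK's launch hook runs in the AppDelegate.
func configureFacebookSDKForLoginOnly() {
    Settings.isAutoLogAppEventsEnabled = false        // no automatic app-event logging
    Settings.isAdvertiserIDCollectionEnabled = false  // no IDFA collection
}
```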
On Android, the first thing you notice when you install a firewall such as NetGuard is the number of applications that try to access Facebook servers. It's mind-boggling; probably 50% are doing so. And I'm not even on Facebook at all.
And it was sad to see in Facebook's Off-Facebook Activity page how much data was linked to me, from apps which have the SDK. And you don't even need to log in via Facebook or like/share. The SDK being present and working is enough.
NetGuard proved to me that, despite never having had a FB account, I surely had dozens upon dozens of shadow accounts. Pretty much any new hardware that had vanilla Play Store apps was ratting me out the entire time.
More breaking news: almost every website sends data to Google, even if you don't have a Google account.
Singling out Facebook as the privacy nemesis while giving a free pass to "cute" conglomerates like Google reeks of class hatred and flavor-of-the-month-style pseudo journalism.
Every time I read an article with Facebook in the title, I'm a little more glad that I stopped using the service a while ago. I'm stuck using Zoom for work, but I use it on a semi-quarantined device, so it shouldn't be able to tie it back to my old Facebook account or my online activity on my desktop.
The problem isn't using Facebook. In fact, if you were knowingly using Facebook it might be considered a fair trade-off that you get stalked in exchange for getting the service for free (not saying it is right, but at least you have all the facts and can decide whether using Facebook is worth it).
The problem is that Facebook stalks you regardless of whether you have an account or not (through their SDKs embedded in pretty much every app).
People crap on the web for its privacy record - justifiably - but at least you can open dev tools and see what the page is doing. Selling apps as being better for privacy just seems like a complete misstatement.
This is really somewhat sad, as it seems unneeded provided they have the funds to wait out the IT approval cycle.
They are handily beating WebEx, MS Teams, etc, on basic shit like showing more than four video feeds from participants, dealing with low bandwidth connections, etc.
Feels like they are doing revenue grabs too early. A little more patience and the contracts will roll in. Especially given how many stodgy companies are newly coming to terms with the WFH need.
Maybe temporarily extend the free plan from 40-minute meetings to 1 hour and grab some market share?
Vice has one of the worst privacy policies in the entirety of media, so it's kind of a curious thing to see them complaining about this. They don't mention that they phone Criteo and AppNexus on every page load, and I'm pretty confident I see them using Facebook events too.
I like the effort started by Objective Development (creators of Little Snitch) called IAP: Internet Access Policy [0]. An IAP is a document that defines which endpoints an application connects to. Apple should get on this bandwagon and enforce it at the OS level, so that any application must ship this IAP document and only be allowed to connect to the endpoints listed in that document. Furthermore, a user should have the option to see which endpoints/domains those are, and to disable some of them.
I'd love to throw stones here, but I'm just used to it. The official way of installing Ubuntu Linux (and many other distros such as Mint) from a Mac, for example, uses a giant bloated piece of crap that includes not just the Facebook SDK but also the Google Analytics stack! I think it's a lost cause. There simply aren't enough good software developers active in the world, and these SDKs can make it easy or possible for developers to ship product.
Somewhat tangential, but with method swizzling the Facebook SDK can figure out the location of your device if the host app has location permission. You don't need the Facebook app installed; as long as the host app has location permission (say you give location permission to the Reddit app, which has the Facebook SDK), Facebook can piggy-back on that to get your location.
PS: Replace Reddit with any app that uses Facebook login. IDK if Reddit actually uses the Facebook SDK.
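To illustrate the underlying issue (this is not Facebook's actual code, and it doesn't even need swizzling): iOS permissions are granted to the app process as a whole, not to individual libraries, so any SDK compiled into the app can read location once the host app has the grant:

```swift
import CoreLocation

// Sketch of what any embedded third-party SDK could do inside a host app
// that already has location permission.
final class EmbeddedSDKLocationReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func readLocationIfHostAppHasPermission() {
        let status = CLLocationManager.authorizationStatus()
        guard status == .authorizedWhenInUse || status == .authorizedAlways else { return }
        manager.delegate = self
        manager.requestLocation() // piggy-backs on the host app's permission grant
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // An SDK could attach this to whatever analytics payload it sends home.
        guard let coordinate = locations.last?.coordinate else { return }
        print("embedded SDK read location: \(coordinate.latitude), \(coordinate.longitude)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        // Ignore failures in this sketch.
    }
}
```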
Are you asking about React and GraphQL? I'm not sure whether they phone home but their development was certainly subsidized by abusing people's privacy.
I want to see the App Store list what entities a given app might communicate with without explicit user request.
If your SDK feeds FB, it needs to be on the label. If you talk to dodgy surveillance shops, ditto. Making this enforceable (plist authorizations, like microphone permissions) is a little tricky, but at the very least smoking out slimy crap like this would be much easier.
Yes. Yes I do. It can't happen fast enough. Sorry, not sorry, if your business model depends on selling the personal data of unsuspecting users and this makes you go under. If you are up front that this data exchange occurs and the user accepts, then that's fine. Hiding it in lawyer-speak in an incomprehensible EULA/TOS does not count as being up front about anything.
Apparently if you want to advertise on the internet, Facebook Ads are the way to go. I figured that just knowing whether a user clicked an ad would be enough information to tune their machine learning, but once you see how much information they gather, it starts to make more sense. For example, they send document.title every time you navigate -- that's often sensitive information! So the more "responsible" companies embed all the Facebook tracking scripts inside an iframe and interact with it via postMessage.
GDPR says it's illegal, but its weight is about the same as the UK law that says it's illegal to handle salmon in suspicious circumstances (https://en.wikipedia.org/wiki/Salmon_Act_1986).
A law is only good if there are actual consequences for breaking it, and so far there haven't been any for these kinds of large-scale breaches.
Yeah, this is all part of profiling without your permission: it doesn't need to know who you are, but it can still identify you. Thanks, Zoom, for being a trustworthy entity.
So Facebook sent me a cease-and-desist threat for revealing that they were tracking all vehicles driving by their campus and then telling the city of Menlo Park about this.
So Facebook, I want you to cease and desist from tracking anything and everything about me or anyone who wants nothing to do with your leviathan of bullshit tracking, or pay out the ass and prove all my data has been deleted, and provide me a way to audit you for having no data on me.
If not, let's reveal all the other things you track on people who want nothing to do with you.
> So Facebook sent me a cease-and-desist threat for revealing that they were tracking all vehicles driving by their campus and then telling the city of Menlo Park about this.
Do you have a blog or some such going into more detail? This raises all sorts of questions. How did you find out this is what was going on? What justifications did FB claim for doing the tracking? What justifications did FB claim for stopping you from talking about it? As the Robot says, "Data inadequate!"
Such "hail corporate"... How about you look at it another way: the stock price is increasing because the app just got super popular the last few days, but users might not be aware of the privacy implications of the app, and curious experts started digging into it to see if it's a safe app.
It'd be like saying the people who were investigating Dieselgate were doing it because they wanted to destroy VW's stock price, instead of caring about the health of humans.
If they did, what does that matter? Google started with a tagline of "Don't be evil". 9 out of 10 doctors used to say smoking was good for your health. Choosy moms choose JIF (not GIF).
This is exactly why singling out tracking on the web while ignoring tracking in native apps undermines Apple's claims to be protecting user privacy. At least on the web you have some tools to fight this kind of thing.
Why do people use Zoom at all? I know big companies that use it. It's a little disconcerting that even large companies don't ask the right questions or do the due diligence when paying for it.
It actually works, which sadly makes it way above average.
I've had to switch off between WebEx, Zoom and Hangouts for the last month and Zoom is head and shoulders above the other two in terms of usability and call quality. And there's whatever Cisco's previous craptastic offering was (jabber?) which is far, far worse than any of those three.
Ahem. That's just Facebook Analytics. Yes, it should be mentioned in the privacy policy, especially if they operate in countries under the GDPR (they do).
But having a go at Zoom on this ground is unfair, given many developers do the exact same thing.
So maybe it's unfair, but unfair is perfectly OK in this kind of situation. The important thing is to call it out and make it stop, even if this has to be done for one perpetrator at a time.
Analytics that fingerprints a device: fingerprints that Facebook can then use to build shadow profiles of people who never consented to having their data processed and stored by Facebook.
Monzo does this too, on every app launch on Android. They say it's to attribute referral sources but, as someone who actively chose not to have a Facebook account, I hate that they initiate any request to Facebook from my device. I'm blocking it with DNS66 but not everyone knows how or why they should consider doing this.
Here’s a reality check for folks: right now, Zoom is literally saving entire corporations and jobs in the midst of a global pandemic. Outside this little bubble, no one is blinking twice at this, and neither is our government; and frankly, that’s how it should be. The fact that Zoom _even works at all_ during this time is something to be applauded, and the engineers and customer service reps deserve that applause. This... this right here is a privileged group of people with no conception of what real problems actually look like.
So, this is the one SV disruptor that is actually changing the world? I hope you didn't break your arm reaching around to pat yourself on the back with that one.
>Outside this little bubble, no one is blinking twice at this,
Might I suggest stepping out of your bubble to realize that the world is not going to end because of the lack of a video conference. People are sheep. Fire is hot and water is wet. "People are not blinking twice" is not the hill you want to die on. You might as well say "people are stupid, and we can take advantage of them". At least that would be honest.
Build an app. If it is good and people want it, they will pay for it. If you want to make it free by selling the data you harvest from the users, then be up front about it and let the users decide. As you stated, people are not "blinking twice". Since you are not up front about it, then any good will you might have earned is out the window.
Welcome, Zoom employee. The 'privileged' condescension tack will not win friends and influence people. Have you tried meet.jit.si? It's secure and free, no account or download needed. Cheers.
Facebook is also an ad & analytics network. You can replace Facebook with any ad/analytics network and it will still be true.
It's probably Vice’s mobile app, or website, or any other app with ads, sharing this information.
The problem is trying to create fake news like this by using popular names like Zoom and Facebook. (When I say fake news, I mean this is not news at all; what you can see in Google Analytics versus this is not even comparable.)
Please don't misuse the term "fake news." Fake news has a specific meaning: false stories that have no basis in fact but have tantalizing headlines meant to attract attention.
Attempting to change the meaning of the term dilutes its importance and introduces unnecessary confusion.
If "Fake News" ever meant that for the majority of english speakers, it is gone. It seems to have skyrocketed in the modern lexicon as a way to attack media to shift the conversation. Whenever I hear someone use the term Fake News, I am immediately more critical of their words.
Actually, this is exactly what fake news is: putting facts in a different spotlight and letting people reach the wrong verdict. (It is also called propaganda.)
Even reading the article, the author goes back and forth between two sides, but considering the headline and sub-headline, it is obviously at least clickbait and biased.
- "Zoom iOS App Sends Data to Facebook Even if You Don’t Have a Facebook Account"
+ "This sort of data transfer is not uncommon, especially for Facebook; plenty of apps use Facebook's software development kits (SDK) as a means to implement features into their apps more easily, which also has the effect of sending information to Facebook."
+ "I think users can ultimately decide how they feel about Zoom and other apps sending beacons to Facebook, even if there is no direct evidence of sensitive data being shared in current versions,"
- "Zoom's privacy policy isn't explicit about the data transfer to Facebook at all."
- "That's shocking. There is nothing in the privacy policy that addresses that,"
+ Zoom's privacy policy says "our third-party service providers, and advertising partners (e.g., Google Ads and Google Analytics) automatically collect some information about you when you use our Products," but does not link this sort of activity to Facebook specifically.
No it is not, and you're doing exactly the thing that is the problem (calling Fake News something other than its original definition). Examples of Fake News include:
* "69% of veterans disapprove of [Political Candidate]" (not actual poll results)
* Claiming a rally for a political candidate is being organized that does not exist
Another indicator of fake news is that it looks like it's coming from a legitimate news outlet: the graphics are there and it's attributed to a legitimate-sounding outlet (like "America News Network") but it is not one.
It's fine if you want to call propaganda propaganda, but don't call it fake news. They are not identical.
It is not wrong. There are other well-recognized and understood descriptive terms for things like what you are referring to: "questionable journalism," "propaganda," "hit piece," etc.
Having a common vocabulary helps establish an agreed-upon framework by which we can analyze these articles.