Mozilla Raises Concerns Over Facebook’s Lack of Transparency (blog.mozilla.org)
249 points by Manu1987 19 days ago | 47 comments



Facebook is right to point out here that scraping data from their users' logged-in pages puts their privacy at risk and makes Facebook liable for it, even if there is consent (as happened with Cambridge Analytica). There is clearly a conflict with the EU's own laws here.

Also, don't forget that EU elections are nothing like federal elections; they are local elections that reflect each country's national politics rather than anything EU-wide. And, based on previous elections, I have found individual candidates themselves to be major violators of anti-spam laws.


Sorry, did you mean Mozilla is right to point out?


No, Facebook.


This was Facebook's statement to Propublica [1]:

> “We regularly improve the ways we prevent unauthorized access by third parties like web browser plugins to keep people’s information safe,” Facebook spokesperson Beth Gautier said. “This was a routine update and applied to ad blocking and ad scraping plugins, which can expose people’s information to bad actors in ways they did not expect.”

Seriously, after all the scandals with third parties where action wasn't taken in a timely manner, it's clear that Facebook will do something if and only if it benefits from it.

As with ad blockers, can't these extensions turn this into an arms race where it becomes futile for Facebook to keep fighting them? Or are these organizations restraining themselves, hoping that Facebook will somehow change its mind?

[1]: https://www.propublica.org/article/facebook-blocks-ad-transp...


I understand the privacy concerns; the problem is that Facebook's ad targeting has no transparency, which is very problematic for issues of public interest.

For elections, for example, I'd like to be able to see which parties or politicians are paying for Facebook ads, which demographics they are targeting, and with what ads.

If Facebook exposed this data, it wouldn't be a privacy issue.


Some of what you want might be available here: https://www.facebook.com/ads/archive/ (https://www.facebook.com/business/help/2405092116183307).

(work for fb)


They can't do this for reasons of privacy of advertisers. /s


They actually make the data you mentioned publicly available already, with an archive that lets you go back seven years or so.


> it's clear that Facebook will do something if and only if it benefits from it.

Like any other business. Facebook is not a charity.


Elections were the breaking point for me on Facebook. I didn't post much myself, but my family and friends do. After seeing the amount of blatantly false information being passed around by both political sides, I finally had enough. That was 2016; after what I learned last year, I really want to delete my account instead of just having it deactivated.


Are you saying that Facebook should decide which news is false or that you're just done with using social media to find news in general?


I find myself somewhat unimpressed by arguments like "Facebook/Google should not be deciding what is false."

They already are - or rather, they are deciding what is visible and prominent, and what isn't. When you share a post, Facebook's algorithms determine how visible it is to each of your friends - whether it's at the top of their feed, lower down, or completely invisible.

At this point it seems more and more likely (though probably not proven) that those algorithms are skewed to promote content with a high chance of engagement - and that means content that provokes outrage is more likely to be visible than a long-form, balanced conversation.

So in short - Facebook is not (and should not be) the arbiter of truth. But surely it's not unreasonable to ask that it not actively promote incivility and (literally, uncontroversially) blatantly fake news?


I imagine that, at some point, any specialized management of a 'social feed' will be considered human experimentation that generally leaves the people consuming it worse off mentally and emotionally. We already know that Facebook has run what are considered experiments against the wellbeing of users, and there is no real gap between that and adjusting users' behavior in order to keep them on the site. Once that adjustment has been made, a closed-source curated social feed becomes illegal, as it should be, IMO.

I think that the only way to manage a social feed is for it to be really dumb: blocks, maybe best friends, etc. Of course, once you do that you've basically returned to email.


The Corporate Authoritarian Solution terrifies me so much.

American society is still in its toddler years in terms of political consciousness, but the worst damage happened in the years before the internet, not after. 2016 was a madhouse because that was us coming out of denial, thanks to the internet.


Facebook isn’t a news platform to me and that was never my use case. I’m tired of both the political ads and the political crap people share as “truth”.

If you consider HN or Reddit as social media, then I guess I'm not done with news and social media. I like link aggregator sites over sites where I share my personal photos/comments with news mixed in.


This ProPublica article has more information: https://www.propublica.org/article/facebook-blocks-ad-transp...


> It also prevents any developer, researcher, or organization to develop tools, critical insights, and research designed to educate and empower users to understand and therefore resist targeted disinformation campaigns.

Is Mozilla building some sort of visualization tool for the marketing done on the upcoming EU elections?


Maybe, maybe not. But I have a number of academic colleagues working on marketing/news/misinformation on online platforms, and Facebook has always been a problem for analysis. It's next to impossible to get raw data unless you're Facebook, which makes investigating these questions very difficult.

For comparison, Twitter is comparatively easy to work with (though getting harder as time goes on) and Reddit is very open. This is one reason why there are lots of papers based on Twitter data and very few based on analyses of Facebook.
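To give a sense of what "very open" means in practice, here is a rough sketch (Python with the requests library; the subreddit and printed fields are just examples) of pulling recent public posts from Reddit's unauthenticated JSON listing endpoint. Nothing remotely comparable exists for Facebook data.

```python
# Minimal sketch: recent public submissions from Reddit's unauthenticated
# JSON endpoint. Subreddit and fields chosen purely for illustration.
import requests

resp = requests.get(
    "https://www.reddit.com/r/politics/new.json",
    params={"limit": 25},
    headers={"User-Agent": "research-demo/0.1"},  # Reddit asks for a descriptive UA
    timeout=10,
)
resp.raise_for_status()

for child in resp.json()["data"]["children"]:
    post = child["data"]
    print(post["created_utc"], post["author"], post["title"])
```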


Any legitimate reasons for not allowing access to raw data? Could it violate people's privacy for example?


Yes, there are substantial privacy implications from social media research. And they're particularly problematic on a platform where privacy controls restrict access to certain data/conversations to particular friends (and Facebook employees, of course).

Not everything is like this, though. For example, there's been next to no public data about how the news feed algorithm works and no way of tracking how ads are being targeted. Both of these are commercially sensitive (and it's clearly in Facebook's interest to keep its cards close to its chest), but it's a royal pain compared with, say, newspaper research, where you can see what is being served to whom, along with which adverts, on a day-by-day basis.

And there's essentially no public API even to get those things to which you do potentially have access - so there's no way of getting the membership of public groups without web scraping or recording by hand, AFAIK. And certainly the first of these is prohibited by TOS, which means many university ethics committees will reject the research even if the researcher does have the capability to get it.

It's basically a large black box. And, as a political scientist, black boxes with political salience concern me.


That's certainly something to be careful about. Although one would think that at least some data about advertising could be made available without much risk - outside of specific targeting lists, how much personal data is gonna be in there?


I'm sure it could, but of course you can always anonymise the data first. In any case, it seems a bit rich for Facebook to restrict access due to privacy concerns; in reality the issue is likely simply that academics can't pay as well as ad agencies.


Data anonymization is an incredibly complex problem and de-anonymization is an entire subfield. I'm by no means a Facebook apologist, but "we don't trust our ability to anonymize data well enough for it to be suitable to publish" rings true.
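To make that concrete, here is a toy sketch (entirely made-up records) of why stripping names isn't enough: combinations of quasi-identifiers like age, region, and gender can still single people out, which is exactly what linkage/de-anonymization attacks exploit.

```python
# Illustrative sketch: records with names removed can still be unique on
# quasi-identifiers. The data below is invented for illustration only.
from collections import Counter

records = [
    {"age": 34, "region": "Bavaria", "gender": "F", "ad_clicked": "party_a"},
    {"age": 34, "region": "Bavaria", "gender": "F", "ad_clicked": "party_b"},
    {"age": 61, "region": "Saxony",  "gender": "M", "ad_clicked": "party_a"},
]

quasi_ids = [(r["age"], r["region"], r["gender"]) for r in records]
group_sizes = Counter(quasi_ids)

# k-anonymity: every quasi-identifier combination should appear at least k times.
k = 2
re_identifiable = [qi for qi, n in group_sizes.items() if n < k]
print("combinations that single someone out:", re_identifiable)
# Here (61, 'Saxony', 'M') is unique, so anyone who knows those three facts
# about a person can recover that person's ad_clicked value.
```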


Anyone know of a marketplace for people to sell SARs to researchers - with filters like public funding, open results, xyz de-personalisation standards or auditors?

Come to think of it, are there any proxy de-personalisation services that could, say, cache xyz data and then show you how your payload would look were you to participate in the study?

Edit: no need to cache - download some JS, XSLT, RDF or similar to let you verify locally what you would be sending. Add a public comment section for discussing the implications of studies and releasing data sets... lots of interesting possibilities in this space of taking ownership of your data while also sharing it.
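As a rough illustration of the "verify locally" idea (purely hypothetical field names and redaction rules, sketched in Python rather than the JS/XSLT/RDF mentioned above): show the user exactly which fields would leave their machine, with the sensitive ones dropped or generalised, before anything is submitted.

```python
# Hypothetical sketch of a local "what would I actually be sending?" preview.
# Field names and redaction rules are made up for illustration.
import json

RAW_PROFILE = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "birth_year": 1984,
    "city": "Lyon",
    "ads_seen": ["party_a_housing", "party_b_tax"],
}

DROP = {"name", "email"}                                      # never leaves the machine
GENERALISE = {
    "birth_year": lambda y: f"{(y // 10) * 10}s",             # 1984 -> "1980s"
    "city": lambda c: "FR",                                   # coarsen to country
}

def build_payload(profile):
    payload = {}
    for key, value in profile.items():
        if key in DROP:
            continue
        payload[key] = GENERALISE[key](value) if key in GENERALISE else value
    return payload

# The user can inspect this locally before opting in to a study.
print(json.dumps(build_payload(RAW_PROFILE), indent=2))
```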


In October Mozilla released "Ad Analysis for Facebook. It shows you why you were targeted, and how your targeting might differ from other users... The extension also displays a high-level overview of the top political advertisers based on targeting by state, gender, and age. You can view ads for each of these targeting criteria — the kinds of ads you would never normally see."

https://blog.mozilla.org/netpolicy/2018/10/18/getting-seriou...


That extension is really cool; unfortunately, I have to disable my ad blocking to see it in action.


It is the very nature of online advertising to stalk usage behavior akin to spyware and be completely opaque about it. It is horribly subjective to single out Facebook for it when all online advertisers are basically doing the very same things.

The only difference between Facebook and other ad agencies is that Facebook has a structured system in place inviting better PII and relationship data submitted directly by the users. While people demonize Facebook for this behavior other ad agencies would kill to be in that position.


> It is the very nature of online advertising to stalk usage behavior akin to spyware and be completely opaque about it.

No, it's not. Online ad space is like any other ad space, and advertising existed for decades with only minimal information about who views it. The only difference with online advertising is that there are no longer technological limitations on tracking everything - something that's much harder to do with a billboard or a newspaper ad. Since this is the case, it's up to voters to insist on legal limits on tracking instead. But advertising will be fine either way.


Just because online advertising is different from billboards, in that billboards don't stalk me, doesn't in any way redeem online advertising.

I have worked that business before and written that code for Travelocity. I created Travelocity's most successful advertising code before they were consumed by Expedia (an annoying popunder that achieved click-through rates of 14%). I later worked as the developer on their analytics team and worked hand-in-hand with their media (advertising) team. The name of the game is experimentation and analysis. The more data you have on a user the more accurate and targeted your decisions can be about that user. Spyware is absolutely the name of the game.
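For anyone who hasn't seen that world, "experimentation and analysis" mostly looks like the following - a toy sketch with made-up numbers of checking whether one ad variant's click-through rate beats another's by more than chance (a standard two-proportion z-test, not any particular company's code).

```python
# Sketch of the kind of A/B analysis ad teams run constantly.
# Impression/click counts are invented for illustration.
from math import sqrt
from statistics import NormalDist  # Python 3.8+

clicks_a, impressions_a = 1400, 10000   # variant A: 14% CTR
clicks_b, impressions_b = 1250, 10000   # variant B: 12.5% CTR

p_a = clicks_a / impressions_a
p_b = clicks_b / impressions_b
p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)

se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"CTR A = {p_a:.3f}, CTR B = {p_b:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```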


In the U.S., Facebook is allowed to lie, and to propagate lies of others. On what basis is Facebook required to be any different than the National Enquirer? The limitation is defamation, but in a political context defamation doesn't apply.

Mozilla's letter is to the European Commission, and as such I recognize different laws between the U.S. and E.U. are relevant.

However, the most simple test for double standard is to flip the narrative around: imagine a truth that makes many people angry to hear, and imagine that unpopular truth being put into an ad, and imagine the people arguing for ad targeting methods to be revealed are people who want to protect people from truth in order to allow them to keep on believing lies.

I appreciate the importance of developing tools to fight disinformation campaigns. But on what basis can any person or company be compelled to reveal their advertising methodology? What is public interest in a political ad campaign context where literally everyone involved is claiming moral high ground?

I think regulating speech is a trap, and people need to consider the double standard. What happens when your truth becomes difficult or impossible to disseminate because it makes most people angry to hear? Do you really think all angry people need their emotional state coddled? Why protect them from becoming angry? This is why we need to do better in schools at teaching coping skills and critical thinking.


As a fan of Firefox (see username) and a user since 2006, and as someone who doesn't like FB - maybe Mozilla should start by being more transparent about:

1) why they don't install ublock origin by default

2) why they install pocket by default

3) why they keep doing ad experiments

Mozilla needs to stop pretending they are saints. But thanks for calling FB out.


Is Mozilla right or wrong in their callout? Where did this idea come from that unless you are saint-like you have no business calling out bad behavior? Newsflash: everybody has their sins, and everybody needs to be called out on their shit. And behavior needs to change. Why should Facebook's bad behavior be shielded by Mozilla's infractions?


I already said, quite clearly, that they are right to call them out.


And yet you focused nearly your entire comment on calling them out.


Mozilla is trying to find a way to sustain itself outside of Google. I don't fault them for that. They need money just like any other company.


Mozilla is a non-profit, not like any other company!

That said I strongly support their work... Disclaimer: I used to work at Mozilla.


Non-profits still need money to pay their bills...


Yes, but not like _any_ other company.


My hate for Facebook borders on the irrational, but if Mozilla has money worries then maybe they should trim some of the fat and focus on making a web browser instead of burning resources playing vigilante against Facebook, fake news, and other internet villains.


If Firefox added uBlock by default, they could kiss the default search engine provider money goodbye.


This is probably also the reason we have "Facebook Container" from Mozilla (https://addons.mozilla.org/en-US/firefox/addon/facebook-cont...) but not a "Google Container" (it exists, but it's not from Mozilla: https://addons.mozilla.org/en-US/firefox/addon/google-contai...). And if you're feeling cynical, it might also be why the UX of Multi-Account Containers remains so lackluster - confusing enough to ensure that most Firefox users will not use it.


Political advertising is something that FB should have tackled closer to the stem.

First, they should have considered banning it entirely. I'm sure they had such conversations around (eg) tobacco, arms, alcohol, medicines and such. FB are far from immune to the fiduciary imperative, but they are very profitable. They were in a position to turn away potentially problem-making revenue. In hindsight, IMO, that would have been the better choice.

If they are going to do political advertising, they should be channeling & (internally) regulating it differently. Separate rules. ID your advertisers, and enforce local election rules by jurisdiction, like election day ad bans. Report on (or verify) spending, if local laws require it. Similar systems exist for licensed/regulated markets like financial services. You can't just start advertising savings accounts.

Without doubt, transparency should be part of the special rules for political ads.

On the bigger questions, to me, GDPR-like regulation is not the answer. Some parts, like reporting on breaches, are good. Other parts, like the emphasis on creating agreements between users and websites, are (IMO!) wrong. These things are contracts, and the mental bandwidth required to understand a cookie policy and make an informed decision is far too great.

Transparency is, potentially, an alternative.


Political ads are not inherently bad, though. Ads in general are not, actually.

Ads help political candidates get the word out... I think that's useful, and I think it's awesome that Facebook didn't take the path of least resistance here (banning political ads) and is instead trying to get this right!

Political ads are a tiny slice of FB's revenue... so I don't believe they are profit-motivated here.

$1B was spent on digital ads in the 2016 presidential races, and Facebook has a 19.6% market share, so it probably pulled in about $196M, which is about 0.35% of its annual revenue... so I don't think they are in it for the money at all.
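Rough check of that arithmetic (the ~$56B annual revenue figure is not stated above; it is the value implied by the 0.35% claim, roughly FB's 2018 revenue):

```python
# Back-of-the-envelope check of the parent comment's figures.
digital_ad_spend_2016 = 1e9     # ~$1B on digital ads in the 2016 presidential races
fb_market_share = 0.196         # 19.6%
fb_annual_revenue = 55.8e9      # assumption: ~FB's 2018 revenue, implied by "0.35%"

fb_political_revenue = digital_ad_spend_2016 * fb_market_share
share_of_revenue = fb_political_revenue / fb_annual_revenue

print(f"~${fb_political_revenue / 1e6:.0f}M, {share_of_revenue:.2%} of annual revenue")
# -> ~$196M, 0.35% of annual revenue
```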


> Political ads are not inherently bad

Are they inherently good?

In any case, I didn't say they had to ban them, though I think in hindsight they should have considering (as you say) that they could afford to.

Since they decided not to, then they should have treated political ads more responsibly. For example, only a candidate can run a political campaign. No anonymous advertisers.


[flagged]


You are, of course, correct. Unfortunately this isn't the thread for it.


Didn't Debian do that already, with Iceweasel?


Iceweasel is just Firefox without the Firefox branding (the trademarked name and logo). Otherwise, it's exactly the same code base.



