Testimony of Mark Zuckerberg – Hearing Before US House of Representatives [pdf] (house.gov)
327 points by uptown 9 months ago | 258 comments



>But it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. That goes for [...] data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry.

I was going through some old news archives about Facebook and their privacy policies. I came across the dire EFF warning in December 2009 [1]:

>"The issue of privacy when it comes to Facebook apps such as those innocent-seeming quizzes has been well-publicized by our friends at the ACLU and was a major concern for the Canadian Privacy Commissioner, which concluded that app developers had far too much freedom to suck up users' personal data, including the data of Facebook users who don't use apps at all. Facebook previously offered a solution to users who didn't want their info being shared with app developers over the Facebook Platform every time one of their friends added an app: users could select a privacy option telling Facebook to "not share any information about me through the Facebook API.""

Well, it turns out the EFF was correct: it accurately predicted the unethical scenario of Cambridge Analytica siphoning data from Facebook users who didn't even take their quiz.

The bullet points of "fixes" that MZ outlined don't really address the fundamental problem. Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.

[1] https://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-...


> that was a big mistake

Zeynep Tufekci does a good takedown of Facebook's "14-Year Apology Tour" [0].

It's a long-term, calculated, deliberate strategy: methodically abuse privacy with disastrous consequences, then, when caught dead to rights, say "oops, sorry, made a mistake, we'll fix it".

This game has worked amazingly well for 14 years; will people fall for it again this time?

[0] https://www.wired.com/story/why-zuckerberg-15-year-apology-t...


> "I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

More like the Non-Apology Tour.


Here is a humorous and accurate blog post about the testimony:

http://neverusefb.blogspot.com/2018/04/accurate-reponse-to-f...


The very idea that Facebook didn't know how their product was being used, and that they did not intentionally leave those data policies in place so that companies could take advantage of them, is beyond the pale. This is a premier data collection firm whose product is their users. Either Facebook is completely incompetent with regard to which data is being extracted, used, or made available, or this apology is tantamount to a big fat lie.


I love that the URL says "15 years"


Apparently Wired's editors take care to fix even small errors of fact.


Future proof :)


Hopefully the EFF is getting a big surge in donations from this latest outcry over privacy.

You can donate here: https://supporters.eff.org/donate


I wish the EFF offered guidelines and a certification for companies to show they are following best practices for user data. I guess GDPR is the same thing but it’d be nice if some bucks went into the EFF’s pocket to fund lobbying on our behalf.

I have a data intensive startup and I would love to show we comply.


GDPR only applies in the EU.

Facebook already declared that they won't extend their GDPR compliance globally. So if you're in the US, you're not protected by it.


But it also applies to EU citizens living outside the EU. So the use cases get complicated.


I don't think this is true. It's not totally clear, but the GDPR seems to apply to people who are in the Union: presence, not citizenship, is the main requirement (a German person living in the US and dealing with a US-only company is not covered).


Seriously? I did not know that. How would that work?

Edit: Just read Wikipedia and another source, and both define it for people within the EU. They do mention that a company outside the EU would still need to handle data for people within the EU accordingly.


Just got my hoodie.

That being said, I do not think giving money to the EFF is enough to get the necessary changes implemented. I am at a loss for what could be done to actually make change happen, and am open to suggestions or other links to take a look at.


I don't think there's a better group to fund than the EFF. They're just about the only reliable lobbying group out there advocating for users, and considering how small they are, they're doing a hell of a lot of it.


You can call your representative or vote/elect/run for a position.

I admire the libertarian sentiment on HN but I think it would be greatly beneficial to everyone to have more thoughtful, smart and balanced people running our Governments.


Here's my bazillion-dollar question: How do you practically motivate those people to enter and stay in government? The insanity of the system is maddening, and even hard-fought positive progress is glacial compared to the private sector. I know smart, thoughtful people who considered government and finally decided they'd be far happier in the private sector.


Maciej (Pinboard / HN user 'idlewords') has been doing great work finding such people who are running and supporting them to get elected.


The progress is glacial because the private sector wants it to be and can afford to make it that way.

“Smart, thoughtful” people who are happier in the private sector is exactly who currently runs our governments. So your plan is already in action.


Start a local Electronic Frontier Alliance group and work in your local community to build additional support for issues you care about.

https://www.eff.org/fight


Large-scale change is hard, and I don't have good ideas for how to force it -- I'm just some random internet schmuck, not a clairvoyant genius. At a small scale, delete your FB account and install a tracking blocker. Going slightly larger, help your non-nerd friends to do the same. Network effects work both ways.

If you have time and money left over after that, the EFF isn't the worst place to spend them.


Worth noting that some payroll systems allow you to set up automatic donations to charities like the EFF. I know Gusto did.

edit: looks like some payroll systems may take a cut. Be wary, or set up recurring donation directly.


I wanted to do this, but 7.5% is taken out before the money gets to the charity. That is more than credit card processing fees! I no longer use that option and donate directly. Doing that does increase the amount of work to do your taxes if you itemize deductions though.

This isn't going to Gusto; it's taken out by FirstGiving, the partner that handles the donations for them: https://support.gusto.com/hc/en-us/articles/220929687-Charit....


Oh, I had no idea. In that case no I wouldn't use that, I'd set up a recurring donation directly. (I recently changed payroll processing and haven't transferred the donations yet).


>The bullet points of "fixes" that MZ outlined don't really address the fundamental problem. Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.

I wholly disagree. They don't have to disclose the data to anyone in order to use it to target ads. Their targeting system works by allowing advertisers to specify targeting criteria, and then using logic on Facebook's own servers to match users to targeted ads. They aren't selling or disclosing the data to anyone. People keep conflating the CA situation with the business of targeted ads. One has nothing to do with the other.
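The distinction can be sketched with a toy example (all data and names here are hypothetical, not Facebook's actual system): the advertiser supplies only targeting criteria, the matching happens on the platform's side, and the raw profile data is never handed over.

```python
# Toy illustration of server-side ad targeting (hypothetical data and names):
# the advertiser submits criteria; the platform matches internally and
# never returns user attributes, only whether/where the ad was shown.
users = [
    {"id": 1, "age": 34, "interests": {"cycling", "cooking"}},
    {"id": 2, "age": 22, "interests": {"gaming"}},
]

def match_ad(criteria):
    """Return opaque user ids only -- the raw profiles stay server-side."""
    return [
        u["id"] for u in users
        if u["age"] >= criteria["min_age"] and criteria["interest"] in u["interests"]
    ]

print(match_ad({"min_age": 30, "interest": "cycling"}))  # [1]
```

The CA situation was different in kind: there, profile data itself left the platform via an app API, rather than staying behind a matching function like this.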


Your argument makes sense as long as you consider something “private” if it’s shared with Facebook itself.

You might, if you consider your personal interests to be consistently and permanently aligned with the ToS which best serves shareholder interest of the entity which controls the exploitation of the historical data, and on-going surveillance capability.

The data is pretty much unregulated, and there's no reason for it ever to be deleted; given its size and value, it's likely to live on much longer than Facebook (the company/legal entity) as we know it today. If they got into financial trouble, the prudent thing would be to flog the lot to a data broker to be resold and repackaged indefinitely.

Facebook themselves have a long established reputation for doing whatever they like and apologizing only if and when they get caught, and doing the minimum possible to ward off regulation or churn.

Just like the use of the term “breach”, there’s a broader use of the term which makes sense given the expectations of a user-base that’s been kept in the dark for years (as opposed to technical people with an understanding of the industry).

For example, my contact list is not “private” if it’s exfiltrated from my phone without my knowledge or consent, to be used for whatever purpose makes most money to whoever controls it (currently Facebook), against a ToS which can change at any time without notice, with almost no legal protection, regulation, or oversight.


> Your argument makes sense as long as you consider something “private” if it’s shared with Facebook itself.

If you consider something so “private” that you would be upset that an algorithm on a secure server might use it to target an ad at you, after you read and agreed to exactly that behavior when signing up, perhaps you shouldn’t have given it to them in the first place. At what point do you take personal responsibility for giving away this information you consider to be so private and valuable?


> At what point do you take personal responsibility for giving away this information you consider to be so private and valuable?

Informed consent seems like a reasonable place to start.


Yes, you were informed when you signed up. Your failure to read the ToS while agreeing to them anyway is nobody’s fault but your own. This whole culture of “well yeah, I did it, but obviously that doesn’t mean I am responsible for it” is ludicrous to me. We are all responsible for our own choices in life.


Where is the consent of users that are being tracked outside of Facebook and are not part of it?

I don't understand why nobody is talking about the even bigger issue that Facebook (and Google and every other ad service) gets to see what people are visiting and when (articles, videos, porn, ...)!

Imagine the detailed profiles you can create with that amount of data...


And when someone clicks on a targeted ad, is their identity still private? Or does the company get a handy referrer that allows them to correlate that user's Facebook profile (and hence real name) with the specific criteria they paid to filter by?

It seems to me they have quite a lot to do with each other, in that Facebook has an enormous amount of valuable personal information about people (which in many cases was extrapolated without the target's consent, as opposed to explicitly given), and is entirely reckless about where this data ends up, as long as it isn't actually "public" as such.


Yes their identity is still private. Facebook’s referrer policy does not allow the browser to send a full referrer. The receiving site simply sees Facebook.com as the referrer.


One word: Equifax

...another word: NSA


Your conclusion doesn't really make sense. The EFF was talking about this data being shared with app developers, they were not talking about Facebook collecting this data. Facebook only needs to do the latter to run ads against it.


> That goes for [...] data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake.

Judging from [1], this wasn't any "mistake":

> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.

This was a deliberate policy of allowing people "on their side" to access and use these data. Now, when it turned out people on the "other side" can do it too, it became a "big mistake" suddenly.

[1] https://twitter.com/cld276/status/975568208886484997


> Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.

Do you have a citation for this? I've been trying to clarify the argument recently and am looking for other perspectives.


With up to 2 billion people already profiled by unethical 3rd parties using big data to spot the correlations between likes and behavioural patterns, the privacy ship has in effect already sailed, unless all those users delete all their post likes, pages they follow, etc. Facebook can’t fix this without throwing the advertising-revenue baby out with the micro-targeting bathwater.


There's some irony in the current situation. FB created the Skinner box that harvested the information, then essentially gave it all away to anyone who could write a "viral" app, from Cambridge Analytica to Zynga. The data's out there, and if FB makes changes to let users actually delete their copies, it doesn't matter for anything harvested before 2015 or so.

To use a wildly over-dramatic metaphor: this is what nuclear winter for data looks like. Sure, all the explosions are done, but the fallout will continue for quite some time.


I think your wildly over dramatic metaphor is still wildly optimistic in assuming that the explosions are already done. I agree the fallout will continue, but I seriously doubt we have seen the worst of how bad this can get.


Sadly you may be right, but quite a few of the major surveillance companies have already been looted: Yahoo, then Equifax, then Facebook. Maybe Google hasn't (yet, that we know of), but it's probably just a matter of time. Fortunately, other than the "psychographic" woo-woo promoted by various organizations, that stuff has a relatively short half-life, and it seems like quite a few people are aware of the problem.

Think of it like nuclear power: it has both advantages and problems, and it seemed inevitable until Three Mile Island, Chernobyl, and Fukushima killed it. What if we're watching the general public become aware of the downsides of the surveillance economy?


> It was my mistake, and I’m sorry.

read in the voice of the South Park version of Tony Hayward.


"What We Are Doing" under the "Cambridge Analytica" section is crap, i.e. lightweight or hypothetical (e.g. "we’re in the process of investigating every app that had access to a large amount of information before we locked down our platform in 2014").

By contrast, the same section under "Russian Election Interference" is well thought out. There's some hand-wavy stuff (e.g. "in the U.S. Senate Alabama special election last year, we deployed new AI tools that proactively detected and removed fake accounts from Macedonia trying to spread misinformation"). But requiring "every advertiser who wants to run political or issue ads...confirm their identity and location" and mandating the ads "show...who paid for them" is meaningful. That they're "starting this in the U.S. and expanding to the rest of the world in the coming months" is more encouraging. I'm also genuinely optimistic about their "tool that lets anyone see all of the ads a page is running" and "searchable archive of past political ads."

With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction. With Russia, Facebook and political advertisers' interests are aligned. Hence, action.


> With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction.

If they already disabled the API that CA was using, is "inaction" really the right word?


> If they already disabled the API that CA was using, is "inaction" really the right word?

You don't find yourself testifying before Congress because of a single technical loophole. This happens when a series of failures occur and fail to be remedied. In some cases they may not even be seen, internally, as failures [1][2].

Zuckerberg is there to tell Congress "you can trust me." Congress wants him to say that publicly, to gauge whether there's political will, amongst voters, to go after Facebook. Treating this as a narrow reaction to limited failures, as opposed to a general questioning of the viability of an internationally-scoped, ad-driven, politically-volatile social network, is what I expect Zuckerberg to do, and why I expect him to fall down.

[1] https://www.theverge.com/2018/3/30/17179100/facebook-memo-le...

[2] https://www.forbes.com/sites/amitchowdhry/2018/03/25/faceboo...


> If they already disabled the API that CA was using, is "inaction" really the right word?

If nothing else, it's a classic case of closing the barn doors after the horses are long gone. And Facebook's barn was built with every wall as an open door, and they've been slowly locking them up as people get outraged enough about things to demand it. But there are still tons of open doors.


Transparency at this moment would demand a complete list of all of the apps that could have had access to your data. That reveal hasn't happened.


I think the result of this may be regulation, or even breaking Facebook up. A few months ago I couldn’t have imagined feeling justified saying that was likely, but my god this story has legs. On CBS news (which is mediocre at best) they had someone from Mozilla explaining that this whole mess isn’t a breach or mistake, but their business model. The piece played at least three times in an hour.

This feels like a real change in public awareness to me. I’ve never seen this kind of real talk about privacy in the non technical press before, and it just keeps going and going. How can Facebook thrive as people become aware of just how crooked they are?


Validation of genuine political identity is a logical next step. One that will require an army of human curators (this is not an AI problem).

However, I wonder about the reliability of a "PO Box" as a proxy for political identity.

Or how FB is going to stop mom-and-pop retail pages from advertising false political messages ("2-for-1 sale! All proceeds go towards stopping the Trump-ordered seal beatings in Antarctica!")


Unfortunately, what a lot of people seem to miss is that a lot of online political advertising is done through consultants or agencies. Disclosing that information not only hurts the politically independent agencies, it also means there's not going to be a direct link from the ad to the candidate like people think.


Or consider how much political speech is genuinely driven by belief. There are many people who sincerely believe things that are factually inaccurate (homeopathy, e.g.). If they post about the FDA being run by "big pharma", what do you do?


While belief in homeopathy is clearly factually inaccurate, I'm pretty sure the claim `FDA is run by "big pharma"` is at least partly accurate due to regulatory capture.


Exactly! Nuance is the enemy of most moderation. We have tried in the past (I think) to favor permissiveness and inaccuracy than attempt to step into the role of “arbiter of truth”. I don’t envy anyone who needs to fill that role. At best you are judicial, at worst, Orwellian.


Well, it could be because they've had 6+ months to figure out what to do about one, and just a few weeks for the other.


>, and just a few weeks for the other.

Facebook knew about the Cambridge Analytica data harvesting way back in 2015. The true extent of it was only exposed to the public in March 2018, when a CA whistleblower[1] talked to news outlets about it.

Therefore, Facebook had ~2 years before the negative stories were blasted everywhere.

[1] https://www.google.com/search?q=Christopher+Wylie


> Well, it could be because they've had 6+ months to figure out what to do about one, and just a few weeks for the other.

Because, as we all know, Facebook only really starts trying to figure out how to solve a problem once it becomes a scandal.


Not true. They actually announced the end of the friends' data permission BEFORE the C.A. scandal broke in 2015. (Namely, they announced it in 2014.)

https://techcrunch.com/2015/04/28/facebook-api-shut-down/


> With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction. With Russia, Facebook and political advertisers' interests are aligned. Hence, action.

I think more conspiratorial thinking is necessary. With [fake] political ads, the ability of FB to usurp the ruling class in the US is apparent. In this case, it was Russia but in another hypothetical case it could be FB themselves (picking and choosing the propaganda err messages to show). Since the powers that be won't take kindly to that, action.

The powers that be don't actually care about privacy, in fact they actively don't want people to have privacy. Hence, inaction for CA.


I do this often:

1) Go to pcpartpicker, pick Storage, sort by Price/GB:

https://pcpartpicker.com/products/internal-hard-drive/#sort=...

3TB (for $57.50)

2) Go to Google, look up number of people in the United States:

https://www.google.com/search?q=us+population

325.7 million

3) Divide

You can store 9.2 kilobytes for every single person in the United States, for just $57.50.

9.2 kilobytes is actually a pretty decent amount of data. For comparison, this is Chapter 40 of Pride and Prejudice, at 9kb:

http://www.kellynch.com/e-texts/Pride%20and%20Prejudice/Prid...

The complete works of Shakespeare fit into 5 MB.

To store the equivalent of the complete works of Shakespeare for every person in the US would cost: $31,222.50.

Heck, I know small businesses that could afford that, let alone the mega-corp media conglomerates.

How much data does Verizon store about me? Comcast? Target? Visa?
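The back-of-envelope arithmetic above can be written out as a quick script (the drive price and population figures are the ones quoted in this comment, not current numbers):

```python
# Back-of-envelope: cost to store a dossier on every US resident,
# using the figures quoted above (2018-era pcpartpicker price, decimal TB).
DRIVE_PRICE_USD = 57.50        # one 3 TB drive
DRIVE_CAPACITY_BYTES = 3e12    # 3 TB
US_POPULATION = 325.7e6

# Step 3 of the comment: divide capacity by population.
bytes_per_person = DRIVE_CAPACITY_BYTES / US_POPULATION
print(f"{bytes_per_person / 1e3:.1f} kB per person")  # 9.2 kB per person

# Scale up: the complete works of Shakespeare (~5 MB) for every person.
shakespeare_bytes = 5e6
drives_needed = -(-(US_POPULATION * shakespeare_bytes) // DRIVE_CAPACITY_BYTES)  # ceiling division
total_cost = drives_needed * DRIVE_PRICE_USD
print(f"{int(drives_needed)} drives, ${total_cost:,.2f}")  # 543 drives, $31,222.50
```

So the $31,222.50 figure corresponds to 543 whole drives at $57.50 each.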


Just storing it is only a tiny fraction of the cost. It requires at least a datacenter full of systems + supporting staff to actually collect it and make use of it.


This line of thinking is extremely reductionist to the point of uselessness. Anyone who's built products at scale can tell you this. There are thousands of things you aren't thinking of.


Why is it useless?


Say you buy those drives.

What's the cost to implement GDPR? What about storing high definition photos? What does availability across regions look like? What does latency look like? What does backup look like?

Do you see where I'm going with this?


Say I am an unhinged individual with $120 and I buy that one (1) drive.

>What's the cost to implement GDPR?: I laugh and continue to compile a list of where all crossbow enthusiasts with type-O blood have been in the past 6 months.

>What about storing high definition photos?: Facebook can handle that. All I want to do is store a 9KB index file on every US citizen, because I believe I am a vampiric J. Edgar Hoover.

>What does availability across regions look like?: The drive sits in my mom's basement as I continue to scrape crossbowforums.org against the Red Cross donors list.

>What does latency look like?: I guess however well a 50-centimeter SATA cable does.

>What does backup look like?: I buy two drives at once for $115 and get free shipping on the second one.


"From now on, every advertiser who wants to run political or issue ads will need to be authorized. To get authorized, advertisers will need to confirm their identity and location. Any advertiser who doesn’t pass will be prohibited from running political or issue ads. We will also label them and advertisers will have to show you who paid for them. We’re starting this in the U.S. and expanding to the rest of the world in the coming months."

Interesting!


There are so many other ways to "influence" people's opinions by distorting what they see on their news feed, front page Google results, etc.

For instance (from Wikipedia):

> In April 2016, Correct the Record announced that it would be spending $1 million to find and confront social media users who post unflattering messages about Clinton.[1][4] The organization's president, Brad Woodhouse, said they had "about a dozen people engaged in [producing] nothing but positive content on Hillary Clinton" and had a team distributing information "particularly of interest to women".


Obama’s team bragged that Facebook was actively helping them in 2012. So we can assume that political ads that fit the narrative will be rubber-stamped.


> There are so many other ways to "influence" people's opinions by distorting what they see on their news feed, front page Google results, etc.

... and Hacker News, Reddit, Twitter, etc.

When I'm discussing politics with friends on Facebook, at least I know I'm talking to real people whom I've met in person. This isn't true with most other platforms. (Except some followers on Twitter...)


Facebook has a really annoying habit of showing you things that your friends have liked or commented on, which may not be from real people...


There's a difference between targeting people who believe inaccurate things and presenting them with correct information, and finding people who believe inaccurate things and targeting them with even more fake news.

It's just rather difficult to distinguish between the two.


> There are so many other ways to "influence" people's opinions

But those ways didn't help Donald Trump get elected.


How do you know that?


This crafty term "political or issue ads". Is that a legal / enforceable term, or is this something that Facebook gets to decide?

Whatever it is, we're going to see that "ads which influence the public for political aims" are going to bleed juuuust on the other edge of that definition.

Also interesting that Facebook's response to this issue is to collect more data -- this time about "advertisers with specific political agendas", which seems like an interesting database to mine (although maybe not that hard to collect, idk).


> Is that a legal / enforceable term, or is this something that Facebook gets to decide?

In this context, Facebook decides. Broadly, the FEC has jurisdiction over defining and regulating "electioneering communications" [1].

[1] https://transition.fec.gov/pages/brochures/spec_notice_broch...


Nevermind that Facebook is precisely an “advertiser with specific political agendas”


Also, extremely scary. This means that if you want to take any political action, you need to entrust your most detailed personal identification to Facebook. And if Facebook is later served with a subpoena to disclose these data, they will be only too happy to oblige. That may not be too much of a concern in a country with a functioning democracy and strong judicial protections, but in countries where you can disappear for criticizing the government, it makes Facebook completely unfit for use by anyone but government propaganda outlets. It will also have chilling effects on speech even in the US: there are numerous examples of people being subjected to bullying and personal-destruction campaigns for publishing, or even just sponsoring, messages that some influential groups did not like. Of course, Facebook has no legal responsibility to support this, or any, kind of speech, but the elimination of this kind of speech from any public place hurts the democratic process much more than the minuscule influence of some bad actors.

Anonymous or pseudonymous political speech is very valuable for a robust democratic debate, and it is extremely sad - though not exactly surprising - that it is being eradicated under the guise of "transparency" and "protecting the elections from foreign influence" and all kinds of bullshit like that.


Was the problem political ads, though? I think a lot of the content shared across Facebook was meme-like images that received shares and likes organically after an initial push by bots and Russian Facebook users.


"initial push" being the operative term ... posts can be shared to a broad audience by giving FB money, which they did.


So boosting the post, but that wouldn't count as a political ad. I'd like to see what Facebook would count as a "political ad".

The top image here (http://nymag.com/selectall/2017/11/house-democrats-release-r...) is one of these Russia ads. However, I wouldn't call that a political ad, just a terrible meme that happens to be about Clinton and the election.


Obviously we don't know yet, but I would be absolutely shocked if they don't basically flag everything about well-known politicians as "political", and paying to boost something like that should fall under the new restrictions. It would be trivial for most large tech companies to make an automatic "political post" classifier at this point. The post "boosting" use case is a pretty easy target too because blocking someone from paying to boost something is fairly low-risk.
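To illustrate how low the bar is for a first cut, here is a deliberately crude keyword-based sketch of such a flagger (illustrative only; the term list is invented, and a real classifier would be trained on labeled posts, not a keyword list):

```python
# Crude keyword-based "political post" flagger (illustrative sketch only).
# A production system would use a trained text classifier with human review.
POLITICAL_TERMS = {"election", "senator", "ballot", "clinton", "trump", "congress"}

def looks_political(text):
    """Flag a post if any word overlaps the (hypothetical) political term list."""
    words = set(text.lower().split())
    return bool(words & POLITICAL_TERMS)

print(looks_political("Vote in the election on Tuesday"))  # True
print(looks_political("My cat learned a new trick"))       # False
```

Even something this naive would catch most boosted posts naming well-known politicians, which supports the point that flagging is easy; the hard part is the gray area around "issue ads."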


You'd be surprised at how influential memes are these days ... as ridiculous as it is to consider, they are absolutely political speech, and can be used as propaganda.


Is this a serious question?

Maybe ask why America is the only democratic nation with uncapped campaign spending.


America is also the only democratic nation with a First Amendment. In other democratic nations (like Britain) you can now be investigated, fined, or jailed for posting a video or retweeting or sharing something on Facebook. I somehow prefer the former.


And you can in America too.

"Jailed for a Facebook post:" https://www.theguardian.com/us-news/2017/may/18/facebook-com...


No, it's not jailed for a Facebook post; it's jailed for a Facebook post calling for an immediate, specific violent action against a specific person. If a mafia boss tells his capo "let's whack that mofo", it'd be very disingenuous to claim the mafia boss was prosecuted for speech rather than for organizing a murder. Obviously, free speech protections do not apply when the speech is part of a crime, either one that happened or one that is about to happen. Saying "let's burn this specific guy's house down" is well within the limits that should get the police's attention.

Even then, in this case, Peralta was not prosecuted. Which is probably OK if he was just extremely stupid and did not actually intend to proceed with burning the house down. But a specific violent threat is not the same as mere speech, and one should never present it as such.


You read his comment right? He publicly posted "...It's time to strike back. Let’s burn this motherfucker’s house down." About a police officer. It is not right to post that.


First amendment is just symbolic of course. Once your speech encroaches on enough profit, it will be brought down in no time by the same forces that prevent campaign finance caps.

In America you can also freely speak slander on the internet and the victim will have to live with it for life.


> Once your speech encroaches on enough profit, it will be brought down in no time by the same forces that prevent campaign finance caps.

It is fascinating that you do not recognize how contradictory your statement is. "Campaign finance caps" is the limitation of speech, so if "the forces" prevent it from happening, they are not shutting down any speech - they enable it. With that, I have never heard any of the organization opposing campaign finance caps ever shutting down any political speech. Can you name 2-3 cases?

I can name a case where a prominent politician, known for support of campaign finance limitations (and running an extremely well-financed campaign), tried to suppress clearly political speech against that politician, under the guise of campaign finance laws. You know this case as https://en.wikipedia.org/wiki/Citizens_United_v._FEC

> In America you can also freely speak slander on the internet and the victim will have to live with it for life.

Slander is spoken defamation; defamation in a persistent form is called libel. Both are actionable offenses in the US. Of course, proving libel is not easy. Nor should it be.


> Maybe ask why America is the only democratic nation with uncapped campaign spending.

Where on earth do you get an idea like that?

No, the US is not the only democratic nation with uncapped campaign spending, for any plausible definition of "democratic" and "campaign spending".


Actually since Citizens United this is effectively pretty much the case.

The caps in other countries might not be implemented as an umbrella dollar amount but as rules like no TV ads, or the dollar amount might apply only to certain items. Often it is a cap on donations, which effectively caps spending. At the end of the day, democracies generally must limit campaigning influence, because if they don't, they might end up with a skewed democratic process - possibly a reality show star as president.


> Actually since Citizens United this is effectively pretty much the case.

This is a gross misunderstanding of the Citizens United ruling.


Some more interesting tidbits from the testimony:

"For even greater political ads transparency, we have also built a tool that lets anyone see all of the ads a page is running [and are] creating a searchable archive of past political ads."

"In order to require verification for all of these pages and advertisers, we will hire thousands of more people."

Sounds like Facebook has a lot more work to do.


I don't see (yet?) how this directly solves the problem.

Let's say I'm a foreign intelligence agency and want to influence people through ads. Feels like I just need to set up a shell corporation based in the states. Once I'm authorized, I can load the creative offshore.


> I don't see (yet?) how this directly solves the problem.

It solves the problem of Facebook getting bad PR.

I'd be shocked if Zuck gives 2 shits about any of this, beyond thinking about how he can utilize facebook to get himself elected.


Lol, welcome to post-Citizens United America. Anyone with an LLC can legally spend unlimited money on an election. No need to hide foreign money - just spend $50 and make a new corp based in the US.


Another gatekeeper in the political arena. Just what we don't need. This is not a good outcome.


There are existing prohibitions to how foreigners may participate in US elections. This seems to make it easier for Facebook to make sure it's following those laws.

https://www.fec.gov/updates/foreign-nationals/


Maybe displaying ads should require a license.


If we can't even zone districts without controversy, I highly doubt we'll ever be able to distribute licenses in an effective manner.


What does "zone districts" mean?


It could keep out some honest actors but transparency in funding is important.


No, it is not. You don't have the right to invade the privacy of groups of people any more than I have the right to invade your privacy and inspect your bank accounts. Maybe you're a Russian troll? Maybe you are paid by Putin? Would you like to make your bank accounts public for everybody to ensure that is not the case? Probably not. Then why should other people who make political statements?


The sane and obvious solution is to cap campaign spending. Anything else, willful or not, is either corruption or mere oligarchy.


We've now seen what happens without gatekeepers and there's no way we can call that "better."


The electoral college is another gatekeeper, and many advocating for Facebook's better political advertising controls advocated for abolishment of the electoral college.


To the extent that the EC's purpose is to prevent any demagogue from capturing the attention of voters by telling them lies they want to hear, it's not functioning as intended, but it would likely be welcomed if it were.


What has happened? I'm not USian so I didn't follow the calamity that has befallen America.


You're literally commenting on an article about it.


I have to be honest, I read the GP as a political statement. Maybe conflated with the GGP.


I did too, but I tried to take the most charitable interpretation. I've had enough conversations with dang about engaging with political trolls recently so I'm trying this new thing where I'm being nice to people. It's harder than you might think.

I don't know if HN has taken any steps to combat the right-wing/Russian propaganda campaign that's impacting most US social websites, but it'd be naive to think the site hasn't been targeted.


Whatever Facebook decides to do they are still a gatekeeper...


If anything, we need more gatekeepers like this that are shedding light on the dark money used for political advertising.


This is something that should have been a day-1 feature, IMHO


I actually feel for anyone that buys ad space for politics on Facebook, as Facebook obviously doesn't handle it particularly well. Tons of times in 2016 I saw ads from people running for the state legislature of another state entirely (one I had never even been to or driven through). What value is an impression if the viewer couldn't vote for you even if they wanted to? I did actually comment on a couple of them wishing them good luck from a guy out of state (and noting that the ad showed up in my state, hoping they could recoup some of their advertising costs).


>Tons of times in 2016 I saw ads from people running for the state legislature of another state entirely

I think that's a Facebook targeting problem, not a problem with the people buying the ads.

Somehow Facebook has no idea where I am, and I get local ads for all kinds of businesses in random cities to which I've never been.

There's also the concept of extended targeting. For example, as an advertiser, you run Los Angeles ads in Palm Springs because a lot of people who live in or frequently visit Palm Springs also have homes/businesses in Los Angeles. Ditto for LA-Las Vegas, SFO-Reno, Chicagoland-Wisconsin, Minneapolis-Southern Manitoba/Western Ontario, Houston-Austin, San Antonio-Padre, Cincinnati-Carolinas, etc.


I probably didn't specify "they" well in "they obviously don't handle it particularly well" - I meant Facebook, sorry.

Of course, there may be controls the user has when creating ads that might have been misconfigured, I'm not sure, but I honestly only recall a few ads that were in my area so that I could, if inclined, act on them. The vast majority? The only way they would work is if they sold me on their platform so hard that I sold my house and bought into their district in their state.


> I actually feel for anyone that buys ad space for politics on Facebook as they obviously don't handle it particularly well.

Why? The world would be a better place with less pointless political advertising.


Because there are quite a few people at local levels, nationally (and internationally as well, I imagine), who run for various positions without the financial backing of a Koch or similar, and who deserve the same footing as well-financed candidates.

If we want to get special interests out of everything like we claim, then we need to have the platforms for people to get themselves out there to be both effective and cheap enough to do so.


>Why? The world would be a better place with less pointless political advertising

Who gets to define "pointless?"


Obviously, political advertisement you disagree with is "pointless" and the world would be a better place without it. </sarc>

More specifically, the filtering would be informed by the biases and prejudices of those who control the gates. As it always is.


Every once in a while society can and does use common sense; it's not a Boolean flag.


The UK already has this as a requirement for printed political material: https://www.labourprint.co.uk/single-post/2014/12/16/Your-Qu...

It doesn't apply to online ads, which has become a serious problem with the amount of money flowing into Brexit from illegally-anonymous sources.


I wonder how this will work with GDPR. To show paid ads, Facebook does not need to know the identity of the advertiser, so the advertiser should probably have a right to refuse to provide it under GDPR, right?


Does this include the half dozen or so things that my grandpa likes from "Conservatives for Canada" and the like?


Edit: I mistook this from being a government action, not a private company action. I totally agree a private company is allowed to restrict things in this way.

>Any advertiser who doesn’t pass will be prohibited from running political or issue ads.

How does this interact with the First Amendment? If I'm paying a local TV company to run an ad, wouldn't restricting my ability be a violation? What if I buy the local TV company and choose which ads are run?

Edit: What about non-political or non-issue ads that still have a political component? For example, if I were a billionaire wanting to stoke certain political divides, I could definitely create ads that still cause great controversy. For example, spend some money developing a bulletproof school outfit, and then advertise it heavily. It's a bunch of extra work and expenditure I wouldn't undertake if I could just run gun control ads, but if those were banned, are you going to ban any advertisement for merchandise tied to political issues?


A private advertising medium (Facebook, Twitter, a TV station, etc.) is not responsible for upholding your First Amendment right to free speech. It is a private entity that gets to decide what gets published on its platform.

The first amendment only frees your speech from state suppression.


Yeah, upon rereading it I realize it was speaking from the point of view of Facebook. I thought it was speaking from the point of view of a government requirement, as in suggesting what should be required by law. That one was my mistake.


They are a private entity that get to decide what gets published on their platform

Except when they want to hide behind the legal defence of “common carrier”. Can’t have it both ways.


> How does this interact with the First Amendment?

It doesn't interact with the First Amendment at all. Private corporations can make their own rules about what speech they will and will not allow. The First Amendment only deals with the government's ability to censor individuals, not a private entity's policies regarding allowable communications. Most Americans don't seem to understand this.


Interestingly, this was not always the case in the United States.[0]

[0]: https://en.m.wikipedia.org/wiki/Equal-time_rule


It's noteworthy that this rule existed for broadcast networks, under the rationale that they were entrusted users of a scarce shared/public resource (slices of the radio spectrum). Cable distribution of television doesn't involve a public resource and was treated differently.


There's, however, a broader value behind the First Amendment. While the FA only limits the government, the values of free speech and robust democratic debate are not limited to the government. Legal enforcement and government coercion can only go so far, and so can governmental support. If society wants a working democracy, the FA - or other free speech protections - is a necessary condition but not a sufficient one. Free speech should also be supported by private actions and the common values informing them. Otherwise it cannot survive, even without direct governmental coercion - it would just be eliminated by other means. The FA is not the source of those values; it is a consequence of them, and thus limiting the discussion to the FA ignores its wider meaning. I think many Americans understand this very well, and a narrow legalistic view of the issue misunderstands their position.


If Americans were invoking the spirit of the First Amendment rather than appealing to a misunderstanding of its limits, then why not appeal to other precedents? By your logic folks should be invoking William and Mary's Bill of Rights rather than its American successor.

> I think many Americans understand this very well, and thus resorting to narrow legalistic view of the issue misunderstands their position.

You're being disingenuous here or at the very least you put way too much faith in the average American's knowledge of basic civics / their own government. I've had to point out that the First Amendment doesn't apply to private employment numerous times in the past (i.e., this is not my first comment) in situations where the other commenter literally did not understand the Constitution and the basic history surrounding it. The demographic reading Hacker News is not at all indicative of the U.S. population at large and usually consists of college-educated, middle class knowledge workers.

[edit] Clearly the original parent does understand the First Amendment.


FA is by far the most famous embodiment of the concept.

> You're being disingenuous here or at the very least you put way too much faith in the average American's knowledge of basic civics / their own government.

You don't need to know the details of how government functions to value free speech. Valuing free speech is one of those natural values most people - at least in American culture - embrace by default, without much need of being educated on the details.

> [edit] Clearly the original parent does understand the First Amendment.

That very well may be (though OP recognized their error in an edit), but the comment I responded to talked about "most Americans", not just one online commenter.


People are saying it isn't affected by the first amendment, but it seems pretty gray to me: Facebook is doing this because they need to sell a good story to congress. If they don't have that story, congress seems likely to bring the hammer down on them.

Obviously the threat is implied. But can Congress explicitly threaten a company? Is this all Congress needs to do to run around the First Amendment in the 21st century - threaten internet companies into doing it for them?


The First Amendment is about government restriction of speech.


It does not clash at all, except in cases where the courts decide that you effectively control all expression in a public service.

Just as it's not a First Amendment violation for you to refuse to let me paint a racist slogan on the side of your house, it's not a First Amendment violation for me to refuse to run your ads on my site.


The first amendment is not applicable in any way, shape or form


>We also learned about a disinformation campaign run by the Internet Research Agency (IRA) — a Russian agency that has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia. We found about 470 accounts and pages linked to the IRA, which generated around 80,000 Facebook posts over about a two-year period. Our best estimate is that approximately 126 million people may have been served content from a Facebook Page associated with the IRA at some point during that period. On Instagram, where our data on reach is not as complete, we found about 120,000 pieces of content, and estimate that an additional 20 million people were likely served it.

This part seems to be rather interesting. The number of people who viewed the content created by IRA in general is appalling.


Is this appalling? What's appalling to me is the number of people who can't tell the difference between fantasy and reality.

I'm sure governments are going to come down hard on Zuck for "allowing disinformation to be spread", but won't give a second thought about cutting education budgets.

It's hard for politicians to make it illegal to lie or to run a platform where people might lie. People are always going to lie and try to deceive others. It would be more effective for these politicians to actually educate their constituents. This is just one of many benefits of having educated citizenry...


> I'm sure governments are going to come down hard on Zuck for "allowing disinformation to be spread", but won't give a second thought about cutting education budgets.

Prevention is always cheaper than treatment, but much harder to get people to understand the value of because the threat isn't in front of them.


It can be easy for some people to forget that most people are generally spiteful, vindictive, petty jerks who can't look past the chips on their own shoulders and only rarely act kindly towards people who they want something from.

It's not that people don't deserve compassion and empathy, but don't ever expect positive things from others if you want to avoid spending your life perpetually disappointed.


Most people? No. But there are enough "spiteful, vindictive, petty jerks" out there that they need to be taken into consideration.


Well, I envy you your life experience but that doesn't look like the case from my viewpoint.


Educating people definitely helps. But even the most educated ones can give in to impulse and be triggered by false propaganda.


> It would be more effective for these politicians to actually educate their constituents.

Why would politicians want to educate their constituents, when it's so much easier to get reelected by uneducated ones?


Oh give me a break. What Russia did was "inbound marketing" to stump for Trump. It worked remarkably well. They ran a viral campaign - the exact thing social media is designed for. There are no safeguards against misrepresentation, bots, etc.

This is squarely Facebook's fault. They had completely vulnerable systems in spite of being a source for news and information for hundreds of millions of americans. If we didn't have the DMCA they should and would probably be shut down.


How's that pottery class that's getting cut going to help people tell the difference between fantasy and reality?

Is it possible in your philosophy for two "educated" people to disagree? On more than just personal preference like whether blue is better than green, but indeed on seemingly foundational issues like what is real and what is imaginary?


What's a theoretical pottery class got to do with this? Feel free to look up why teachers are currently on strike in parts of the US, or why the current US secretary of education is so controversial. This has been a problem for a good long while.

I'm not talking about educated people agreeing or disagreeing, I'm not sure where you're going with that. I'm talking about leaving people with the skills to tell the difference between something reported as fact and an opinion. It's about knowing how to verify, or debunk, claims presented without evidence. It's about being able to detect complete fabrication. After all, what "educated" people thought Pizzagate was real?


The theoretical pottery class is the practical result of what happens when education budgets are reduced. Something has to go, so it's typically the optional electives with higher-than-normal base costs and average-or-less college prep relevance like pottery, shop, dance, etc. Still, is this a great loss in terms of preparing people to distinguish reality? If not, why is cutting education spending such a big problem?

Basically I'm asking you to define "education" and why the solution to so many problems is simply more of it. After all, what "educated" people thought Christ was crucified and then resurrected?


> After all, what "educated" people thought Christ was crucified and then resurrected?

How many come to believe that having never heard it prior to completing their education?


An interesting question. When is education completed, anyway? But a potential way to answer the question is to look at conversion rates. Some digging would probably bring up some data for one religion in particular but at the very least we could establish an upper bound of ~87k in South Korea for that religion, granting the wrong assumption of all current members being converts post-education https://www.mormonnewsroom.org/facts-and-statistics/country/... Still, I'd bet it's > 0.

Ben Casnocha wrote: "Rule of thumb: Be skeptical of things you learned before you could read. E.g., religion."


You can convince a non-zero number of people of pretty much anything.


The theoretical pottery class is the practical result of what happens when education budgets are reduced.

Ok, you're arguing on a false premise then. When education budgets are reduced, kids don't have up to date textbooks. Classrooms don't have proper supplies and too many kids get crammed into too few classrooms, which is not a situation conducive to teaching or learning.


I don't see how it's an argument from a false premise given that I'm just asking questions and hoping for some conversation, not making a formal argument yet. Still, suppose you are inferring an argument I'm making ulteriorly through my questions, and have inferred that I've argued "reducing budgets implies only such classes are cut", then sure, that's technically false. But I'm not making that argument, ulteriorly or otherwise. In my followup I thought I sufficiently clarified that things like pottery and other classes being cut are typical examples of what goes, as something must, but of course not the only things. (And of course necessarily, if such things are already gone, something else must go, like small class sizes.)

This is a whole side-show commentary however. My main line of questioning remains: how do you define education and in what way does lacking it matter for the purposes you claim to care about (being able to distinguish reality and fantasy)? Do you want to even venture at broaching the religious question? I'm ok with leaving that to the side.

Maybe we can start with the downsides you've extracted out. What subjects are the important ones for which kids must have up to date textbooks in-class, the lack of which will hurt them in distinguishing reality and fantasy? What supplies are proper? What are some objective impacts to learning something when there are student-teacher ratios of 10:1 vs 20:1 vs 50:1? Does the extent of the impact matter by subject? (For the last I'd bet yes given many well funded universities have ratios in the hundreds for certain courses with apparently few if any seriously ill effects.)

(To head off another possible point of miscommunication, I don't dispute that nebulous unfactored "education" can have more benefits than just distinguishing reality and fantasy, and that various positive aspects might not impact that particular aspect at all; I just want to target that particular one for what you'd like to focus on when "more education" is pumped in, either by the schools that are having their budgets reduced or by the politicians themselves. Maybe that's another form of the question to ask instead: how specifically do you expect the ideal politician to educate their constituents (and what's the ideal educator:educatee ratio)?)


Does view mean seeing in your feed or actually clicking an outbound link?


> In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.

Anyone who was in a FB sales meeting when OpenGraph was launched knows this is a very calculated understatement. I've heard them explicitly sell the information of an entire user's friends list to anyone willing to pay.


>I've heard them explicitly sell the information of an entire user's friends list to anyone willing to pay.

So does FB actually sell data to third parties? I've seen many on here say that is not the case.


This is really the key thing to understand why FB is so much more evil than the other companies it is often lumped together with. They don't just use your data to target ads. They will literally just sell your data to anyone who waves a dollar under their noses.


> I've heard them explicitly sell the information of an entire user's friends list to anyone willing to pay.

Source? If this were true and they'd do it for anyone willing to pay, where can I give them money for user's personal data?


There is no source, because it's bullshit. FB has never sold access to their API. Access to the friend's data permission, like all other permissions, was free.


> "It’s not enough to just connect people, we have to make sure those connections are positive. It’s not enough to just give people a voice, we have to make sure people aren’t using it to hurt people or spread misinformation."

And be sure to report your fellow citizens for reeducation when they spread "negativity" and "misinformed opinions".


Before anyone says Facebook's testimony is just fluff: we're a business that relies on Facebook's API to monitor activity, and we've been severely impacted. We can no longer get data as before - it's having a large effect on our business. So we definitely feel that action is being taken.


Would you be able to explain what you used to do and have access to and what has changed? It would be interesting to hear an advertisers view.


We no longer can get data as before - it's having a large effect on our business

What sort of honest business depends on the Facebook API?


We were scraping data for a client for public events like music shows, businesses' special days, etc., and that's been severely restricted; approval of new apps is apparently on hold. Again, this is just getting the date/location/ticket URL for a public FB page event that presumably businesses want people to know about - no private data about attendees/humans at all.
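For the curious, the kind of call involved is just the public events edge of the Graph API. A rough sketch of what such a scraper looks like (the page id, access token, and API version below are placeholders, and the response parsing assumes the documented event fields of the era - this is an illustration, not our actual code):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL; v2.12 stands in for whatever version is current.
GRAPH = "https://graph.facebook.com/v2.12"

def extract_events(payload):
    """Flatten a /{page-id}/events response into simple records.

    Only public metadata is kept: event name, start time, city, ticket link.
    """
    records = []
    for ev in payload.get("data", []):
        loc = (ev.get("place") or {}).get("location") or {}
        records.append({
            "name": ev.get("name"),
            "start_time": ev.get("start_time"),
            "city": loc.get("city"),
            "ticket_url": ev.get("ticket_uri"),
        })
    return records

def fetch_page_events(page_id, access_token):
    """One authenticated GET against a page's public events edge."""
    qs = urllib.parse.urlencode({
        "fields": "name,start_time,place,ticket_uri",
        "access_token": access_token,  # app or page token; a placeholder here
    })
    with urllib.request.urlopen(f"{GRAPH}/{page_id}/events?{qs}", timeout=10) as resp:
        return extract_events(json.load(resp))
```

The point being: everything requested is data the page owner published precisely so the public would see it.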


That seems like a legitimate use case.

(it was a genuine question!)


I am not greatly concerned with Facebook's privacy policy. People voluntarily give up their personal information in order to use this free service.

I am more concerned about his identification of "fake news" and "hate speech" as issues without any clarification. These are both subjective descriptions with the potential for damaging abuse, which we are already seeing with the banning of Diamond and Silk from Facebook.

In my opinion, Facebook, Google, and other social media platforms should be required to be content neutral and its users given first amendment protection (with its concomitant limits). This would require federal legislation.


I get a chill down my spine any time someone uses the phrase "hate speech". Who gets to decide what qualifies? The first amendment must be protected at all costs.


Whoever's exploiting the network effects decides what qualifies, and the First Amendment doesn't apply to them.

It's heartening that OStatus (GNU Social, Mastodon, Pleroma, etc.) are getting some uptake, but existing federated networks are even more poorly equipped to compete with Facebook than with Twitter and Tumblr. The nice thing about Facebook is its privacy settings -- I can, say, post about going to Pride and make sure no one in my family sees it.

The problem is, Facebook could set up its rules so that campaign messaging for candidates they dislike just happens to violate them, and campaign messaging for candidates they like just happens not to. There's a lot of potential for abuse of power here.

It seems like a lot of people aren't worried about that, because they figure the targets of the current uses of this power really need to be suppressed. But there's no reason this power can't be directed elsewhere. I bet there are people who thought it was OK to fire us for being gay who are now pretty mad that people can be fired for donating to Prop 8.


There's no good stopping point. Britain recently convicted a comedian for putting up a youtube video where he trained a dog to raise his paw (like a Nazi salute) when he said "gas the Jews". Poor taste? Yes. Crime? Shouldn't be one.


Seems weak, I hope some of the representatives have read Tim Wu.

For example when Zuck says:

>My top priority has always been our social mission of connecting people, building community and bringing the world closer together. Advertisers and developers will never take priority over that as long as I’m running Facebook.

This is either misleading or a lie. Facebook is a corporation; what Zuckerberg wants to do as an individual isn't even relevant. What's relevant is what the corporation "wants" to do to increase profit. So far that has been connecting people and building communities, then taking that information and selling it to developers and advertisers. The phrasing places community and monetization in competition, but Facebook's model isn't one of competition between them; its model is the monetization of community. So the testimony never speaks to the fundamental problem of having advertisers as any level of priority here.

Also to say that Cambridge Analytica abused the system sidesteps the issue of whether it was in Facebook's interest to allow this data to be collected and then misused. Cambridge Analytica purchased data which made Facebook the most attractive advertising platform for them because of the targeting they could do. That is good for Facebook's bottom line. They also don't address the total amount of information that may be out there, floating through advertisers outside of Facebook's control right now. Despite new restrictions how much about me is already out there? That's what I want to know.

Overall, it's basically what I expected but I really hope there are a few representatives who can set aside the specifics of this one incident and attack the wider notion of Facebook's purpose.


> Facebook is a corporation, what Zuckerberg wants to do as an individual isn't even relevant

Zuck is a majority shareholder, so what he wants to do as an individual can actually be fairly significant here.


Fair. Perhaps the most salient criticism is that Zuckerberg's top priority is connecting people because his company is designed to use those connections in order to sell lots of advertising.


Advertising is the best way to pay for the servers, lawyers, developers, office space, and everything else FB needs, while keeping the cost for users low.

I believe they are investigating options for letting you pay for FB to turn off advertisements, however.

In the end, it's hard to know for sure what Mark's true motivations are, just as it is for anyone else. The only person who knows Mark's true intentions is Mark. That said, I personally believe him. When I listen to him talk about what he wants to do with Facebook, it seems obvious to me that he has good intentions at heart.

He's about my age, and when I was young the internet let me communicate with people from different areas of the world, different backgrounds, and helped me expand my own understanding of humanity. The internet made me a better person -- a more tolerant, more thoughtful, less prejudiced person. I hoped that the internet could facilitate the same personal growth in the rest of humanity, too. By drawing people together from the disparate corners of the world online, we could become a more tolerant, more understanding species. Mark has said similar things.

Unfortunately, things haven't quite worked out as he, or I, hoped, it seems. =(


About the same age too, and you express well how I feel about the situation.

We are not done here though, the fight to use the internet to connect the world continues!


Facebook doesn't sell data, it sells ads. It's in Facebook's interest to prevent advertisers and developers from collecting and misusing data so they buy more ads.


Yes, but it's in Facebook's interest to have the best, most accurately targeted advertising platform. In this case the Facebook data CA obtained wedded it to Facebook's advertising platform, and that's what is in Facebook's interest. FB would love to release tons of data to all advertisers, because then those advertisers would advertise on Facebook. The pressure against this release is negative public perception. So FB's profit motive incentivizes it to walk a tightrope with our data in hand. That's the core problem.


Releasing that information means those advertisers no longer need Facebook to do the ad targeting. They can then use other exchanges to do targeting without paying Facebook anything.

It is in Facebook's interest to guard that information carefully.


Yeah, and sharing it with the political party it thinks deserves to win the next election; even if it otherwise guards it carefully.

There's a reason why we have the separation between the head of state, the Church, Congress and the federal courts.


@frgtpsswrdlame could you elaborate how Tim Wu is relevant here? I'm asking from a self-education perspective.


He's done a mini media blitz, which you can see here:

https://www.nytimes.com/2018/04/03/opinion/facebook-fix-repl...

https://www.npr.org/2018/03/27/597221954/facebook-previously...

https://www.pbs.org/newshour/show/mark-zuckerberg-promises-c...

Also there's this older piece from him here:

https://www.newyorker.com/business/currency/facebook-should-...

Basically Wu has a much more radical critique of facebook which is probably best explained in his own words (from the npr piece):

>I think the problem lies here. It's actually a very fundamental one, which is Facebook is always in the position of serving two masters. If its actual purpose was just trying to connect friends and family, and it didn't have a secondary motive of trying to also prove to another set of people that it could gather as much data as possible and make it possible to manipulate or influence or persuade people, then it wouldn't be a problem. For example, if they were a nonprofit, it wouldn't be a problem.

>I think there's a sort of intrinsic problem with having for-profit entities with this business model in this position of so much public trust because they're always at the edge because their profitability depends on it.


> selling it to developers

FB doesn't sell access to their API. Many other businesses sell access to their API. FB doesn't.

> Also to say that Cambridge Analytica abused the system sidesteps the issue of whether it was in Facebook's interest to allow this data to be collected and then misused.

It absolutely wasn't in FB's interest as this media firestorm indicates. Also, CA purchased data illegitimately from someone other than FB. FB did not sell your data to CA or anyone else. (FB does sell advertisements, and those ads can be targeted to specific types of users.)

> Despite new restrictions how much about me is already out there? That's what I want to know.

FB is coming out soon with a feature that will let you see if CA had access to your profile.


When you buy ads programmatically, and define those purchases with parameters instead of through a GUI, aren't you arguably buying access to an API?


Fuck their profits. Corporations can and are socially responsible at the expense of a few extra dollars.



It saddens me that IBM is setting the expectations so low for natural language processing that they're basically reducing it to a Myers-Briggs test.


Interesting. He really puts Facebook into the role of policing its users (and advertisers). He describes taking down thousands of fake accounts, investigating whether certain pages have connections with IRA and taking them down if so.

This will probably sound reassuring to legislators, but it pretty much permanently accepts the burden of responsibility for misinformation on the site. It sounds likely to force Facebook into a costly permanent arms race against every malicious political actor in the world. I wonder how regular users will get caught in the crossfire.

Not that I feel bad for FB here -- they make tons of money because they have the most captive attention of any entity in the world. Attention is valuable because it allows platforms to influence people's behavior toward certain actions. Advertisers are not the only ones who realize this, and FB has to take responsibility for all forms of influence on its site, not just commercial ads.


Actually, under Section 230, he doesn't bear liability for failures.

However SESTA and FOSTA have eroded that for "sex trafficking". I expect more exceptions to sneak in.


As shitty as Facebook is as a service, I don't think they should have any legal obligation to 'protect' users from their own neglect.

I don't use facebook, because I think a free-software based & decentralized social network would be the right way to do social networking, but if the rest of the people want to give away their info, fine by me.

If someone I know wants to give my info to fb, it's fine too, but he/she loses part of my trust.

I value the freedom to do whatever you want with the data you have more than the convenience of the government protecting whatever data I foolishly gave away to someone.


There's a point where "everyone for themselves!" doesn't quite cut it. The most obvious example is probably fraud. You could argue everyone must check whether a deal they get involved with is legit and accept the risks of being screwed. But that's not how it works since we must enforce a basic level of trust by law for society to function. A slightly less obvious but still rather well known example is addiction (drugs, gambling, etc).

The reason we elect politicians rather than deciding how to deal with big, social issues like that ourselves is because it's a full time job. Most people don't have the time and have no obligation to research "free-software based & decentralized social networks". They don't know about it and don't have the time or education to even know there's alternatives. We're so cynical about politics that it's hard to even trust politics to solve any of this since it's all so backwards and corrupt but I believe, because of the sheer impact it has on society, this is a political problem and needs to be solved through actual laws.


> The reason we elect politicians rather than deciding how to deal with big, social issues like that ourselves is because it's a full time job.

I agree that being a politician is a full time job, but it's not the job you appear to think it is. The job of a politician is to get reelected. Any good the politician happens to do is a side effect that you can't count on happening again.


It's the ideal job we're electing them to fulfill; our broken election system makes this very much not their job, but the statement is correct in theory.


> It's the ideal job we're electing them to fulfill

But we don't elect ideal politicians, we elect real ones. There has never been any such thing as a political system where the "ideal job" actually existed, and I don't see any prospect of such a system ever existing. So the specifications of the "ideal job" are irrelevant; we need to look at the real jobs that real politicians actually do.


This sounds awfully close to the "why would I care about surveillance, I'm not doing anything wrong" argument. The reason it's important for Facebook to be legally required to protect user data is because you have no choice in being part of that data. You, as an individual, can do quite a bit, but you can't prevent people from sharing their contact books, pictures, and location. With the ubiquity of Facebook, everyone has some sort of footprint on the site.

If the data is valuable and worthy of protection, and you have no ability to provide that protection, then it is reasonable for protection to be required from the layer that is removing the individual's agency.


You make it sound like the users aren't even on the Facebook site. Remember that the 'users' are using a Facebook service, a consumer product provided by Facebook. They are responsible for what the product is, and does, regardless of what the users think.


Do we need to keep reminding people that even if you never created an account, you have a facebook user associated with your real life identity.


No, in this case they are not.


> I don't use facebook, because I think a free-software based & decentralized social network would be the right way to do social networking, but if the rest of the people want to give away their info, fine by me.

I don't see how a decentralized and free social network wouldn't have any privacy issues. As long as you are putting your information online, it's ripe for attacks. If you have a friend in your decentralized network that uses a malicious app, that app now has access to his/her friends list and thus your information. CA got a ton of data even from users who didn't take their quiz, but rather through friends of people who took the quiz.

The best social network, IMO, is the one you have IRL.


> I don't think they should have any legal obligation to 'protect' users

They do have legal obligations if they're going to sell political ads.


Zuckerberg will tell the whole world they are sorry and that they will do everything in their power to make sure CA never happens again, and people will forget. But the issue is not CA abusing the access. The issue is that they have the data at all. And they can, and do, abuse it in the same manner as CA did. CA is just a good example of how much power you have with the information FB holds. And FB has more. FB is public enemy #1 on a global scale and should be shut down. They're corrupting the whole society, and this has to stop whether you like them or not.


I stopped reading at that paragraph. Zuckerberg is seriously out of his league.

>> It’s not enough to just connect people, we have to make sure those connections are positive.

Does he really think he can define what "positive" means on a platform hosting hundreds of millions of communications? There is no positive, there's human nature. Regulation by FB itself won't work; worse, it will be a tyranny. The walled garden of FB will mean gated psychology, sociology, etc. FB could be great if it were run by people who think about people, not shareholders...


All this says is they don't have control over their network and they're just bleeding out data everywhere. The primary takeaway especially with regard to the election tampering is that this (Facebook) is a huge, free, open tool to abuse and control elections.


Your credit data has been stolen, your health data has been stolen, your government records every word you say online, your "smart" fridge is DDoSing Wikipedia, and you worry about being unreachable by silly spam that may or may not affect your vote for one of two equally horrible politicians.


"This is me telling you what you want to hear, but I don't mean a word of it and I will continue to get away with whatever I can until I get caught at which time I will tell you what you want to hear again."


“Dumb fucks”


Why is the date set in the future? April 11?


These are the notes he will be presenting.


This really confused me, thanks for the clarification


That's when he actually testifies.


Oh, I thought these were minutes of something that actually happened.


Usually people submit their written testimony to the committee members ahead of time so members can study/fact check the information and respond with appropriate questions.


What exactly is this, and why is it dated 2 days from now?



It's good to see Zuckerberg taking swift and decisive action to try and plug the leaks related to Cambridge Analytica. I think that it is sorely needed, and it is probably too late for many on the Facebook platform.

However, what's more interesting to me are the disturbing details about the spread of political advertising on Facebook. It seems that elections were much more heavily data mined and influenced than before. A nation-state-level attack on elections seems plausible and highly achievable, and was carried out not just in the US but in France, Germany, and elsewhere.

I will be closely following whether the radical transparency measures proposed will have much impact given the upcoming elections in much more corruption prone countries such as India.


I am not an infosec expert, merely an end-user:

From my perspective, I would not call this action "swift and decisive", quite honestly: If he were making quick and decisive actions in the interest of privacy, then he would apply the GDPR features to every user.



Thanks - was not aware that he had replied. This concerns me though, because I think that FB has shown us, that any wiggle room they have - they will take it:

"Facebook says that some laws elsewhere in the world conflict with GDPR’s new laws for Europe so they can’t be extended everywhere, and that the interface for some of these tools may vary"

But in all fairness: I am running into this very same issue RE: GDPR in light of some other compliance laws from the EU and US.


You can't plug a data leak. It's too late. The data is out there in the wild and will be forever: https://slate.com/technology/2018/04/facebook-cant-clean-up-...


It does become less relevant, more so when the existence of the data itself is a major story.


I guess it depends what use somebody can make of the data itself. Presumably human psychology doesn't change very much, so you could still use it to train and validate models years or decades later.


Not to mention using it to support a status quo that prevents the information from aging so quickly, maintaining its relevance. News about rocks will remain popular if the spear is never invented.


Sure, but there's no reason to let the ship sink if it's not submerged already.


swift in response to public reaction maybe, they were aware of these issues years ago, they only started acting on them when they got air time. That is in no way a responsible stance.


Did anyone look at the pictures of him at the hearing? I am now certain Zuckerberg sent a robot instead of himself. You can tell by the lifeless emotion.


I am fascinated to watch his testimony. Regardless of what is submitted, it will be highly interesting to see him deliver these words and stand up to questioning.


This written testimony is offered in addition to his in person testimony, he won't repeat it.

That way Congress has both the information "for the record" and the opportunity to grandstand.


Chrome web store should be locking things down as well. But that's a bit more difficult.


> "Facebook is an idealistic and optimistic company. For most of our existence, we focused on all the good that connecting people can bring." - Mark Zuckerberg 2018

> "They trust me — dumb fucks." - Mark Zuckerberg 2010

Interesting placement of "most"


That second quote is from 2004, although it wasn't reported until 2010


Plus, he was a kid.


and FB zucks, always did and always will


Zuck: Yeah so if you ever need info about anyone at Harvard

Zuck: Just ask

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb fucks


Please, enough with this. The tedium is overwhelming.


[flagged]


Please don't take HN threads on partisan goose chases that nothing new can come of.

We detached this subthread from https://news.ycombinator.com/item?id=16794319 and marked it off-topic.


I should have made it clear in my comment that I did not have political intentions.

The Obama campaign innovated the use of social media data in elections. I wanted to know if the EFF had anything to say about it at the time. That's it.


Ah I see. Sorry for misreading you, but it's unfortunately almost always the correct interpretation, unless the comment was packed with enough flame retardant to disambiguate. Even that doesn't work much of the time.


I'm not sure of the relevance of your comment. It sounds like you're trying to take a total privacy/anonymity clusterfuck and turn it into some weird kind of Reps-vs-Dems crap.

The EFF, libertarians, and other privacy advocates have been warning about this utter disaster for many years. Everybody laughed about it. In fact, they all laughed at anybody who was brave enough to bring it up.

There are domestic political concerns here. It wasn't until both major political parties in the U.S. got good and stepped on that anybody seemed to really take the matter seriously. It remains to be seen if any of this concern will still be around 2 weeks from now.

My money says there will be a lot of heat and smoke, but no fire. At the end of the day, some Congressperson will come up with yet another Orwellian-named bill, probably the "Privacy Overarching Overall Personhood" bill, that will promise to fix everything and regulate social media. Through regulatory capture FB will bitch and moan to make appropriate gestures of servitude, then things will continue on just like they have been. Facebook isn't going anywhere. (And the problem isn't going away anytime soon.)


The sad thing about this is that you're probably exactly right. I don't think the politicians on either side of this truly care about our privacy and Facebook has never been good about privacy. The EFF and other advocates have long been consistent on the fact that FB was bad for your privacy and I never joined FB as a result. And the sad thing is, I know they still have quite a bit of data on me due to various relatives, etc.


I should have made it clear in my comment that I did not have political intentions.

The Obama campaign innovated the use of social media data in elections. I wanted to know if the EFF had anything to say about it at the time. That's it.


I don't think the comment you're responding to was made in good faith

https://en.wikipedia.org/wiki/Whataboutism


Maybe so.

I try to assume positive intent. Many times I end up looking like somebody who can't take a joke, but it's extremely difficult to sort out nuance by reading a brief sentence or two.


I think the effort at sincerity is a worthwhile one, so good on you.

Popular irony has so circled back on itself that it’s nearly lost all meaning. It seems like a waste of time, because if it isn’t funny then it’s only purpose is to humiliate or politicize.


I should have made it clear in my comment that I did not have political intentions.

The Obama campaign innovated the use of social media data in elections. I wanted to know if the EFF had anything to say about it at the time. That's it.


appreciate your genuine followup. As dang said, given that this exact comment has been made, more or less verbatim, as intentional whataboutism it's become my default assumption.

A frustrating part of whataboutism is that it works so well because of good-faith contributors still being able to cause the same derailment/distraction that a cleverly placed bad-faith comment can.


It's part of why I hate seeing that as a dismissal, it seems to only contribute to the damage rather than limit it.

Anyhow, back on the original question, I honestly don't remember the question of Obama's data mining getting much attention at all. There were a handful of laudatory articles, but most of the focus was on how Obama outmaneuvered Hillary via strong electoral strategies, rather than voter targeting. It is somewhat interesting that Hillary's losses both look very similar in those regards, though.

So it was noticed on some level, but it just didn't get much scrutiny that I can recall.


I disagree. Saying "this is off-topic/derailing" whether or not it's genuine is the only preventative measure. You can say "this is whataboutism, ask this elsewhere" and still engage with OP as if they were acting in good faith.


Yes. The trick is to identify what something looks like and then assume positive intent and carry on -- as I did in my post.

For the record, I shared several Obama/data stories with friends, specifically mentioning how this was the sign of a really bad thing. (Not the details of what they did, the concept of using deep personal data and friend networks as a way to monitor and persuade voters.)

I am especially concerned that the public can't seem to focus on this problem unless there are clearly set up good guys, bad guys, and some kind of simplistic emotionally-manipulative narrative. So, for instance, the current narrative is foreign powers using FB in a way most users never expected to (perhaps) deeply influence a U.S. election. For many news consumers, that's got a bunch of cartoon figures acting almost like villains in a movie.

Much more concerning to me is the fact that data, once collected, can go and do anything. Being too much for any one person to understand, we rely on programming to do things with it. In effect, we are creating systems that we ourselves do not understand. So how could we understand the downstream effects? For every Hollywood Standard Movie Plot news story, how many other bad downstream effects are happening that we don't know and might never figure out?

I know this sounds a little bit like arguing from ignorance, but my point is that these are systems about which we cannot reason. So Zuckerberg may have a noble cause and be on a mission to connect the world so that it can be a better place. In the process, the things he created could end up killing millions. And it might be a hundred years before we figure it out, if ever. (Likewise, he may actually cause more good than harm in the world -- and 20 years from now somebody steals the data and uses it to start a war killing tens of millions. Is that Zuckerberg's fault? With something like this, it's not simply a matter of "stealing". If I were creating and collecting H-bombs in my basement and somebody broke in, the crime involved is not simply "theft". Something much worse is going on.)


I've never once seen it help and I've moderated forums since the 90s.

People used to put up FAQs and just link those. Now it's like the direction of the conversation is supposed to be decided before even talking and there's no concept of dialog or discussion which is where all of the interactions that were interesting to me happened.


Do you think it's meaningfully different when the discussion is under active attack by bad actors?

How do you moderate to keep genuine propaganda techniques from successfully working on your discussion without calling them out?

I'm asking because I think your point is really good: it sucks that we can't follow the directions of a conversation organically, and I find it a bummer that I'm personally advocating for tactics that will reduce that organic development. So what are the techniques moderators and good-faith participants can apply, in your experience?


> Do you think it's meaningfully different when the discussion is under active attack by bad actors?

I defend my positions instead of attacking people based on largely unproveable accusations. Sure, there really are a lot of bots/shills/etc. but the times you get that wrong hurt people and turn them away.

> How do you moderate to keep genuine propaganda techniques from successfully working on your discussion without calling them out?

By defeating the argument instead of the person. The shill type never engages in real conversation anyhow, in general. They're like pelicans -- fly in, crap over things, fly out.

> So what are the techniques moderators and good-faith participants can apply, in your experience?

Smaller communities where you can actually recognize posters over time. Being able to give elevator pitch style summaries of points I want to make without having to resort to copypasta of 800 links the other person will never actually read and maybe brief FAQs if something just comes up too often.

But another part of it is to try to engage the other person, to converse with them. I'm not just here to say "accept everything I tell you or leave the forum!" but I'll actually engage with and try to understand the other person's point. I'm not perfect, there are things I can learn.

I have a friend who is dyslexic, but loves to argue. A lot of people treat him like he's some idiot troll even though he's highly educated because he can't spell to save his life. He has interesting ideas that are worthy of thought and constant frustration communicating them.

These days people are like "you're a troll!" and won't even talk any more. But sometimes the conversations that go off the rails actually venture into interesting territory.

But maybe I'm just weird because I can type really fast and compose something like this post in just a few minutes. I do think that might help a bit too, because I can rattle off words on the keyboard just about as fast as I can talk so it's not so bad to explain something for the 27th time in an hour.

Anyhow, thank you for listening.


I appreciate a lot of what you're saying about engaging but the success of trolling often lies in taking up your attention at the expense of other things

> By defeating the argument instead of the person.

When literally every thread in HN about GDPR/FB has a top-3 comment asking "What about Obama?" how does continuing to defeat that point actually discourage or defeat trolling? The question has been asked and answered, but that doesn't stop people, troll and genuine, from continuing to ask it ad nauseam.

At what point does 'defeat the argument' mean we have to entertain conversations about global warming potentially being a hoax, or Obama's birth certificate being fake, or any other number of obviously political and mostly bad-faith because 1 out of 100 commenters have 'genuine' interest in 'conversation' about it?


You always have to defeat the argument or people will see it succeed and be convinced of the opposite because they've never seen any good rebuttals. There are something like 9 lurkers for every poster and sometimes you're posting for them too.

You'll never stop trolling or spam whatever you do, anyhow.

For global warming, you can give a simple elevator pitch version about how we have data on a global rise in temperatures and while yes, it was really warm long ago, it's still bad for the people here now.

For Obama, well, what does it even matter at this point? I don't think anyone disputes his mother being a citizen so it's pretty much whatever at this point.

Regarding Facebook, well, there is a valid point that breaking the Facebook ToS probably isn't the thing we should be the most upset about, and we really do have to deal with the thousands of other companies who compiled massive databases of info whether or not they happened to break the Facebook ToS.

Then point out that, yes, privacy advocates have long been against Facebook hoovering everyone's data, including when Obama was president and people are freaked out now because 'privacy violation' didn't mean much to them before but after CA, it's starting to click.


No, because it was within the agreements/facebook policies of that time.

https://www.snopes.com/fact-check/obama-campaign-use-tactics...


Quoted from the linked article:

Although the Obama campaign in 2012 did target potential voters using information gathered from Facebook profiles, there were key differences. The Obama for America organization accessed voters’ Facebook information when they logged on to the campaign web site via Facebook. Obama supporters were given a permission screen in which they could approve or deny the request, which clearly came from the Obama campaign.

Although Obama for America did collect data on users’ friends, it was at the time in line with Facebook policy. A Facebook spokesperson told us both candidates Obama and Republican Mitt Romney had access to the same tools. In 2015, Facebook changed the rules so that apps could no longer target the friends of users who downloaded them.

In the case of Cambridge Analytica, information was gathered from users and given to a third party under false pretenses.


This would seem to raise the question here whether the big problem was breaking Facebook's ToS or the loss of privacy. I'm somewhat surprised to find people flagging the former as the bigger concern, but then again, this sort of thing is why I never created a Facebook account.


It is possible for two things to be wrong at the same time.
