I was going through some old news archives about Facebook and their privacy policies. I came across this dire EFF warning from December 2009:
>"The issue of privacy when it comes to Facebook apps such as those innocent-seeming quizzes has been well-publicized by our friends at the ACLU and was a major concern for the Canadian Privacy Commissioner, which concluded that app developers had far too much freedom to suck up users' personal data, including the data of Facebook users who don't use apps at all. Facebook previously offered a solution to users who didn't want their info being shared with app developers over the Facebook Platform every time one of their friends added an app: users could select a privacy option telling Facebook to "not share any information about me through the Facebook API.""
Well, it turns out EFF was correct and accurately predicted the unethical scenario of Cambridge Analytica siphoning data from Facebook users who didn't even take their quiz.
The bullet points of "fixes" that MZ outlined don't really address the fundamental problem. Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.
Zeynep Tufekci does a good takedown of Facebook's "14-Year Apology Tour".
It's a long-term, calculated, deliberate strategy to methodically abuse privacy with disastrous consequences, then when caught dead-to-rights, say "oops, sorry, made a mistake, we'll fix it".
This game has worked amazingly well for 14 years; will people fall for it again this time?
More like the Non-Apology Tour.
You can donate here: https://supporters.eff.org/donate
I have a data intensive startup and I would love to show we comply.
Facebook already declared that they won't extend their GDPR compliance globally. So if you're in the US, you're not protected by it.
Edit: I just read Wikipedia and another source, and both define it as applying to people within the EU. They do address the case where the company is outside the EU and how it would need to handle data for people within the EU.
That being said, I do not think giving money to the EFF is enough to get the necessary changes implemented. I am at a loss for what could be done to actually make change happen, and am open to suggestions or other links to take a look at.
I admire the libertarian sentiment on HN but I think it would be greatly beneficial to everyone to have more thoughtful, smart and balanced people running our Governments.
“Smart, thoughtful” people who are happier in the private sector is exactly who currently runs our governments. So your plan is already in action.
If you have time and money left over after that, the EFF isn't the worst place to spend them.
edit: looks like some payroll systems may take a cut. Be wary, or set up recurring donation directly.
This cut is not going to Gusto; it's taken out as part of the services provided by FirstGiving, their partner for handling the donations: https://support.gusto.com/hc/en-us/articles/220929687-Charit....
I wholly disagree. They don't have to disclose the data to anyone in order to use it to target ads. Their targeting system works by allowing advertisers to specify targeting criteria, and then using logic on Facebook's own servers to match users to targeted ads. They aren't selling or disclosing the data to anyone. People keep conflating the CA situation with the business of targeted ads. One has nothing to do with the other.
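The distinction the comment draws can be sketched in a few lines: in a server-side targeting model, the advertiser supplies only criteria and matching happens on the platform's machines, so profile data never leaves them. A minimal illustration (all names, records, and fields here are hypothetical, not Facebook's actual system):

```python
# Hypothetical sketch of server-side ad targeting: the advertiser supplies
# criteria; matching runs on the platform's side, and profile data never
# leaves it. Users and fields are invented for illustration.
users = [
    {"id": 1, "age": 34, "interests": {"cycling", "cooking"}},
    {"id": 2, "age": 22, "interests": {"gaming"}},
    {"id": 3, "age": 41, "interests": {"cycling"}},
]

def match_ad(criteria, users):
    """Return only the ids of matching users, never their profile data."""
    return [
        u["id"]
        for u in users
        if u["age"] >= criteria["min_age"]
        and criteria["interest"] in u["interests"]
    ]

# The advertiser learns that an ad reached some audience, but not who
# those people are or what else is in their profiles.
print(match_ad({"min_age": 30, "interest": "cycling"}, users))  # [1, 3]
```

The point of the sketch is that disclosure (the CA situation) and targeting (the ad business) are architecturally separable, which is the commenter's claim.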
You might, if you consider your personal interests to be consistently and permanently aligned with whichever ToS best serves the shareholder interests of the entity that controls the exploitation of the historical data and the ongoing surveillance capability.
The data is pretty much unregulated, and there's no reason for it to ever be deleted; given its size and value, it's likely to live on much longer than Facebook (the company/legal entity) as we know it today. If they got in financial trouble, the prudent thing would be to flog the lot to a data broker to be resold and repackaged indefinitely.
Facebook themselves have a long established reputation for doing whatever they like and apologizing only if and when they get caught, and doing the minimum possible to ward off regulation or churn.
Just like the use of the term “breach”, there’s a broader use of the term which makes sense given the expectations of a user-base that’s been kept in the dark for years (as opposed to technical people with an understanding of the industry).
For example, my contact list is not “private” if it’s exfiltrated from my phone without my knowledge or consent, to be used for whatever purpose makes most money to whoever controls it (currently Facebook), against a ToS which can change at any time without notice, with almost no legal protection, regulation, or oversight.
If you consider something so "private" that you would be upset that an algorithm on a secure server might use it to target an ad to you - after you read and agreed to exactly that behavior when signing up - perhaps you shouldn't have given it to them in the first place. At what point do you take personal responsibility for giving away this information you consider to be so private and valuable?
Informed consent seems like a reasonable place to start.
I don't understand why nobody is talking about the even bigger issue that Facebook (and Google and every other ad service) gets to see what people are visiting and when (articles, videos, porn, ...)!
Imagine the detailed profiles you can create with that amount of data...
It seems to me they have quite a lot to do with each other, in that Facebook has an enormous amount of valuable personal information about people (which in many cases was extrapolated without the target's consent, as opposed to explicitly given), and is entirely reckless about where this data ends up, as long as it isn't actually "public" as such.
...another word: NSA
Judging from , this wasn't any "mistake":
They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
This was a deliberate policy of allowing people "on their side" to access and use these data. Now, when it turned out people on the "other side" can do it too, it became a "big mistake" suddenly.
Do you have a citation for this? I've been trying to clarify the argument recently and am looking for other perspectives.
To use a wildly over-dramatic metaphor: this is what nuclear winter for data looks like. Sure, all the explosions are done, but the fallout will continue for quite some time.
Think of it like nuclear power: it has both advantages and problems, and it seemed inevitable until Three Mile Island, Chernobyl, and Fukushima killed it. What if we're watching the general public become aware of the downsides of the surveillance economy?
read in the voice of the South Park version of Tony Hayward.
By contrast, the same section under "Russian Election Interference" is well thought out. There's some hand-wavy stuff (e.g. "in the U.S. Senate Alabama special election last year, we deployed new AI tools that proactively detected and removed fake accounts from Macedonia trying to spread misinformation"). But requiring "every advertiser who wants to run political or issue ads...confirm their identity and location" and mandating the ads "show...who paid for them" is meaningful. That they're "starting this in the U.S. and expanding to the rest of the world in the coming months" is more encouraging. I'm also genuinely optimistic about their "tool that lets anyone see all of the ads a page is running" and "searchable archive of past political ads."
With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction. With Russia, Facebook and political advertisers' interests are aligned. Hence, action.
If they already disabled the API that CA was using, is "inaction" really the right word?
You don't find yourself testifying to Congress because of a single technical loophole. This happens when a series of failures occurs and goes unremedied. In some cases they may not even be seen, internally, as failures.
Zuckerberg is there to tell Congress "you can trust me." Congress wants him to say that publicly to gauge whether there's political will, amongst voters, to go after Facebook. Treating this as a narrow reaction to limited failures, as opposed to a general questioning of the viability of an internationally-scoped, ad-driven, politically-volatile social network, is what I expect Zuckerberg to do, and why I expect him to fall down.
If nothing else, it's a classic case of closing the barn doors after the horses are long gone. And Facebook's barn was built with every wall as an open door, and they've been slowly locking them up as people get outraged enough about things to demand it. But there are still tons of open doors.
This feels like a real change in public awareness to me. I’ve never seen this kind of real talk about privacy in the non technical press before, and it just keeps going and going. How can Facebook thrive as people become aware of just how crooked they are?
However, I wonder about the reliability of "PO Box" as proxy to political identity.
Or how FB is going to stop Mom-and-Pop retail pages from advertising false political messages ("2-for-1 sale! All proceeds go towards stopping the Trump-ordered seal beatings in Antarctica!")
Facebook knew about the Cambridge Analytica data harvesting way back in 2015. The true extent of it was only exposed to the public in March 2018, because a CA whistleblower talked to news outlets about it.
Therefore, Facebook had ~2 years before the negative stories were blasted everywhere.
Because, as we all know, Facebook only really starts trying to figure out how to solve a problem once it becomes a scandal.
I think more conspiratorial thinking is necessary. With [fake] political ads, the ability of FB to usurp the ruling class in the US is apparent. In this case, it was Russia but in another hypothetical case it could be FB themselves (picking and choosing the propaganda err messages to show). Since the powers that be won't take kindly to that, action.
The powers that be don't actually care about privacy, in fact they actively don't want people to have privacy. Hence, inaction for CA.
1) Go to pcpartpicker, pick Storage, sort by Price/GB:
3TB (for $57.50)
2) Go to Google, look up number of people in the United States:
You can store 9.2 kilobytes for every single person in the United States, for just $57.50.
9.2 kilobytes is actually a pretty decent amount of data. For comparison, this is Chapter 40 of Pride and Prejudice, at 9kb:
The complete works of Shakespeare fit into 5 MB.
To store the equivalent of the complete works of Shakespeare for every person in the US would cost: $31,222.50.
Heck, I know small businesses that could afford that, let alone the mega-corp media conglomerates.
How much data does Verizon store about me? Comcast? Target? Visa?
What's the cost to implement GDPR? What about storing high definition photos? What does availability across regions look like? What does latency look like? What does backup look like?
Do you see where I'm going with this?
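The arithmetic above can be reproduced in a few lines (the US population figure of ~325.7 million is an assumption, inferred from the quoted totals):

```python
import math

# Reproducing the back-of-envelope numbers from the comment above.
# Assumption: US population ~325.7 million (implied by the quoted figures).
DRIVE_TB = 3
DRIVE_COST_USD = 57.50
US_POPULATION = 325_700_000

# Bytes available per person from a single $57.50 drive.
bytes_per_person = DRIVE_TB * 1e12 / US_POPULATION  # ~9.2 KB

# Cost to store 5 MB (the complete works of Shakespeare) per person.
total_tb = US_POPULATION * 5e6 / 1e12           # ~1628.5 TB
drives_needed = math.ceil(total_tb / DRIVE_TB)  # 543 drives
total_cost = drives_needed * DRIVE_COST_USD     # $31,222.50

print(f"{bytes_per_person / 1e3:.1f} KB per person, ${total_cost:,.2f} total")
```

Running it recovers both headline numbers: roughly 9.2 KB per person from one drive, and $31,222.50 for a Shakespeare-sized dossier on everyone.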
>What's the cost to implement GDPR?:
I laugh and continue to compile a list of where all crossbow enthusiasts with type-O blood have been in the past 6 months.
>What about storing high definition photos?:
Facebook can handle that. All I want to do is store a 9KB index file on every US citizen, because I believe I am a vampiric J. Edgar Hoover.
>What does availability across regions look like?:
The drive sits in my mom's basement as I continue to scrape crossbowforums.org against the Red Cross donors list.
>What does latency look like?:
I guess however well a 50-centimeter SATA cable does.
>What does backup look like?:
I buy two drives at once for $115 and get free shipping on the second one.
For instance (from Wikipedia):
> In April 2016, Correct the Record announced that it would be spending $1 million to find and confront social media users who post unflattering messages about Clinton. The organization's president, Brad Woodhouse, said they had "about a dozen people engaged in [producing] nothing but positive content on Hillary Clinton" and had a team distributing information "particularly of interest to women".
... and Hacker News, Reddit, Twitter, etc.
When I'm discussing politics with friends on Facebook, at least I know I'm talking to real people whom I've met in person. This isn't true with most other platforms. (Except some followers on Twitter...)
It's just rather difficult to distinguish between the two.
But those ways didn't help Donald Trump get elected.
Whatever it is, we're going to see that "ads which influence the public for political aims" are going to bleed juuuust on the other edge of that definition.
Also interesting that Facebook's response to this issue is to collect more data -- this time about "advertisers with specific political agendas", which seems like an interesting database to mine (although maybe not that hard to collect, idk).
In this context, Facebook decides. Broadly, the FEC has jurisdiction over defining and regulating "electioneering communications".
Anonymous or pseudonymous political speech is very valuable for a robust democratic debate, and it is extremely sad - though not exactly surprising - that it is being eradicated under the guise of "transparency" and "protecting the elections from foreign influence" and all kinds of bullshit like that.
The top image here (http://nymag.com/selectall/2017/11/house-democrats-release-r...) is one of these Russia ads. However, I wouldn't call that a political ad, just a terrible meme that happens to be about Clinton and the election.
Maybe ask why America is the only democratic nation with uncapped campaign spending.
"Jailed for a Facebook post:"
Even then, in this case, Peralta was not prosecuted. Which is probably OK if he was just extremely stupid and did not actually intend to proceed with burning the house down. But a specific violent threat is not the same as mere speech, and one should never present it as such.
In America you can also freely speak slander on the internet and the victim will have to live with it for life.
It is fascinating that you do not recognize how contradictory your statement is. "Campaign finance caps" are a limitation on speech, so if "the forces" prevent them from happening, they are not shutting down any speech - they are enabling it. That said, I have never heard of any organization opposing campaign finance caps ever shutting down any political speech. Can you name 2-3 cases?
I can name a case where a prominent politician, known for support of campaign finance limitations (and running an extremely well-financed campaign), tried to suppress clearly political speech against that politician, under the guise of campaign finance laws. You know this case as https://en.wikipedia.org/wiki/Citizens_United_v._FEC
> In America you can also freely speak slander on the internet and the victim will have to live with it for life.
Slander is spoken defamation; defamation in a persistent form is called libel. Both are actionable offenses in the US. Of course, proving libel is not easy. Nor should it be.
Where on earth do you get an idea like that?
No, the US is not the only democratic nation with uncapped campaign spending, for any plausible definition of "democratic" and "campaign spending".
The caps in other countries might not be implemented as an umbrella dollar amount but as rules like no TV ads, or the dollar amount might apply only to certain items. Often it is a cap on donations, which effectively caps spending. At the end of the day, democracies generally must limit campaigning influence because if they don't, they might end up with a skewed democratic process, possibly a reality show star as president.
This is a gross misunderstanding of the Citizens United ruling.
"For even greater political ads transparency, we have also built a tool that lets anyone see all of the ads a page is running [and are] creating a searchable archive of past political ads."
"In order to require verification for all of these pages and advertisers, we will hire thousands of more people."
Sounds like Facebook has a lot more work to do.
Let's say I'm a foreign intelligence agency and want to influence people through ads. Feels like I just need to set up a shell corporation based in the states. Once I'm authorized, I can load the creative offshore.
It solves the problem of Facebook getting bad PR.
I'd be shocked if Zuck gives 2 shits about any of this, beyond thinking about how he can utilize facebook to get himself elected.
I don't know if HN has taken any steps to combat the right-wing/Russian propaganda campaign that's impacting most US social websites, but it'd be naive to think the site hasn't been targeted.
I think that's a Facebook targeting problem, not a problem with the people buying the ads.
Somehow Facebook has no idea where I am, and I get local ads for all kinds for businesses in random cities to which I've never been.
There's also the concept of extended targeting. For example, as an advertiser, you run Los Angeles ads in Palm Springs because there's a lot of people who live in or frequently visit Palm Springs who also have homes/businesses in Los Angeles. Ditto to LA-Las Vegas, SFO-Reno, Chicagoland-Wisconsin, Minneapolis-Southern Manitoba/Western Ontario, Houston-Austin, San Antonio-Padre, Cincinnati-Carolinas, etc...
Of course, there may be controls the user has when creating ads that may've been messed up, I'm not sure, but I honestly only recall a few ads that were in my area so that I could, if inclined, act on them. The vast majority? The only way it would work is if they sold me on their platform so hard that I sold my house and bought into their district in their state.
Why? The world would be a better place with less pointless political advertising.
If we want to get special interests out of everything like we claim, then we need to have the platforms for people to get themselves out there to be both effective and cheap enough to do so.
Who gets to define "pointless?"
More specifically, the filtering would be informed by the biases and prejudices of those who control the gates. As it always is.
It doesn't apply to online ads, which has become a serious problem with the amount of money flowing into Brexit from illegally-anonymous sources.
>Any advertiser who doesn’t pass will be prohibited from running political or issue ads.
How does this interact with the First Amendment? If I'm paying a local TV company to run an ad, wouldn't restricting my ability be a violation? What if I buy the local TV company and choose which ads are run?
Edit: What about non-political or non-issue ads that still have a political component? For example, if I was a billionaire wanting to cause some certain political divides, I could definitely create ads that still cause great controversy. For example, spend some money developing a bullet proof school outfit, and then advertise it heavily. It is a bunch of extra work and expenditure I wouldn't do if I could just run gun control ads, but if those were banned are you going to ban any advertisements related to any merchandise that is related to political issues?
The first amendment only frees your speech from state suppression.
Except when they want to hide behind the legal defence of “common carrier”. Can’t have it both ways.
It doesn't interact with the First Amendment at all. Private corporations can make their own rules about what speech they will and will not allow. The First Amendment only deals with the government's ability to censor individuals, not a private entity's policies regarding allowable communications. Most Americans don't seem to understand this.
> I think many Americans understand this very well, and thus resorting to narrow legalistic view of the issue misunderstands their position.
You're being disingenuous here or at the very least you put way too much faith in the average American's knowledge of basic civics / their own government. I've had to point out that the First Amendment doesn't apply to private employment numerous times in the past (i.e., this is not my first comment) in situations where the other commenter literally did not understand the Constitution and the basic history surrounding it. The demographic reading Hacker News is not at all indicative of the U.S. population at large and usually consists of college-educated, middle class knowledge workers.
 Clearly the original parent does understand the First Amendment.
> You're being disingenuous here or at the very least you put way too much faith in the average American's knowledge of basic civics / their own government.
You don't need to know the details of government functioning to value free speech. Valuing free speech is one of those natural values most people - at least in American culture - embrace by default, without much need of educating them on the details.
>  Clearly the original parent does understand the First Amendment.
That very well may be (though OP recognized their error in an edit), but the comment I responded to talked about "most Americans", not just one online commenter.
Obviously the threat is assumed. But can congress explicitly threaten a company? Is this all congress needs to do to run around the first amendment in the 21st century - threaten internet companies to do it for them?
Just like it's not a First Amendment violation for you to refuse me to paint racist slogan on the side of my house, does not mean it's a First Amendment violation for me to refuse to run your ads on your site.
This part seems to be rather interesting. The number of people who viewed the content created by IRA in general is appalling.
I'm sure governments are going to come down hard on Zuck for "allowing disinformation to be spread", but won't give a second thought about cutting education budgets.
It's hard for politicians to make it illegal to lie or to run a platform where people might lie. People are always going to lie and try to deceive others. It would be more effective for these politicians to actually educate their constituents. This is just one of many benefits of having educated citizenry...
Prevention is always cheaper than treatment, but much harder to get people to understand the value of because the threat isn't in front of them.
It's not that people don't deserve compassion and empathy, but don't ever expect positive things from others if you want to avoid spending your life perpetually disappointed.
Why would politicians want to educate their constituents, when it's so much easier to get reelected by uneducated ones?
This is squarely Facebook's fault. They had completely vulnerable systems in spite of being a source of news and information for hundreds of millions of Americans. If we didn't have the DMCA, they probably should and would be shut down.
Is it possible in your philosophy for two "educated" people to disagree? On more than just personal preference like whether blue is better than green, but indeed on seemingly foundational issues like what is real and what is imaginary?
I'm not talking about educated people agreeing or disagreeing, I'm not sure where you're going with that. I'm talking about leaving people with the skills to tell the difference between something reported as fact and an opinion. It's about knowing how to verify, or debunk, claims presented without evidence. It's about being able to detect complete fabrication. After all, what "educated" people thought Pizzagate was real?
Basically I'm asking you to define "education" and why the solution to so many problems is simply more of it. After all, what "educated" people thought Christ was crucified and then resurrected?
How many come to believe that having never heard it prior to completing their education?
Ben Casnocha wrote: "Rule of thumb: Be skeptical of things you learned before you could read. E.g., religion."
Ok, you're arguing on a false premise then. When education budgets are reduced, kids don't have up to date textbooks. Classrooms don't have proper supplies and too many kids get crammed into too few classrooms, which is not a situation conducive to teaching or learning.
This is a whole side-show commentary however. My main line of questioning remains: how do you define education and in what way does lacking it matter for the purposes you claim to care about (being able to distinguish reality and fantasy)? Do you want to even venture at broaching the religious question? I'm ok with leaving that to the side.
Maybe we can start with the downsides you've extracted out. What subjects are the important ones for which kids must have up to date textbooks in-class, the lack of which will hurt them in distinguishing reality and fantasy? What supplies are proper? What are some objective impacts to learning something when there are student-teacher ratios of 10:1 vs 20:1 vs 50:1? Does the extent of the impact matter by subject? (For the last I'd bet yes given many well funded universities have ratios in the hundreds for certain courses with apparently few if any seriously ill effects.)
(To head off another possible point of miscommunication, I don't dispute that nebulous unfactored "education" can have more benefits than just distinguishing reality and fantasy, and that various positive aspects might not impact that particular aspect at all. I just want to target that particular one for what you'd like to focus on when "more education" is pumped in, either by the schools that are having their budgets reduced or by the politicians themselves. Maybe that's another form of the question to ask instead? How specifically do you expect the ideal politician to educate their constituents (and what's the ideal educator:educatee ratio?))
Anyone who was in a FB sales meeting when OpenGraph was launched knows this is a very calculated understatement. I've heard them explicitly sell the information of an entire user's friends list to anyone willing to pay.
So does FB actually sell data to third parties? I've seen many on here say that is not the case.
Source? If this were true and they'd do it for anyone willing to pay, where can I give them money for users' personal data?
And be sure to report your fellow citizens for reeducation when they spread "negativity" and "misinformed opinions".
What sort of honest business depends on the Facebook API?
(it was a genuine question!)
I am more concerned about his identification of "fake news" and "hate speech" as issues without any clarification. These are both subjective descriptions with the potential for damaging abuse, which we are already seeing with the banning of Diamonds and Silk from Facebook.
In my opinion, Facebook, Google, and other social media platforms should be required to be content neutral and its users given first amendment protection (with its concomitant limits). This would require federal legislation.
It's heartening that OStatus (GNU Social, Mastodon, Pleroma, etc.) are getting some uptake, but existing federated networks are even more poorly equipped to compete with Facebook than with Twitter and Tumblr. The nice thing about Facebook is its privacy settings -- I can, say, post about going to Pride and make sure no one in my family sees it.
The problem is, Facebook could set up its rules so that campaign messaging for candidates they dislike just happens to violate them, and campaign messaging for candidates they like just happens not to. There's a lot of potential for abuse of power here.
It seems like a lot of people aren't worried about that, because they figure the targets of the current uses of this power really need to be suppressed. But there's no reason this power can't be directed elsewhere. I bet there are people who thought it was OK to fire us for being gay who are now pretty mad that people can be fired for donating to Prop 8.
For example when Zuck says:
>My top priority has always been our social mission of connecting people, building community and bringing the world closer together. Advertisers and developers will never take priority over that as long as I’m running Facebook.
This is either misleading or a lie. Facebook is a corporation; what Zuckerberg wants to do as an individual isn't even relevant - what's relevant is what the corporation "wants" to do to increase profit. So far that has been pursuing connecting people and building communities, then taking that information and selling it to developers and advertisers. The phrasing places community and monetization in competition, but Facebook's model isn't one of competition between those; its model is the monetization of community. So the testimony never speaks to the fundamental problem of having advertisers as any level of priority here.
Also to say that Cambridge Analytica abused the system sidesteps the issue of whether it was in Facebook's interest to allow this data to be collected and then misused. Cambridge Analytica purchased data which made Facebook the most attractive advertising platform for them because of the targeting they could do. That is good for Facebook's bottom line. They also don't address the total amount of information that may be out there, floating through advertisers outside of Facebook's control right now. Despite new restrictions how much about me is already out there? That's what I want to know.
Overall, it's basically what I expected but I really hope there are a few representatives who can set aside the specifics of this one incident and attack the wider notion of Facebook's purpose.
Zuck is a majority shareholder, so what he wants to do as an individual can actually be fairly significant here.
I believe they are investigating options for letting you pay for FB to turn off advertisements, however.
In the end, it's hard to know for sure what Mark's true motivations are, just as it is for anyone else. The only person who knows Mark's true intentions is Mark. That said, I personally believe him. When I listen to him talk about what he wants to do with Facebook, it seems obvious to me that he has good intentions at heart.
He's about my age, and when I was young the internet let me communicate with people from different areas of the world, different backgrounds, and helped me expand my own understanding of humanity. The internet made me a better person -- a more tolerant, more thoughtful, less prejudiced person. I hoped that the internet could facilitate the same personal growth in the rest of humanity, too. By drawing people together from the disparate corners of the world online, we could become a more tolerant, more understanding species. Mark has said similar things.
Unfortunately, things haven't quite worked out as he, or I, hoped, it seems. =(
We are not done here though, the fight to use the internet to connect the world continues!
It is in Facebook's interests to guard that information carefully.
There's a reason why we have the separation between the head of state, the Church, Congress and the federal courts.
Also there's this older piece from him here:
Basically Wu has a much more radical critique of facebook which is probably best explained in his own words (from the npr piece):
>I think the problem lies here. It's actually a very fundamental one, which is Facebook is always in the position of serving two masters. If its actual purpose was just trying to connect friends and family, and it didn't have a secondary motive of trying to also prove to another set of people that it could gather as much data as possible and make it possible to manipulate or influence or persuade people, then it wouldn't be a problem. For example, if they were a nonprofit, it wouldn't be a problem.
>I think there's a sort of intrinsic problem with having for-profit entities with this business model in this position of so much public trust because they're always at the edge because their profitability depends on it.
FB doesn't sell access to their API. Many other businesses do sell access to theirs; FB doesn't.
> Also to say that Cambridge Analytica abused the system sidesteps the issue of whether it was in Facebook's interest to allow this data to be collected and then misused.
It absolutely wasn't in FB's interest as this media firestorm indicates. Also, CA purchased data illegitimately from someone other than FB. FB did not sell your data to CA or anyone else. (FB does sell advertisements, and those ads can be targeted to specific types of users.)
> Despite new restrictions how much about me is already out there? That's what I want to know.
FB is coming out soon with a feature that will let you see if CA had access to your profile.
This will probably sound reassuring to legislators, but it pretty much permanently accepts the burden of responsibility for misinformation on the site. It sounds likely to force Facebook into a costly permanent arms race against every malicious political actor in the world. I wonder how regular users will get caught in the crossfire.
Not that I feel bad for FB here -- they make tons of money because they have the most captive attention of any entity in the world. Attention is valuable because it allows platforms to influence people's behavior toward certain actions. Advertisers are not the only ones who realize this, and FB has to take responsibility for all forms of influence on its site, not just commercial ads.
However SESTA and FOSTA have eroded that for "sex trafficking". I expect more exceptions to sneak in.
I don't use facebook, because I think a free-software based & decentralized social network would be the right way to do social networking, but if the rest of the people want to give away their info, fine by me.
If someone I know wants to give my info to fb, it's fine too, but he/she loses part of my trust.
I value the freedom to do whatever you want with the data you have more than the convenience of the government protecting whatever data I foolishly gave away to someone.
The reason we elect politicians rather than deciding how to deal with big social issues like this ourselves is that it's a full-time job. Most people don't have the time, and have no obligation, to research "free-software based & decentralized social networks". They don't know about them and don't have the time or education to even know there are alternatives. We're so cynical about politics that it's hard to trust politics to solve any of this, since it's all so backwards and corrupt, but I believe that because of the sheer impact it has on society, this is a political problem and needs to be solved through actual laws.
I agree that being a politician is a full time job, but it's not the job you appear to think it is. The job of a politician is to get reelected. Any good the politician happens to do is a side effect that you can't count on happening again.
But we don't elect ideal politicians, we elect real ones. There has never been any such thing as a political system where the "ideal job" actually existed, and I don't see any prospect of such a system ever existing. So the specifications of the "ideal job" are irrelevant; we need to look at the real jobs that real politicians actually do.
If the data is valuable and worthy of protection, and you have no ability to provide that protection, then it is reasonable for protection to be required from the layer that is removing the individual's agency.
I don't see how a decentralized and free social network wouldn't have any privacy issues. As long as you are putting your information online, it's ripe for attacks. If you have a friend in your decentralized network that uses a malicious app, that app now has access to his/her friends list and thus your information. CA got a ton of data even from users who didn't take their quiz, but rather through friends of people who took the quiz.
The best social network, IMO, is the one you have IRL.
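The transitive leak described above (one friend's app install exposing your data) can be illustrated with a toy sketch. This is purely hypothetical code — the profile data, names, and `harvest` function are invented for illustration and don't correspond to any real API:

```python
# Toy model of the friend-permission leak: one consenting user exposes
# profile data belonging to friends who never installed the app.

profiles = {
    "alice": {"friends": ["bob", "carol"], "likes": ["hiking"]},
    "bob":   {"friends": ["alice"],        "likes": ["chess"]},
    "carol": {"friends": ["alice"],        "likes": ["jazz"]},
}

def harvest(consenting_user):
    """Collect the consenting user's profile plus every friend's profile."""
    collected = {consenting_user: profiles[consenting_user]}
    for friend in profiles[consenting_user]["friends"]:
        # The friends never granted any permission themselves.
        collected[friend] = profiles[friend]
    return collected

data = harvest("alice")  # only Alice took the "quiz"
assert "bob" in data and "carol" in data  # yet Bob and Carol are exposed
```

The point is that in any network (centralized or not) where a node's permission grant covers its neighbors' data, the app's reach grows with the friend graph rather than with the set of consenting users.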
They do have legal obligations if they're going to sell political ads.
>> It’s not enough to just connect people, we have to make sure those connections are positive.
Does he really think he can define what "positive" means on a platform hosting hundreds of millions of communications? There is no "positive"; there's human nature. Regulating that by FB itself won't work, and worse, it will be a tyranny. The walled garden of FB will mean gated psychology, sociology, etc. FB could be great if it were run by people who think about people, not shareholders...
Edit - Explanation: https://news.ycombinator.com/item?id=16795723
However, what's more interesting to me is the disturbing details about the spread of political advertising on Facebook. It seems that elections were much more heavily data mined and influenced than before. A nation state level attack on elections seems plausible and highly achievable, and was done not just in the US but France, Germany and elsewhere.
I will be closely following whether the radical transparency measures proposed will have much impact given the upcoming elections in much more corruption prone countries such as India.
From my perspective, I quite honestly would not call this action "swift and decisive": if he were taking quick, decisive action in the interest of privacy, he would apply the GDPR features to every user.
"Facebook says that some laws elsewhere in the world conflict with GDPR’s new laws for Europe so they can’t be extended everywhere, and that the interface for some of these tools may vary"
But in all fairness: I am running into this very same issue RE: GDPR in light of some other compliance laws from the EU and US.
That way Congress has both the information "for the record" and the opportunity to grandstand.
> "They trust me — dumb fucks." - Mark Zuckerberg 2010
Interesting placement of "most"
Zuck: Just ask
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[Redacted Friend's Name]: What? How'd you manage that one?
Zuck: People just submitted it.
Zuck: I don't know why.
Zuck: They "trust me"
Zuck: Dumb fucks
We detached this subthread from https://news.ycombinator.com/item?id=16794319 and marked it off-topic.
The Obama campaign innovated the use of social media data in elections. I wanted to know if the EFF had anything to say about it at the time. That's it.
The EFF, libertarians, and other privacy advocates have been warning about this utter disaster for many years. Everybody laughed about it. In fact, they all laughed at anybody who was brave enough to bring it up.
There are domestic political concerns here. It wasn't until both major political parties in the U.S. got good and stepped on that anybody seemed to really take the matter seriously. It remains to be seen if any of this concern will still be around 2 weeks from now.
My money says there will be a lot of heat and smoke, but no fire. At the end of the day, some Congressperson will come up with yet another Orwellian-named bill, probably the "Privacy Overarching Overall Personhood" bill, that will promise to fix everything and regulate social media. Through regulatory capture FB will bitch and moan to make appropriate gestures of servitude, then things will continue on just like they have been. Facebook isn't going anywhere. (And the problem isn't going away anytime soon.)
I try to assume positive intent. Many times I end up looking like somebody who can't take a joke, but it's extremely difficult to sort out nuance by reading a brief sentence or two.
Popular irony has so circled back on itself that it's nearly lost all meaning. It seems like a waste of time, because if it isn't funny then its only purpose is to humiliate or politicize.
A frustrating part of whataboutism is that it works so well: good-faith contributors can cause the same derailment/distraction that a cleverly placed bad-faith comment can.
Anyhow, back on the original question, I honestly don't remember the question of Obama's data mining getting much attention at all. There were a handful of laudatory articles, but most of the focus was on how Obama outmaneuvered Hillary via strong electoral strategies, rather than voter targeting. It is somewhat interesting that both of Hillary's losses look very similar in those regards, though.
So it was noticed on some level, but it just didn't get much scrutiny that I can recall.
For the record, I shared several Obama/data stories with friends, specifically mentioning how this was the sign of a really bad thing. (Not the details of what they did, the concept of using deep personal data and friend networks as a way to monitor and persuade voters.)
I am especially concerned that the public can't seem to focus on this problem unless there are clearly set up good guys, bad guys, and some kind of simplistic emotionally-manipulative narrative. So, for instance, the current narrative is foreign powers using FB in a way most users never expected to (perhaps) deeply influence a U.S. election. For many news consumers, that's got a bunch of cartoon figures acting almost like villains in a movie.
Much more concerning to me is the fact that data, once collected, can go and do anything. Being too much for any one person to understand, we rely on programming to do things with it. In effect, we are creating systems that we ourselves do not understand. So how could we understand the downstream effects? For every Hollywood Standard Movie Plot news story, how many other bad downstream effects are happening that we don't know and might never figure out?
I know this sounds a little bit like arguing from ignorance, but my point is that these are systems about which we cannot reason. So Zuckerberg may have a noble cause and be on a mission to connect the world so that it can be a better place. In the process, the things he created could end up killing millions. And it might be a hundred years before we figure it out, if ever. (Likewise, he may actually cause more good than harm in the world -- and 20 years from now somebody steals the data and uses it to start a war killing tens of millions. Is that Zuckerberg's fault? With something like this, it's not simply a matter of "stealing". If I were creating and collecting H-bombs in my basement and somebody broke in, the crime involved would not simply be "theft". Something much worse would be going on.)
People used to put up FAQs and just link those. Now it's like the direction of the conversation is supposed to be decided before even talking and there's no concept of dialog or discussion which is where all of the interactions that were interesting to me happened.
How do you moderate to keep genuine propaganda techniques from successfully working on your discussion without calling them out?
I'm asking because I think your point is really good: it sucks that we can't follow the directions of a conversation organically, and I find it a bummer that I'm personally advocating for tactics that will reduce that organic development. So what are the techniques moderators and good-faith participants can apply, in your experience?
I defend my positions instead of attacking people based on largely unproveable accusations. Sure, there really are a lot of bots/shills/etc. but the times you get that wrong hurt people and turn them away.
> How do you moderate to keep genuine propaganda techniques from successfully working on your discussion without calling them out?
By defeating the argument instead of the person. The shill type never engages in real conversation anyhow, in general. They're like pelicans--fly in, crap over things, fly out.
> So what are the techniques moderators and good-faith participants can apply, in your experience?
Smaller communities where you can actually recognize posters over time. Being able to give elevator pitch style summaries of points I want to make without having to resort to copypasta of 800 links the other person will never actually read and maybe brief FAQs if something just comes up too often.
But another part of it is to try to engage the other person, to converse with them. I'm not just here to say "accept everything I tell you or leave the forum!" but I'll actually engage with and try to understand the other person's point. I'm not perfect, there are things I can learn.
I have a friend who is dyslexic, but loves to argue. A lot of people treat him like he's some idiot troll, even though he's highly educated, because he can't spell to save his life. He has interesting ideas that are worthy of thought, and he faces constant frustration communicating them.
These days people are like "you're a troll!" and won't even talk any more. But sometimes the conversations that go off the rails actually venture into interesting territory.
But maybe I'm just weird because I can type really fast and compose something like this post in just a few minutes. I do think that might help a bit too, because I can rattle off words on the keyboard just about as fast as I can talk so it's not so bad to explain something for the 27th time in an hour.
Anyhow, thank you for listening.
> By defeating the argument instead of the person.
When literally every thread in HN about GDPR/FB has a top-3 comment asking "What about Obama?" how does continuing to defeat that point actually discourage or defeat trolling? The question has been asked and answered, but that doesn't stop people, troll and genuine, from continuing to ask it ad nauseam.
At what point does 'defeat the argument' mean we have to entertain conversations about global warming potentially being a hoax, or Obama's birth certificate being fake, or any number of other obviously political and mostly bad-faith topics, because 1 out of 100 commenters has a 'genuine' interest in 'conversation' about them?
You'll never stop trolling or spam whatever you do, anyhow.
For global warming, you can give a simple elevator pitch version about how we have data on a global rise in temperatures and while yes, it was really warm long ago, it's still bad for the people here now.
For Obama, well, what does it even matter now? I don't think anyone disputes his mother being a citizen, so it's pretty much whatever at this point.
Regarding Facebook, well, there is a valid point that breaking the Facebook ToS probably isn't the thing we should be the most upset about, and we really do have to deal with the thousands of other companies who compiled massive databases of info whether or not they happened to break the Facebook ToS.
Then point out that, yes, privacy advocates have long been against Facebook hoovering everyone's data, including when Obama was president and people are freaked out now because 'privacy violation' didn't mean much to them before but after CA, it's starting to click.
Although the Obama campaign in 2012 did target potential voters using information gathered from Facebook profiles, there were key differences. The Obama for America organization accessed voters’ Facebook information when they logged on to the campaign web site via Facebook. Obama supporters were given a permission screen in which they could approve or deny the request, which clearly came from the Obama campaign.
Although Obama for America did collect data on users’ friends, it was at the time in line with Facebook policy. A Facebook spokesperson told us both candidates Obama and Republican Mitt Romney had access to the same tools. In 2015, Facebook changed the rules so that apps could no longer target the friends of users who downloaded them.
In the case of Cambridge Analytica, information was gathered from users and given to a third party under false pretenses.