Hard Questions (fb.com)
103 points by panic on June 16, 2017 | 35 comments



No mention of what Facebook's business is anywhere on the page.

Here are some actual hard questions that would probably get you fired from the position of "Vice President for Public Policy and Communications" were you to post them to the official FB blog:

- can a company which makes money as an advertising platform and data broker really protect the privacy of its users when that is directly at odds with its fundamental mission?

- is there value for Facebook in figuring out how to remove fake news, or content from terrorists or dead users, from its platform, versus continuing with business as usual?

- is there actually any room for someone within the organization at Facebook to ask questions like this without violating implicit or explicit company norms about how Facebook is discussed, internally or publicly?


> - can a company which makes money as an advertising platform and data broker really protect the privacy of its users when that is directly at odds with its fundamental mission?

Good question, but I'm not sure it's one Facebook could answer themselves.

I do think there are ways to protect identity from wide distribution while still allowing the type of ad targeting that Facebook does; but I also think the cat is largely out of the bag at this point. Facebook is far from the only DMP (data management platform) out there, and most people outside of HN couldn't name a single one of them, so you could argue that Facebook has more incentive than most to be a good steward of that data because of the very public nature of its service.

> - is there value for Facebook in figuring out how to remove fake news, or content from terrorists or dead users, from its platform, versus continuing with business as usual?

Yes; the next phase in the information revolution is going to be centered around trust. If Facebook becomes widely associated with "fake news" in the public zeitgeist, that's bad. If Facebook becomes censored by governments because it is seen to be facilitating communication between terrorists, that's bad.

I think Facebook has recognized that some form of content governance is inevitable. But given the nature of partisan politics around the world, I'm not so sure we want to hand that content governance off to partisan governments either.

> - is there actually any room for someone within the organization at Facebook to ask questions like this without violating implicit or explicit company norms about how Facebook is discussed, internally or publicly?

For question 1, I don't think so -- but only because any answer they would come up with would be inherently biased.

For question 2, I think that's the exact conversation that the original post is trying to start. And yes, I think that there is a ton of value to the customer in providing a more curated content discovery experience. Whether it's of value to society remains to be seen.


> can a company which makes money as an advertising platform and data broker really protect the privacy of its users when that is directly at odds with its fundamental mission?

It's not directly at odds with its fundamental mission. If FB violates the trust of its users, and maybe even the law, then it loses customers and revenue. It's important for them to respect privacy insofar as it lets them avoid controversy. For example, if a news story reported that FB violated privacy laws and the company was then sued for it, that would be a huge loss for them.

Ultimately the real issue is that there are no laws concerning how social media users' data is protected. There are huge fines for leaking personal medical information, but only for medical institutions. This is effective: have you ever heard of a nurse leaking a celebrity's STD status? Similarly, there are fines for data leaks at any company that accepts credit cards.

However, social media sites can store whatever information they want with no protections, and that's considered fine. To me, this is a tremendous oversight by the government in failing to protect that privacy.


Well, in America.

Data protection in the EU is stronger and only getting stronger with https://en.wikipedia.org/wiki/General_Data_Protection_Regula... which will cover EU citizens' data stored worldwide.


Interesting, though I must admit I'm skeptical of the earnestness here (PR...). I'm at work so I couldn't flesh it out properly, but here's a cobbled-together list of topics that I would love official consideration of (hell, if they want, I would love to work with them on some of this):

1. Digital self-determination. Having the right to not have pictures of you made public (where is a setting that automatically UN-tags me and blurs my face in pictures I don't approve?) as a private person vs. having a right to talk about public individuals, companies, filtering etc. without impediment

2. Following on that, collaboration with law enforcement instead of deleting/censoring.

3. Retroactive denial of usage. This applies more in the EU, but how does facebook plan to address the idea/need to revoke usage of my personal data after it has been used/possibly sold (IIRC you don't sell data directly, but that might have changed since then)?

4. Data takeout: what about making it easy to download and archive not only all posted content, but also uploaded content related to the individual (tagged pictures, mentions)?

5. The status of Facebook as a transnational, as "infrastructure", and its relationship to the democratic and social systems in different countries

6. The ethics of curating and counter-curating topics in the feed/related content for power users

7. The role of the de facto largest dataset on human communications, its use for AI research, and the impact of AI and automation on society. Combined with that, the position Facebook would take on possible solutions like UBI

8. The impact of the "highlight reel" on depression and mental health. Studies have shown that heavy Facebook use correlates with depression.


At the very least, this is a good first step. Certainly, there are a lot more ethical questions to ask than are raised in this post (although I do think a lot of the questions you raise fall under the ones they suggest in their post; they just flesh them out more, e.g. (1) and (2) are intrinsically tied to the second-to-last question).

But culture is a serious thing, and managing it is hugely important for how a company handles big-picture issues, rather than simply letting them get tossed about in the waves. It definitely seems a bit late in the game for Facebook to start doing this (they passed startup scale years ago, and they've had some very big fuckups, cf. the News Feed experiments), but it says something that they're stepping up to the plate to acknowledge this now, especially in the public eye.

I wonder what the discussion about this is like internally. I haven't had a chance to ping anyone I know there yet about this.


For point one, you can set it so that every tag of you in a post (not comments) is queued for your review before it shows up on your timeline. You can untag yourself during review.

If you think Facebook should automatically recognise your face in other people's public posts (where you are NOT tagged) and blur THEIR pictures, I'm not sure that isn't just censorship in reverse.

On a related point, FB could come up with a system that publishes any image with tags of personal profiles only after all those tagged profiles have approved their tags, or removed themselves, but that might significantly delay the post and inconvenience the poster.
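
A minimal sketch of what that approval-gated publishing could look like, assuming hypothetical Photo and TagRequest types (this is illustrative, not anything Facebook actually exposes):

    from dataclasses import dataclass, field
    from enum import Enum

    class TagStatus(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        DECLINED = "declined"

    @dataclass
    class TagRequest:
        tagged_user: str
        status: TagStatus = TagStatus.PENDING

    @dataclass
    class Photo:
        owner: str
        tags: list[TagRequest] = field(default_factory=list)

        def is_publishable(self) -> bool:
            # Hold the post until every tagged person has responded;
            # declined tags are dropped rather than blocking publication.
            return all(t.status != TagStatus.PENDING for t in self.tags)

        def visible_tags(self) -> list[str]:
            return [t.tagged_user for t in self.tags
                    if t.status == TagStatus.APPROVED]

    photo = Photo(owner="alice", tags=[TagRequest("bob"), TagRequest("carol")])
    photo.tags[0].status = TagStatus.APPROVED   # bob approves his tag
    photo.tags[1].status = TagStatus.DECLINED   # carol removes herself
    assert photo.is_publishable()
    assert photo.visible_tags() == ["bob"]

The inconvenience is visible right in the model: the post stays unpublishable until the slowest tagged person responds.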


Even if you don't have a FB account, people can upload pictures of you and tag you in them. The tag won't be linked to an account, but your name will still be attached to the picture.


> Having the right to not have pictures of you made public (where is a setting that automatically UN-tags me and blurs my face in pictures I don't approve?)

The minute that Faceshmuck implements that, you'll find half of 4chan posting your pics along with Barbra Streisand's mansion. Technology trumps privacy, information wants to be free, yada yada yada.


I'm talking about the possibility of going to a party, somebody taking a picture (as is their right) and then uploading it and tagging me. That's what I would like the filter for. If they really care that much about connecting me to the party, sure; but as "casual privacy protection" it's about the same level as keeping your Facebook wall private.


I expect that the answers to these questions will be "whatever makes facebook the most money without drawing too much public distrust."

This is the company that performed psychological experiments on its customers without informed consent. The same company that worked with the government of Pakistan to EXECUTE a citizen for spreading 'blasphemy' on their platform.

Zuck seems to want to start getting into politics (kill me now), so probably the best way to combat Facebook's continuous moral disasters is to hold him personally responsible for them.


Let's not go overboard with those "psychological experiments". It wasn't like they were injecting you with radium. They simply ran different newsfeed algorithms and tested for correlated change in behaviour.
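
For what it's worth, what they ran is mechanically just an A/B test. A minimal sketch (the variant names and the engagement metric are invented for illustration, not Facebook's actual setup):

    import random
    from statistics import mean

    def assign_variant(user_id: int, variants=("control", "treatment")) -> str:
        # Deterministic per-user bucketing: the same user always lands
        # in the same variant.
        return random.Random(user_id).choice(variants)

    # Simulate 1,000 users; the "treatment" feed nudges engagement up slightly.
    results = {"control": [], "treatment": []}
    for user_id in range(1000):
        variant = assign_variant(user_id)
        engagement = random.random() + (0.05 if variant == "treatment" else 0.0)
        results[variant].append(engagement)

    for variant, scores in results.items():
        print(variant, round(mean(scores), 3))  # treatment averages ~0.05 higher

The ethical question is then about the metric and the manipulation, not the mechanism itself.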


It wasn't an experiment to see how well people recognize different animals at different times of the day. They were deliberately altering people's moods just to see if they could. That may not sound like much, but considering they did it to thousands of people, there's no telling the cumulative effect on people's mental health as a result of their poorly managed experiment.


Without consent, without any sort of internal ethics review.

The real problem is that they probably learned to just keep their mouths shut, not to stop doing those things. I'm sure it's all kosher by the TOS. But it's still the kind of action any university's IRB would stop, or at least require rigorous participant protections for, both to protect the university's brand and to treat people with respect.


so is A/B testing also immoral?


A/B testing is no more immoral than HTML; it's what you do with the tool that makes it moral or immoral.


I can't seem to find any info on Facebook working with the Pakistani government on that blasphemy case. Got any?


Sure: http://www.bbc.com/news/world-asia-39300270

"Facebook has agreed to send a team to Pakistan to address reservations about content on the social media site, according to the interior ministry."

But also

"But Facebook has not yet made any public comment about a delegation being sent to Pakistan."

I'm personally not inclined to trust Facebook on these matters.


> How can we use data for everyone’s benefit, without undermining people’s trust?

How about not collecting so much data? I really don't see how my social data benefits anyone except Facebook and advertisers.


> Facebook is where people... form support groups...

Geez. I hope not.

This brings up a "hard question" we need to consider: discretion.

Many categories of communication are modeled really well by modern technology, including social networks. However, there is a real dearth of discretion-oriented communication available.

Privacy concerns are obvious and being discussed, but we're not really discussing discretion as a healthy part of our lifestyles. Some things need to be shared, but only with particular people or groups. This is absolutely social communication but it's basically unaddressed by any social network I can think of. Group discussions that revolve around substance abuse, significant health issues, survivors of violent crimes, etc. Even less structured conversations with loved ones that are about sensitive issues: health problems, money problems, relationship problems, etc.

And it's worth mentioning that these goals seem to be at odds with the goals of our governments (surveillance) and our social networks (data collection and mining).

WhatsApp (closed-source encryption aside) helps a bit for one-on-one communication or permanent groups (like families), but doesn't lend itself to the conversations you might have with a counselor, priest, spouse, or support group.

It may be that "social networks" are entirely for public discourse, but they don't seem to be modeled that way, with "friends", "private" messaging, and so on. In the meantime, we have no real medium for this type of communication and I suspect it's actively hurting our culture and society.



Honestly this seems like the best we could hope for, not just from Facebook, but from any company. It seems honest and forthcoming. The conversations we're starting will be going on for decades. This page clearly recognizes that.

Anyone who thinks they have the one true answer to any of those bullet points is full of hubris.


It's a good start.

I hesitate strongly to say "it's the best", as the whole concept of starting such a dialogue is to improve the discussion, the discussion process, the understanding, and the discovery of such truths as might be found.

This is much better than I'd expected, and much like what I've hoped to encourage from Google, in whom I generally put more faith than Facebook, and in whom I've been increasingly disappointed.


Facebook is unprecedentedly widely used, global in reach.

It has become for many an essential service.

Facebook ( in many countries ) is a monopoly, unassailable competitively due to the size of its network.

In days of yore, such a huge monopoly utility would have been broken up, like Ma Bell in the 1980s, or nationalised, like rail, water, and electricity in the UK.

Consumer data protections are currently woefully insufficient.

I propose that such vast walled gardens should be curtailed for the public good.

Thus Facebook should be viewable without signup, and its data exportable and deletable in whole or in part.

Facebook should provide an API so that competing and extending services can be freely built on it, restoring competition, innovation, and the niche utilities it brings.

Consumers should be free to use whatever services and programs they choose on their own data, whether stored on Facebook's servers or downloaded and held privately.

Huge transnational data monopolies should not be walled gardens.


> Facebook should provide an API so that competing and extending services can be freely built on it, restoring competition, innovation, and the niche utilities it brings.

Facebook has zero incentive to do this voluntarily. It would be good for users (effectively turning social media into something like email) but terrible for Facebook's platform lock-in and all-important quarterly advertising revenue.

As an example of them going in the opposite direction, they've shut down XMPP in the past to force users to use their messenger exclusively[1] rather than 3rd party chat applications.

[1] https://news.ycombinator.com/item?id=9266769


> Facebook has zero incentive to do this voluntarily

I think the parent was suggesting some level of intervention and imposition. However, given the worldwide reach, which organisation could do this? I can't see how a single nation's government would be effective.

I wonder if Facebook could begin by unlocking data a fixed period after it has been posted? Assuming the value to Facebook of a posting erodes over time.
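
As a sketch of that assumption (the half-life and threshold here are numbers I made up, not anything Facebook uses):

    import math

    def post_value(age_days: float, half_life_days: float = 30.0) -> float:
        # Assume a post's value to Facebook halves every `half_life_days`.
        return math.exp(-math.log(2) * age_days / half_life_days)

    def is_unlocked(age_days: float, threshold: float = 0.05) -> bool:
        # Open up the data once its remaining value decays below the threshold.
        return post_value(age_days) < threshold

    print(is_unlocked(10))   # False: a 10-day-old post retains ~79% of its value
    print(is_unlocked(180))  # True: after ~6 half-lives, value is down to ~1.6%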


Your suggestion amounts to using Facebook as infrastructure, not directly using their user-facing services. The problem is that alternative (and open source) infrastructures have been developed, but nobody had the UI that Facebook has, so people used Facebook anyway.

Some like to argue that alternative infrastructures failed because "my friends weren't there so I couldn't use it", but that chicken-and-egg problem does not exist in small, tight communities such as makers and startups. Yet all those groups stayed on Facebook anyway, even after trying Diaspora et al. The common reasons were UI and features.


Yes but not quite.

I suggested that the consumer is not forced to use Facebook's user-facing services but would have a choice.

> all those groups stayed on Facebook anyway...

I don't fully accept your assertion that all these groups stayed on Facebook; many exist elsewhere.

You may well be right about some or most, but I have not seen this surveyed anywhere. Do you have a citation?


My guess at the answer is that effectively the public debate will be shaped as a compromise between ad peddlers, spooks, and people who have nothing better to do than complain on Facebook. I'd rather have terrorists in my news feed.


The orphaned words in that bulleted list are driving me crazy. Just throw an `&nbsp;` between the last two words in each line and this will never happen.
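
In case anyone wants to automate that, a quick sketch in Python (U+00A0 is the non-breaking space that `&nbsp;` produces in HTML):

    def prevent_orphan(line: str) -> str:
        # Glue the last two words together with a non-breaking space (U+00A0)
        # so they always wrap as a unit.
        head, _, tail = line.rpartition(" ")
        return head + "\u00a0" + tail if head else line

    print(prevent_orphan("spreading propaganda online"))
    # "spreading propaganda\xa0online"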


[off topic] I would really love to have one of those "save it for later" or "mark it important" buttons on SNs; I tend to keep wasting my time digging through endless posts and/or feeds to get back to what I briefly saw.


A workaround on several sites (G+ and Reddit come to mind) is to create your own archive -- a G+ Community or Collection, or a subreddit -- where you save items of interest. I'm not sure if there's an equivalent on FB as I don't use it.

This has advantages over the bookmarking suggestion in that it's done within the context of the system itself, which may make finding things somewhat easier: "Oh, I know that was on (FB|G+|Reddit ...)". On G+, search of Communities is vastly better than of Collections. On Reddit, subreddits themselves are searchable, whilst save lists are not.


Facebook has a "Save post" option; you need to expand the tiny arrow on the right side of the post. You can review all saved posts later, under Explore > Saved.


Browsers have had these things called "bookmarks" for a while now; Safari also has the "reading list" for exactly your use case.


A (hopefully brief) attempt at responses -- I'm working on a more detailed one.

Q: How should platforms approach keeping terrorists from spreading propaganda online?

Briefly: consider this from an epidemiological perspective. There are infectious agents, hosts, and vectors of propagation. In public health, a combination of factors is used to limit the spread of disease, with exceedingly high effectiveness: greater than that of all acute and therapeutic medicine, by a ratio of about 85 to 15. See Laurie Garrett's The Coming Plague.

Monitoring, inoculation, disruption, containment, elimination of breeding and development conditions, and avoiding strengthening resistance to treatment are all core elements.

The question of whose terrorist is whose freedom fighter also arises, as do questions over acceptable and unacceptable tactics in various forms of warfare.

Q: After a person dies, what should happen to their online identity?

This would be a very good thing to make a determination of whilst the person is still alive.

There is considerable prior art here; I strongly recommend researching the legal definition and practice of wills.

There's also a practice amongst librarians and academicians governing access to personal writings, journals, etc., with consideration for both the deceased and those still living who might be affected by revelations.

Q: How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?

Cultures vary tremendously in norms, and in what is considered acceptable or transgressive. Communication, online or otherwise, breaks down the barriers between such cultures.

One possible response is to perhaps resurrect at least some of those walls, at least in part. There's a notion from travel, "when in Rome...". There's also a trope of travel, of the ugly tourist -- British, American, German, of late, Japanese, Chinese, or Russian. Issues extend to both the traveller and the native.

The dislocation of online space in violating a sense of "whose space is this" is a severe one. That was amongst the more toxic elements of Google's exceedingly ill-conceived Anschluss of Google+ and YouTube. Not only were privacy norms (enshrined only a few years earlier in YouTube's own privacy guidelines) violated, but members of each community found themselves overwhelmed by "intruders" from the other.

De-globalising the community would seem a partial response.

Q: Who gets to define what’s false news — and what’s simply controversial political speech?

Briefly: Someone who's exceedingly good at it. And reasonably unbiased.

Non-briefly: this is among the fundamental philosophical dilemmas. There is considerable prior art, there are authorities, they should be consulted (and questioned). This is not a greenfield. Making a list of those authorities and references, sharing it, and the discussion, should help.

Epistemology, justice, the Scientific Method, the history of science (and where it has and hasn't succeeded, and at what rates), the history of free expression (and the limits placed upon it), including J.S. Mill (who did NOT coin the expression "the marketplace of ideas"; that was a free-market advocate, Francis Wrigley Hirst), and more.

Q: Is social media good for democracy?

Wrong question.

Every single change in communications and media has had profound impacts upon, and fundamentally changed, the societies in which they occurred. See Elizabeth Eisenstein, Marshall McLuhan, and others who've written on the social impacts of communications. And by every, I mean going back to speech itself, as well as writing, clay tablets, paper, print, radio, film, phonograph, television, the Internet, and mobile.

Facebook has to face the fact that it and Google are the two largest media institutions in all of history. Their reach is on the order of billions of people. Compare the most-published books ever: a few billion copies for the Bible and Mao's Little Red Book, 500 million for Don Quixote. Meanwhile, "Gangnam Style" has been viewed over 2 billion times on YouTube alone.

That is great power. Spider-Man on line 3 with a word about responsibility.

Social media is going to change democracy. Full stop. It may end it. It may only interrupt it, as radio did in spurring on fascism. We want to look to history, psychology, sociology, anthropology, economics, communications studies, information theory, and more, to get a sense of where the hell this is headed. Of late it's been more than a bit concerning.

Q: How can we use data for everyone’s benefit, without undermining people’s trust?

Wrong question. It presumes the answer, then poses the question.

Briefly: 1) respect people's boundaries, generally and 2) consider the public welfare, overall.

Non-briefly: this is among the fundamental philosophical dilemmas. There is considerable prior art, there are authorities, they should be consulted (and questioned). This is not a greenfield. Making a list of those authorities and references, sharing it, and the discussion, should help.

Q: How should young internet users be introduced to new ways to express themselves in a safe environment?

Not solely by a party whose self-interests fail to align with those of the young. Which would exclude Facebook, amongst other present Internet Giants: FAAMG -- Facebook, Amazon, Apple, Microsoft, Google.

The risks of indoctrination at a young age are exceedingly great. This is a role I'd like to see placed outside the control of any of the major participants to the extent possible.

Again: a partial response. There are questions not being asked by FB which should be, and much more which might also be said. I see serious limitations to this approach, and will be voicing criticisms.

But for all that, I applaud the initiative and approach, and hope that it evolves into an exceptionally necessary discussion. Facebook have out-shone the other principal participants in this space, and I truly hope they step up to the challenge.

I've alluded to prior art and works. In 2015 I suggested on a G+ thread that Google compile a bibliography or syllabus, make it required reading of all employees and contractors, and share it with the public. I'll extend that suggestion to Facebook as well.

https://plus.google.com/+YonatanZunger/posts/cKot7AKmtty



