Former Facebook staffers launch Integrity Institute (protocol.com)
163 points by paultopia 85 days ago | 175 comments

Oh hi! That’s me! (And a bunch of other folks). Happy to answer any questions.

I also really like this founders note that we wrote. It’s in our own words, etc: https://integrityinstitute.org/founders-letter

We are recruiting new members! If you work on integrity / trust and safety / antispam / content quality / etc, let’s talk.

Many PMs, engineers, and laymen knew FB was a rotten product for a long time. I also assume a lot of people at this institute didn’t take the Alex Stamos (ex short-term FB CISO) U-turn either. So…

How is this effort not virtue signaling by people who made their fortunes on the back of this generation’s cigarettes, and who are now hoping to get traction on being listened to about fixing the mess they created?

Integrity folks across the industry work on solving these problems, not creating them. It is more of a balance of whether to tackle the problems inside the organization, or outside - depending on which is more effective.

What is not in question is that we should be spending time on tackling these problems.

> Integrity folks across the industry work on solving these problems, not creating them.

Solving these problems is pretty trivially simple. Get rid of the news feed, or perhaps make it just friends, reverse chronological order. All of these "integrity teams" only exist because Facebook isn't content just making boatloads of money, it needs to make 10x boatloads of money by getting people to scroll infinitely.

People try to pretend this is some hard, difficult problem, when it's only a problem because Facebook is just as addicted to their users' mindless scrolling as their users are.

> It is more of a balance of whether to tackle the problems inside the organization, or outside - depending on which is more effective.

I think you answered your own question already.

Of course.

Regarding internal solutions: it’s impossible to overlook the significantly misaligned incentives facing integrity teams. They try to solve problems inherently created by the profit model of the company, which is also what pays their salaries. The parallels to the internal scientist teams at Philip Morris are uncanny.

As I said, this paradox, and the company kicking the can down the road, has been obvious for years externally, and also seemed obvious internally per the Slack (or w/e) leaks post-1/6.

So, for these teams, if they stayed longer than a quick in-and-out once reality trumped one’s idealism at “going where the problem is” (look, I really do get that impulse), continuing to claim the moral high ground the way this Institute does is so tone-deaf. Sorry buddy, I’m on levels.fyi as well; we all know what you got paid to (increasingly ineffectually) support this product. There have been leaks for years of PMs discussing Myanmar genocide inflammation via FB/WA. Was that not enough to leave?

This is inescapably how a lot of tech is going to judge this period/company. It went on for too long to claim ignorance otherwise.

Edit: You know what, no, it’s not at all about an external vs. internal balance; that very much misses the point. It’s likely that any meaningful change which preserves what FB (and related platforms) intrinsically are will have to come from an internally driven fix. The technical abstractions are just too much of a problem for outsiders to grok and then suggest fixes for (“Senator, we sell ads…”).

It's about understanding where moral and character capital, especially for transformational leadership, comes from, and understanding that the founders of this have none of it. People who stayed at FB and profited immensely from the experience, integrity team or not, stayed silent about it, and then start coming out with this publicly now vs. years ago (see: Alex Stamos' example), don't have that capital. And it's so distasteful to see them think that they do via efforts like this. That's the problem.

While at the company, we were busy trying to solve problems during a critical period. The criticism has been fierce for Facebook - but make no mistake, our teams made an enormous difference within the company, and the world would look very different than it does today without the work of integrity workers within the company. This continues to be true to this day.

Coming out publicly over the years has done very little. Even now, with all the attention, it is questionable what will actually change. We are much less interested in virtue signaling, taking the high ground, etc. than in working with folks who are interested in carving a path forward. We are dedicating our lives to this work to find solutions over the long term, whether it is in the spotlight or not.

Someone else in this thread commented it well - “this is the problem talking to FB engineers.”

I’m not sure how you can read that founders letter and not see a spotlight grab/virtue signal by people who contributed to and profited from the problem they’re now trying to take a leading role in solving. It’s like there is just total refusal to be seen as part of the problem. Blows my mind.

The “coming out publicly has done very little” attitude sort of says all you need to know. Pretty sure the article also said ~”now that Frances came out publicly, we can start this!” Mental gymnastics.

Correct, the Integrity Institute's cofounder:

"Frances is exposing a lot of the knobs in the machine that is a modern social media platform," Massachi said. "Now that people know that those knobs exist, we can start having a conversation about what is the science of how these work, what these knobs do and why you would want to turn them in which direction."

It seems public discussion does help, and is even a critical input to this effort. It's just better when others do it first.

We started this back in January, and have been getting our ducks in a row since then. It turns out starting a nonprofit is hard, and takes a long time.

Haha what… I have a fair bit of experience with technical mission nonprofits, and I’d disagree with that take.

Registration is quick and has been done by many a skeleton crew over a Slack DM, AWS/Azure credits abound, free GSuite is available, board charter is boilerplate if you can make a mission statement, free Slack pro tier is often an option, $10k in Google Ads are available.

This is all a few weeks of focused evenings.

Man, words escape me. Tethics indeed. Will stop engaging on this thread.

>now hoping to get traction

Not even just that, they also want us to donate to cover the costs for the good work they’re surely going to be doing

Are there any examples of sticking to your principles by standing up for socially and morally reprehensible groups? My initial impression is that this is just another group pushing left-leaning elite American values as "integrity".

Things like this: https://www.aclu.org/other/aclu-history-taking-stand-free-sp... go a long way.

I have a hypothesis. It goes like this.

Today all the major text-based social platforms work in about the same way: we type text into an empty little box that's threaded under another box. There are variations in ranking and flagging, but the basic mechanism is unchanged.

It is striking to me that, in all these years, we have explored only a tiny little corner of the design space. There are no mechanisms to lower the temperature when arguments get intense, for example. Nothing to help keep us from misconstruing comments out of context. Nothing to assist us in feeling compassion for the people we're talking to, or understanding their intent as they mean it to be understood. And so on. It's almost as though we sold millions of cars without brakes, everyone is crashing into things and hurting each other, and our response as a society is to throw up our hands and say "Welp, guess humans are just too stupid to drive cars safely."

Many people have suggested eliminating engagement as a metric and going back to purely chronological feeds. That sounds pretty reasonable given where "engagement" has gotten us so far.

But what if there were such a thing as "healthy engagement"?

The hypothesis is that healthy engagement is achievable. I don't know whether it is, but there's a huge range of design possibilities that we have yet to explore.

Do you know of anyone working on healthy engagement? Is it something you want to work on, or do you have any recommendations on starting an effort in this direction?

>Do you know of anyone working on healthy engagement?

Wikipedia is one, by not "working on" it.

>It's almost as though we sold millions of cars without brakes, everyone is crashing into things and hurting each other, and our response as a society is to throw up our hands and say "Welp, guess humans are just too stupid to drive cars safely."

No, they have brakes, it's just that half of the car owners are punching people who work at Jiffy Lube and screaming that brakers should be killed.

>But what if there were such a thing as "healthy engagement"?

"Engagement" cannot ever be healthy, because "working on" it relies on manipulation. Curiosity implies an absence of manipulation and a maximum of arbitrary connections.

You want healthy engagement? Create something well-loved that doesn't depend on or require any additional action from the appreciator for them to experience its complete impact.

Oh shit, that's hard! "Well then how about I scare them into clicking on a link that pays me money when they do so, then use an image and/or words to call them inadequate in some aspect of their lives so that they send me more money?" Tomato-tomahto?

Hey this is a really good question. And I agree with you 100%! We have only explored a tiny corner of the design space.

Since you asked for links, I think you'd like this talk I gave at Berkman a little while ago: https://cyber.harvard.edu/events/governing-social-media-city

It's about: if we think of social media as a new city, what are the alternatives to hiring tons of cops/censors? What about urban planning, bike lanes, etc?

As for healthy engagement: I think there are a few people in this space. I honestly don't know as many as I'd like. This will be a learning experience for everyone. I think New Public might be doing good work, but I'm not sure!


Hope that helps!

> There are no mechanisms to lower the temperature when arguments get intense, for example.

This is a second, third, or fourth-tier concern after stuff like white supremacists using platforms for organizing/harassment, governments using them to facilitate genocide, scammers using them to profit off of the pandemic, etc., etc.

"When arguments get intense" completely ignores the incredibly low-hanging fruit: banning networks of bad actors. (They have the data to do this, they just don't want to.)

> Do you know of anyone working on healthy engagement?

Twitter is trying to warn users about "intense" conversations. Unsurprisingly, they suck at it:

https://twitter.com/angryblacklady/status/144754672404668416... https://twitter.com/chadloder/status/1446879163307028481

> Twitter is trying to warn users about "intense" conversations.

Discourse, too. It gives a statistic that X% of the posts are yours, and asks you to consider letting others voice their opinions. It says that after you've typed your post. I'm not going to not post, then, but I admit that yes, it does remind me to consider spending less time on the platform in general (!).

HN has it, too. Deep threads hide the reply button if posts are made in quick succession.

The original comment claimed:

> [...] There are no mechanisms to lower the temperature when arguments get intense, for example.

Addressed above.

> Nothing to help keep us from misconstruing comments out of context.

I don't agree with this. Fact-checking occurs, and thumbs-up/hearts on factual content help. Also, quoting, logic, and linking sources allow one to dispute such fallacies. These tools are available.

> Nothing to assist us in feeling compassion for the people we're talking to, or understanding their intent as they mean it to be understood. [...]


Because polarized people don't want to. They want to 'win' a discussion instead of learn from it.

It's understandable that people go to work for Sauron to get paid big money. But starting a "human-orc ethics think tank" because you have relevant experience in the field is a bit rich.

We need to be able to forgive those who have done evil, if they truly repent. If they’re trying to do good now, we shouldn’t hold it against them that they worked for a monopoly spreading lies that incited violence, created online addicts, and took the pieces of silver happily. No, it only matters do good now.

I wholeheartedly agree. But, to be frank, I really don't see any amount of contrition in the Founders' Letter here, https://integrityinstitute.org/founders-letter. Primarily, I don't see anything here that says "You know what, we're sorry, because we were part of the problem, despite working on 'the inside' trying to improve things." I'm not asking for the founders to fall on a proverbial sword, but I don't see any acknowledgement of how the entire Facebook "machine", which causes all the problems they point out in their letter, is only possible because legions of really smart people choose to work there.

We can forgive them but not listen to their thought leadership on solving the post-Sauron world.

Repenting is indeed a major step on the way from evil to not evil, but I do not see any sign of repentance in their materials. Repenting means recognizing you did wrong, recognizing and admitting, sincerely and honestly, what was wrong in what you did, and then building a corrective action on top of that. I do not see any of that. I only see a bunch of platitudes, which I could find in any corporate "our values" binder: we're all for everything good and against everything bad. "Strive to be a good, honorable person" - ok, sure, who doesn't? Are there many people striving to be bad and dishonorable? That's not doing anything.

It's also a bit rich to tell people "from the start, the Institute has been funded out-of-pocket by its founders. But that can’t last forever." -- even though you were only incorporated a month ago on September 29 -- and start asking for donations before you have received approval for tax-exempt status[1].

[1] https://integrityinstitute.org/donate

Jeff and I have been working on this nonstop since January-February. Formal incorporation happened way way after the work started. This has been my full time (unpaid, volunteer, only) job since February, and I have burned down much of my savings in that time.

How do you ensure that "integrity" does not morph into "ban everything our groupthink says is wrong"? How do you ensure you're not co-opted into ideological or partisan propaganda/speech-control efforts - a trap that so many "fact checkers" have gladly fallen into?

One thing we focus on is not really looking at content at all, but instead, behavior. Mostly -- spam.

One way to see what we're doing is glorified spamfighting. Except the spam sometimes isn't fake ray-bans, but is instead doctored videos that call for lynchings in India.

Here's more on the subject. (My ideas, not the full institute): https://cyber.harvard.edu/events/governing-social-media-city

One content-agnostic approach to improving signal-to-noise is exploiting the tendency for deep chain sharing to be mostly false/misleading content. Introducing a forced copy-paste hurdle after the chain gets, say, 4 deep would be content agnostic, but would probably still improve the quality of what gets spread.
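
The idea above could be sketched like this: the platform withholds the one-tap "Share" affordance once the reshare chain passes a threshold, forcing copy-paste instead. A minimal sketch, assuming a depth threshold of 4 from the comment above; the `Post` type and `can_one_tap_share` name are illustrative, not any real platform API.

```python
from dataclasses import dataclass
from typing import Optional

MAX_FRICTIONLESS_DEPTH = 4  # assumed threshold; real systems would tune this

@dataclass
class Post:
    text: str
    parent: Optional["Post"] = None  # the post this one reshared, if any

def share_depth(post: Post) -> int:
    """Length of the reshare chain behind this post."""
    depth = 0
    while post.parent is not None:
        depth += 1
        post = post.parent
    return depth

def can_one_tap_share(post: Post) -> bool:
    """Content-agnostic: only chain depth matters, never the text."""
    return share_depth(post) < MAX_FRICTIONLESS_DEPTH
```

Note that the check never looks at `post.text` at all, which is what makes the friction content agnostic.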

> the tendency for deep chain sharing to be mostly false/misleading content

Does it rely on some specific research? I mean I'm sure there's a lot of viral stories that are false. But there's also a lot of viral stories that are true - or at least no less true than what you'd commonly see on network TV or read in major newspapers, like NYT. I am not sure why it'd be obvious that something that is shared a lot is necessarily false.

This is a great example of the kind of stuff we want to be talking about and looking at. Design changes that apply across the board.

You could imagine us swapping notes, and doing some research as a group, to see if this is indeed an idea that would work. And then spreading that information to everyone, including the platforms themselves.

Transparency goes a long way towards addressing this concern. This is one area we are focused on - asking companies to be more transparent, both in their overall metrics and samples of public content, so we can have those debates as a society rather than behind closed doors.

Transparency is not enough. If the decision is "we ban everything that the Troika we appointed finds not to its liking" - it's transparent, but not helpful and has nothing to do with "integrity". Transparency is good, but not enough.

What are the institute's thoughts, if you've formed them, on end-to-end encryption, especially as it applies to social media where the line between group and group-text blurs? I feel it's an incredibly nuanced topic that's become incredibly polarizing in recent days with some of Haugen's comments.

On the one hand, in favor of E2EE, companies can and will use the content of messages, if they have access to them, to micro-target suggested content to users, and this can lead to increased levels of misinformation being promoted to people who have engaged with misinformation. And of course there's the government surveillance angle, which is an entirely separate story!

But if you remove the signals in that content by encrypting in a way that is opaque to the platform, do you substantially reduce the ability to microtarget? Very possibly not, given the amount of graph data the social media company has anyways about group members independent from the content itself. And encryption gives the social media company the ability to wash its hands of any responsibility or awareness of content.

Assuming it were easy to technically achieve (which is a huge leap, to be fair!) do you think it better serves the definition of integrity you've adopted, that a social media platform have the majority of its content end-to-end encrypted, or not?

Hi! Thanks for the constructive question. I agree with you that it's nuanced, and also I'm sad that it's getting polarized/simplified in some venues.

I don't think we have gotten an institute stance on very much relating to E2EE. Our community advisory board (composed of integrity workers) is the moral core of the organization. So far, when we say "the institute has a stance on X", that has meant "the advisory board signs off on X, and that X represents a good faith consensus view of workers in the industry".

We don't yet have a doc that lays out why we think integrity and privacy can coexist nicely. Speaking only for myself, I think the answer lies in careful design. As an example, you could see, via research and experimentation on FB Messenger, that messages forwarded in chains of > N are empirically overwhelmingly likely to be bad faith, spammy, etc. You could then take that finding to WhatsApp, Signal, etc., and bake in changes to the UX that make it slightly more annoying to forward messages if they've been on a reshare chain of N/M. That kind of stuff.

There's also some consideration to group size -- if a group is 5000 people large on, say, Telegram, it might be encrypted, but it's no longer really private. Maybe it should be treated differently? Unclear, let's think and research about it.

I think a rough consensus we might move towards is treating messaging differently than broadcast, and also treating broadcast features inside of messaging apps differently than straight-up messaging itself.

But again, those are just some of my more idle thoughts. There are members and fellows who are better experts on this particular subject than I am.

Does that make sense? Is that helpful?

Absolutely, and thanks so much!

To the point about research and experimentation, it wouldn't be surprising to me if certain high-level people at various platforms are having confidential discussions about "are there technical means to prevent message-level research and experimentation from being possible in the first place."

And it's vital not only that the public/media recognize when there are conflicts of interest at play, but also that well-meaning employees at various platforms have the ability to see "here are the bright lines that a consensus of your peers across the industry believe shouldn't be crossed, and here are constructive talking points that you can use for internal advocacy if you are in a position to 'nudge the path' towards a sustainable way for integrity, privacy, and business/legal priorities to coexist."

It's really, really heartening to see people working through these tough questions and working towards a brighter future. You're doing incredibly necessary work.

I appreciate you. Thank you.

Not working on integrity, but really, really interested in seeing this come alive. I've had a very deep curiosity about this topic for a couple of years; allegedly Facebook's products might have influenced the elections in my home country (Brazil), which has very directly impacted the quality of life of my family still living there.

Looking forward to seeing what comes out of this, and wishing you and the team all the best luck. Thank you.

I am not working on integrity (I have a feeling that the integrity offices of many companies have brooms and buckets in them), but I do write software that serves a constituency with a very vested interest in the matter, and I wish you well.

I also have a personal code of ethics, and hold myself to a very high standard of Personal Integrity.

In my experience, talking about that in the tech community does not end well.

Ethics and Integrity do not seem to be popular topics for discussion in SV.

Corporate departments you do NOT want to be in: Innovation, Ethics, Trust… unless of course it’s corporate doublespeak and the department is doing the exact opposite of what its name implies.

Exactly. If "Innovation, Ethics, Trust" are being addressed by a department:

1. They have no profit and loss, so are going to get sidelined by the higher impact (in any short or medium term) concerns of every department that does.

2. They have no direct control over serious corporate activities, so are going to be considered a distraction at best, interference at worst, by others under their own pressures. If they are noticed at all.

It isn't an accident when universal values are siloed into departments away from the rest of the company. It means leadership doesn't want the rest of the company to waste time or focus on them, but feels the values need token recognition.


It gets even worse if these were governmental departments.

Department of Innovation: enforces acceptable methods and subjects of innovation, i.e. anti-innovation.

Department of Ethics: Reinforces the governments views by justifying them as ethical, and demands lip service to those fictional high ethics.

Department of Trust: You must trust us.

Worst one of all: Department of Truth. Here it is. Don't look elsewhere.

Thank you! I helped set up the Brazil Election War Room -- the first election war room inside of FB. It was intense! There is a special place in my heart for your country <3

This sounds like election interference by a foreign power. What business do employees in a US company have influencing the elections of a foreign country?

More or less intense than the Myanmar Genocide War Room? Was it the first genocide war room at facebook or were there ones before it?

(1) Thank you so much for doing this. Facebook has been after me for years (> 15 years in anti-spam/email abuse). I keep putting it off, but it's time for a clear "no thanks". As an outsider saying "no", is there anything I can convey to them to drive the message home?

(2) Minor - your "Join Us" link from the Founders Letter page is 404 -- https://integrityinstitute.org/join-us

Thank you!

(1) - I'm not sure. I don't think I'm an expert here. But if you've got 15 years of anti-spam experience, we'd love to have you join us as a member :-)

(2) - Thank you. Fixed!

What’s the agenda here? (Fit the description, but am very skeptical of your motives :))

I think this should answer your question: https://integrityinstitute.org/founders-letter

But also, our values: https://integrityinstitute.org/our-values

Also, succinctly -- this is a real, grassroots thing. We gathered a bunch of friends and coworkers for this big idea of "what if we had best practices and a professional association for integrity work, just like we do for cybersecurity" and then worked for 10 months to do it.

We've tried hard to put all kinds of pieces into place. We're making it a place that is both independent of companies and also safe for current employees of those companies to join.

It's also cool that we can draw on this community to give expert advice to stakeholders (policymakers, journalists, companies, academics, etc). Mad that congress doesn't understand how Instagram works, or whatever? We can explain things -- as people whose training was in looking at the total information ecosystem of a platform.

A goal is to have integrity work be at least as prestigious, valued, and essential as cybersecurity or software engineering: a field where quality matters, and if you do shoddy work you will be called out on it.

Does that help?

Thanks! The values resonate strongly (rarely practiced in industry though - power, the pursuit of glory, and partisanship can and do form a toxic combination).

A statement of values is meaningless. What organizational procedures do you have in place to prevent some faction from hijacking the organization and morphing it into an ideological echo chamber? The values on that page are as mutable as the HTML they are written in.

For example, in the past 10 to 20 years or so we've seen both the ADL and ACLU morph into institutions that would be unrecognizable to ADL or ACLU staffers from 10 to 20 years ago. No one ever imagined that they would abandon their classically liberal values and replace them with the illiberal "liberal" values they practice today.

Was the ADL ever classically liberal? I don't mean that snarkily. I just don't recall much of a shift in their values in the way that's clearly visible for the ACLU and others.

Probably not fully classically liberal like the ACLU once was but certainly far more classically liberal than they are today (which is pretty much not at all).

The ACLU was founded by Helen Keller, who was a lifelong socialist, among others.

I don't think I follow your point. Are you suggesting that the ACLU was also never classically liberal because one of the dozen-plus members of the founding committee was a socialist?

> morphing it into an ideological echo chamber

Assuming that wasn’t the point from the very beginning.

How do you define integrity in this context? (You, personally.)

This is a great question! I'm still trying to find a top-down definition instead of an "I know it when I see it" one.

To me it's probably something like this: we can think about an information ecosystem or social platform as a system. "Normal" hacking of the system happens through finding loopholes in code. (That's cybersecurity.) "Integrity-related" hacking of the system happens through finding loopholes in design and rules.

For some easy examples, that covers things like realizing you can post to 1000 groups in an hour. Or using sockpuppets to give artificial boosts to posts. The attackers aren't hacking code, but are hacking a system of rules, norms, and defaults on that system. (And often, finding the holes between where one part of the system was soldered onto the other).
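
To make the "1000 groups in an hour" example concrete, behavior-based defenses often reduce to something like a sliding-window rate check that never inspects content. This is a hypothetical sketch, not any real platform's system; the class name and the threshold of 50 are assumptions for illustration.

```python
from collections import deque

GROUP_POSTS_PER_HOUR_LIMIT = 50  # hypothetical cap on group posts per window
WINDOW_SECONDS = 3600

class GroupPostRateLimiter:
    """Flags accounts that post to too many groups inside a sliding window."""

    def __init__(self) -> None:
        # account_id -> timestamps of recent group posts
        self.events: dict[str, deque] = {}

    def record_post(self, account_id: str, timestamp: float) -> bool:
        """Record a group post; return True if the account should be flagged."""
        q = self.events.setdefault(account_id, deque())
        q.append(timestamp)
        # Drop events older than the window so only the last hour counts.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > GROUP_POSTS_PER_HOUR_LIMIT
```

The point of the sketch is that the rule looks only at behavior (how often, how fast), never at what the posts say.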

That's the technical part. Integrity also has a sort of ethical component. I think that's meaningful too.

I know I'm hammering you in another thread, but will press pause for a second.

What you're basically describing as "integrity hacking" is already around as a discipline "social-cybersecurity." If not the same thing, it's likely a very close peer discipline.

Also, coopting a word like 'integrity' vs. 'platform integrity' or 'cyberspace community integrity' or w/e is a tough call. By doing that, you end up with PR-destruction sentences like "integrity also has a sort of ethical component," which at face value is quite a read w/o the context of how "integrity" is being re-defined here.

https://sites.google.com/view/social-cybersec// https://link.springer.com/article/10.1007/s10588-020-09322-9 https://socialcybersecurity.org/

dogman, thank you for the heads up about this social-cybersecurity group. It does seem related, and I'll be sure to check them out. Recently, someone also mentioned m3aawg.org, which seems adjacent.

Personally, I do think the field has a lot to learn from cybersecurity. From my POV, integrity work has three different '90s ancestors: web forum moderation, email/search-engine antispam work, and cybersecurity mindsets like risk mitigation rather than risk elimination.

As for the PR stuff -- fair enough! We're all trying to do the best we can with the skills we have. Not every decision will be the right one. Maybe the name was one of those.

I hope you check back in with us in 6-12 months. Once you can judge us more by our work over time, I hope we will have earned your respect.

What you describe sounds like you're there to counter user fraud. It's definitely not the first thing people imagine when they hear you're there to support integrity imo. Doesn't that feel misleading to you?

We spent a good amount of space on the website laying out different types of work that are covered in the term "integrity".

I also do very strongly think there is a moral/ethical dimension to the work. That's one reason I shy away from the "Trust and Safety" label. Integrity work may have started with connotations of "structural integrity", but it also must be done carefully and ethically. It's not just engineering; it's also about doing the right thing. Hence the Hippocratic oath we all take. [1]

[1] https://integrityinstitute.org/our-values

Have you spoken or do you plan to speak out about the most normalized form of state-sponsored propaganda in the U.S., namely hasbara [1]? For example, Hasbara Fellowships and Israel on Campus Coalition [2]

[1] https://en.wikipedia.org/wiki/Hasbara

[2] https://en.wikipedia.org/wiki/Israel_on_Campus_Coalition#Mis...

My understanding, from speaking with someone at Facebook who has worked in this area, is that they are primarily empowered to do individual fixes. I.e. they do root cause analysis on why content X was correctly/incorrectly handled, update processes, and repeat.

The problem instead seems to be a systemic one. i.e. What kinds of posts does the platform incentivise and promote as a whole? However, changing the system would require significant product updates and harm the bottom line, as it would likely result in lower engagement.

We're also in this weird position where politicians seem to want to hold platforms accountable for content which is legal, but objectionable. This is also exacerbated by employee activists wanting to do the same.

>The problem instead seems to be a systemic one. i.e. What kinds of posts does the platform incentivise and promote as a whole? However, changing the system would require significant product updates and harm the bottom line, as it would likely result in lower engagement.

It took me an hour, but I made FB enjoyable for me again by making it a feed that is just my friends in chronological order. As a bonus, there is no good way to game the system that decides what to show me. I just bookmark this:

https://facebook.com/?sk=h_chr
Preventing all the groups and pages I had interacted with before from creeping into that feed was the time consuming but worthwhile part. I had to go into my groups settings & unfollow them all individually though:


same for pages:


> https://facebook.com/?sk=h_chr

That's just the "Most recent" option from the left hand menu (or the burger menu) on the Facebook website?


> The problem instead seems to be a systemic one. i.e. What kinds of posts does the platform incentivise and promote as a whole?

These are the big questions we want to be tackling.

> Allen left Facebook soon after that for a data scientist job at the Democratic National Committee.

So the guy on the election integrity team then takes a job with the DNC. This is called a revolving door.

And now, curiously, he is part of the effort to push for more censorship of Facebook. This concerted effort to cow Facebook into censoring their platform more is coming from the RNC, DNC, Congress, and the mainstream media.

This founders' letter from their website is dripping with hubris:


They talk about keeping the Internet "safe" and "good" but not free.

I had the same take away from that founders letter as you did - full of arrogance, as well as very chilling and creepy in my opinion.

Somewhere in this thread one of them is commenting, and made a point to link to that same letter saying how proud he was of it.

Just a complete disconnect.

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.” - C. S. Lewis

Forget the military industrial complex, welcome to the social media industrial complex.

In our current descent into technofascism, I think every industry is going to be gamed.

Not surprising. I know a few people who have family members who are ex US intelligence apparatus or FBI turned Facebook investigators. They investigate the bad bad stuff like CP, human trafficking, and animal torture. I’m sure they are specifically hired for connections.

The current conversation isn't about the real bad stuff (which I thankfully have never seen on FB), but rather about how much control the government should have over Facebook as a platform, and the direction seems to be universally in favor of more restriction of speech, not less.

I think it’s still very similar. Misinformation is still bad stuff but not the worst of it. If a foreign political group was buying billboard or newspaper ads there would be some kind of legal accountability. Instead of ads like this we have troll farms targeting Black and Christian Americans with misinformation. Maybe it’s time there was some kind of legal framework about posts and foreign influence but I don’t pretend to be any sort of policy maker.

To extend this somewhat, not only does paid and quasi-paid political speech (paying operatives/algorithms rather than a newspaper) not face the same accountability re content on Facebook (for example), Facebook does not face the same accountability for providing equal access to its platform for all political points of view.

Do you really disagree that there's way, way too much bad stuff on Facebook and it's had a hugely harmful impact on our society? Or, do you just think we shouldn't do anything about it?

I don't think the comment was about "too much bad stuff on facebook" at all, rather it was about how an actual facebook employee, who now works for the dnc, is forming a group that will advocate for policy that will affect facebook. When this happens in the banking industry, we call it regulatory capture. When it happens in the insurance/healthcare industry, we call it regulatory capture. When a board member of an oil company is appointed head of the epa, we call it regulatory capture. Why is this case any different?

(here's another way of looking at it: if the guy was hired as a "data scientist" by the republicans, wouldn't you question his motives?)

What are you talking about, he went to the DNC. No government relation and no regulatory power.

Also I work in politics, and people move between the private, political, and public sector all the time. The GOP hires data scientists from tech whenever they can, because they're sought after professionals.

My point was if former social media staffers can't work against this stuff, what should they do.

Yes, I disagree with that. Even if I agreed, I'd still think we shouldn't do anything about it.

How much bad stuff is "too much?" Which stuff is "bad"? Who gets to decide? Why should I trust them? Who and what else is influencing them? What happens when the people making those decisions change? Why should I cede the authority to decide which "stuff" I get to see to anyone other than myself?

There are no good answers to any of these questions, and there never will be.

I don't think we need to get that philosophical about it. Can't use bots, can't pay people to astroturf. Ez.


That’s exactly what this and all the efforts to control Facebook are - a power grab.

Slap some “feel good” language around protecting democracy (ha!) and they can nicely make sure it’s their message that’s approved (of course! They are protecting democracy) and not their opponents (they are harming democracy!).

This is either a Trojan horse or an exercise in self-aggrandizing pomposity.

What is this about “not breaking your NDAs” with Facebook? If you had integrity the NDAs wouldn’t matter, because no moral framework is constrained by respecting an arbitrary document drafted by a megacorporation to keep you silent. When your first move is to assuage any fears from your former employer that you might do something as drastic as break your NDA, it’s obvious your priorities are nowhere near where you claim them to be. So the question becomes, is this intentional dishonesty, or lack of self-awareness? I suspect a bit of both.

Don’t tell me what integrity means.

Edit: Apparently I read this wrong, and they haven’t actually quit their jobs? So this is just a cartel of self-righteous employees at tech corps with plans to maximize efficiency of collaborative deplatforming? Lol!!!

The line about the one founder taking a job with the DNC was enough for me, but I'm glad I read until this part because I think it's even more illuminating:

>"Frances is exposing a lot of the knobs in the machine that is a modern social media platform," Massachi said. "Now that people know that those knobs exist, we can start having a conversation about what is the science of how these work, what these knobs do and why you would want to turn them in which direction."

Which I translated to:

>Now that someone else has broken their NDA, we can use our jobs that we are not quitting to make more money.

I don't know about Jeff, but I know Sahar quit working at Facebook over a year ago to work on this instead.

Yep! We both quit FB in 2019. This has been our (full time, volunteer, spending our savings) job since ~Jan/Feb of this year.

It's taken a long time to gather people, build trust, figure out our strategy, find consensus, make our first reports, etc.

> If you had integrity the NDAs wouldn’t matter, because no moral framework is constrained by respecting an arbitrary document drafted by a megacorporation to keep you silent.

The desire to not spend the next several years mired in lawsuits brought by a company that can hire an army of lawyers doesn't need to be explained by a moral framework. Wouldn't you rather spend that time being an activist instead of sitting in the courtroom, wondering if you're going to survive the financial fallout?

It has also been suggested that the recent whistleblower is a tactic to control and set the terms of the concern about Facebook: ultimately not challenging its reach as a monopoly, but allowing for certain censorship actions that Facebook would be more than happy to carry out.

You really think Frances is a deep cover op by Facebook? Really? Wow.

Facebook should focus on giving tools to individuals and communities to communicate with each other and speak freely with one another. Communities and individuals should determine their rules, not Facebook.

Platform wide content moderation is inevitable (illegal content exists), but mass censorship is bad. It will come for you, if it hasn't already.

you mean like how r/physical_removal, r/jailbait, r/creepshots, etc etc were allowed to proliferate on reddit

>We respect your privacy. We will not share your data with anyone for marketing purposes.

Yet the homepage has Google analytics and Squarespace cross site trackers. Not a great first impression.
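For what it's worth, claims like this are easy to check yourself. A minimal sketch (the hostname list below is illustrative, not exhaustive, and which hints actually appear on their homepage may change over time):

```python
# Minimal sketch: scan a page's HTML for well-known tracker hostnames.
# The hostname list is illustrative, not exhaustive.
from urllib.request import urlopen

TRACKER_HINTS = [
    "googletagmanager.com",    # Google Tag Manager / gtag.js
    "google-analytics.com",    # Google Analytics
    "squarespace.com",         # Squarespace-hosted assets / telemetry
]

def fetch_html(url: str) -> str:
    """Fetch a page; only needed for a live check."""
    return urlopen(url, timeout=10).read().decode("utf-8", errors="replace")

def find_trackers(html: str) -> list[str]:
    """Return the tracker hostnames that appear in the HTML."""
    return [h for h in TRACKER_HINTS if h in html]
```

A live check would be `find_trackers(fetch_html("https://integrityinstitute.org"))`; it simply lists whichever of the hinted hostnames appear in the served HTML.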

Not everyone equates privacy maximalism with integrity. In fact, in the eyes of many people they are unrelated topics.

They explicitly say “marketing purposes”… doesn’t mean they won’t share for other purposes.

That's a common trick - "overly-specific lie". "I never stole your wallet from your purse" (I stole it while it was lying on the table), "I never told your wife any rumors about you" (I told them to her best friend to whom she talks daily), "we never sell your information to the highest bidder" (second highest is ok though), etc.

This has a real Silicon Valley (show) tethics vibe to it.

I came wading through the comments specifically for this, totally agree.

You're welcome. But it is uncanny.


Once again I'm skeptical. From where I stand the entire promo reads like FB demanding regulatory capture. Of course it is done in an underhanded way, "Oh, we've been so naughty! We need the firm hand of regulation"

Shades of Castle Anthrax from Monty Python's "Holy Grail".

Maybe others here would like to argue the minutiae, but I see that as largely irrelevant.

That sounds a bit like "former Enron staffers launch Ethical Business Institute". I mean I'm sure some nice people worked for Enron too, but...

Also, I can't point at the exact place, but it definitely has a whiff of "how we get the DNC/government to control all major information platforms without them explicitly controlling all major information platforms". I mean, I have nothing against banning troll farms and such. But when I read about platforms' "integrity teams" banning people that are definitely not troll farms, but somebody who dares to say something contradicting the Currently Approved Truth (TM) - and it's not one or two cases, but dozens going into hundreds - I start doubting "integrity" has anything to do with it.

This is how you create a regime of regulatory capture.


>banning people that are definitely not troll farms but somebody who dares to say something contradicting the Currently Approved Truth (TM)

Hundreds? I can't think of a single one, honestly.

If you can't think of one person who was banned by Facebook and isn't a troll farm, you're either very ignorant in the matter - which is ok, nobody knows everything, and it may not be a subject that is of interest to everybody - or pretending to be. I am not sure if there's hundreds - I personally know cases definitely well into two figures, but probably less than a hundred. But I don't do any systematic research on the subject, it's just what I hear from my acquaintances (none of whom work for any troll farms) or read in the press (usually when somebody high-profile enough gets banned). So I estimate there are many more cases than I personally know. Which would take it easily into hundreds.

You made the claim. Put up or shut up.

I think you are confused, the site you're looking for is at twitter.com.

If a bunch of Enron staffers left Enron due to frustration with the company and built an Ethical Business Institute to improve industry practices, I would be interested to see what they had to say.

In any industry, it's a difficult balance between expertise and bias - but we have tackled these problems and understand the nuances in detail.

We also have folks from a mix of political backgrounds.

What do you mean by mix of political backgrounds? Do you have people that are registered Republicans, are any of them in leadership positions? Or registered libertarians for that matter?

Anyone that has similar involvement in politics like working directly for a major political party?

EDIT: Checked out the site and darn, I'll give them credit they do have a Republican and someone who worked with the RNC, so that is something. So I'll give you that one, I'm still suspicious but can't knock you for being completely lacking in diversity.

I mean, I still don't love the idea, and I do think it will be used to ensure that "misinformation" is stamped out, but I'll give credit where credit is due.

If you mean Katie Harbath, it never says she's a Republican - only that she worked for RNC in "senior strategic digital roles", whatever that means. She also worked as "a public policy director" responsible for elections at Facebook. And given what we know about how Facebook management reacted to the elections of 2016 and what measures they took in 2020 to prevent it from happening again, I am not sure how could one really be a Republican and work for Facebook as a policy director in charge of all that. Kinda hard for me to reconcile it.

And, speaking of people who once worked for the Republicans, a certain Hillary Rodham was once the head of the Young Republicans Club...

Short answer is yes to the first - check the website for more info.

> from a mix of political backgrounds.

What kind of mix? There are different mixes. A mix of Socialist, Communist, Antifa, Democratic Socialist, Democrat and Left Anarchist certainly qualify as "a mix of political backgrounds", but doesn't exactly represent a broad spectrum of opinions on many important questions.


Nebulous platitudes that tell us nothing, stake out no hard positions, and leave things open to pull a complete 180 in the future.

I don't know what tegridy is, but that is some good shit!

Anyone who was actually willing to join facebook is by definition unqualified to work on an integrity project.

I'm not so eager to jump to that kind of conclusion, because it's too facile.

You might personally decide that anyone who worked for FB has no integrity, but I don't think it makes for a good discussion on HN.

What may be interesting to discuss is if one can regain the integrity that was perceived to be lost in the first place? I have done things I am not proud of. Sometimes it was due to ignorance, greed, selfishness or indifference. For those that condemn Facebook employees as being unethical, is there a path forward?

Regarding Facebook, I would like all employees to quit. I will also hire former Facebook employees. I know we are capable of remorse and can change for the better.

Even employee #1? Intern can be redeemed, but can a lifer? Can the person that invented the like button?

If you were to replace 'FB employee' with 'criminal', the substance of this post and the parent wouldn't really change.

So, as with GP: you're welcome to think FB employees are beyond redemption, and the parent is welcome to see them as ex-convicts, but that's not a good quality discussion for HN.

It's just doubling down on facile.

Now, if your own place of work took a large turn towards the unethical, would you live up to these expectations or call yourself an exception to the rule because you have reasons?

I'm not sure I would live up to them, I think it would be difficult for many people to do that. I might hope to try and improve the situation from the inside before bailing out, provided I had enough internal clout. I might do a lot of things to make me feel like I did my best. In some cases it might not even be clear that what I'm facilitating with my employment is actually wrong.

> you're welcome to think FB employees are beyond redemption

You are welcome to think that is what I think, but it is just doubling down on facile.

The societal context and literature on redemption or salvation is typically religious. We can also wade into the crime vs morality vs things that should be crimes against humanity but are not because there are shareholders.

Christians even have a handy loophole, as usual. There is Heaven for the good, there is Hell for the bad. Then there is Purgatory, for souls with unforgiven sins, where sins are punished and where a person's soul undergoes purification before it can go to Heaven.

We also have the added benefit of remuneration. People are paid for their service, and that number is not as arbitrary as it may seem. There is a supply/demand factor, skills, location, danger.

> Now, if your own place of work took a large turn towards the unethical

large turn that I was enacting. I would refuse, 100%. Have done, will do. Someone somewhere in a different department, nothing to do with me, who just happens to be representing the same organisation? We are into the grey here. Is it a rogue actor or the overriding philosophy of my organisation?

> I might do a lot of things to make me feel like I did my best

But we all know this is just the paycheque and the internal moral compass having a debate about what our price is. Everyone has a price.

So is it the paycheque and internal moral compass, or is it the religious aspect, or is it just Christianity and a good/evil dichotomy with Christians having a loophole? Is it money? Or is it a rogue actor, or an overriding philosophy?

In all of that, you would refuse 100% to do something.

I honestly can't tell if you're agreeing with me when you say everyone has a price, or you're trying to dispute it.

There are two matters posed originally: redemption and your own actions. Redemption is tied up in all of the above.

When it comes to one's own actions, everyone has a price. Unless you are at the very top of Facebook, there is nobody paying that price.

When it comes to redemption, you can go to heaven, hell or purgatory, and maybe even try to buy your way out of it.

Yes, if you paid me a billion I would happily incite genocide halfway around the world; it would make me beyond redemption, but I would have a billion dollars, so maybe I could buy my way out of it. But that isn't happening. What is on the table is standard salaries, the same as you can get elsewhere in the same profession, potentially even lower.

> Now, if your own place of work took a large turn towards the unethical, would you live up to these expectations or call yourself an exception to the rule because you have reasons?

I've quit better jobs for less, to be honest.

> Allen left Facebook soon after that for a data scientist job at the Democratic National Committee.

So...a propaganda arm for the Democratic Party?

A few years back, when groups carrying Orwellian names like these[1] started appearing within these massive tech companies, my first thought was: this is a joke, right? But even then it was clear that it was anything but.

Things start making sense if you start looking at these as religious movements and the people initially attracted to it and forming the first inner circles as vicars/priests/imams. You have a Faith. Anyone who rejects the Faith is an infidel. Anyone who questions the tenets of the Faith is a blasphemer. Anyone who leaves it is an apostate. Once the group gains sufficient power, it is in a position to appoint political officers and, through the ability to enforce ideological purity, also determine who runs the organization.

You don't have to refer to fiction to point out parallels -- there are so many real life examples from Communist China to Nazi Germany to Soviet Russia -- but the Night Watch and Ministry of Peace in Babylon 5 is what comes to mind when I look at all the discussion around "integrity" and "misinformation." The greatest power a group can have is the ability to label all opposing views "misinformation" and its opponents "subversives."

[1] Exhibit One: Trust and Safety Council @ Twitter

I agree there is a religious aspect. Humans seem wired for it, but some go out of their way not to see that it's almost exactly what they're doing. Go spend some time on Reddit's /r/atheism; some of them are so fanatical about proving they aren't religious that it reads like evangelical atheism.

Long before the Internet, you could go through an airport and be unable to avoid the Moonies & the followers of Lyndon LaRouche. You could find the book "None Dare Call It Treason" and lots of people did. "The Daily Worker" (newspaper of the US Communist Party) was readily available. Any of those groups could take out ads in the newspaper, or hold demonstrations (like the Nazi Party in Skokie, IL). There were UHF stations on TV where they held forth.

Yet democracy did not collapse. Why? Because humans have the ability to weigh evidence and discard the nonsense. They have agency. They can decide for themselves what to believe. They don't need you to control their information flow.

"Oh, but now it's different!" the "integrity" people claim. "Now we have to control their conditioning!" No, it's actually not, and no you don't. Now the public has more exposure to views that you don't like. That doesn't mean they swallow them all -- that's just your nightmare of how those other people act.

I might obviously be wrong but I do think it's different. People didn't spend a good chunk of their day at newspaper stands waiting for new content. No one got "The Daily Worker" recommended to them because of their person (age/sex/income/location/communication/...). I feel like that fact often gets ignored. The problem isn't that "problematic" content exists but that FB is actively shoving it down people's throats in a sinisterly targeted way to increase their bottom line through increased engagement.

It's different only because people new on the scene aren't familiar with what happened before they were born. Or even before they were on the scene.

Nothing is being "shoved down their throats" unless they choose to eat it.

>The problem isn't that "problematic" content exists but...

Then you'll be disturbed to learn that members of the team are literally government operatives lobbying for more censorship on Facebook: https://news.ycombinator.com/item?id=29001625

> "Oh, but now it's different!" the "integrity" people claim

Because it is. Our modern legal constructs of free speech came about in a semi-literate, printing-press pamphleteering age. It was revised in the era of broadcast television. "It was like this before, and so it should be forever after" is not an argument.

We're at a point where we have to talk in terms of first principles. Not ideologies or teams. That's tough, because we haven't done it properly--in America--in at least a generation.

Actually, "It was like this before" IS an argument. Unless you can demonstrate that allowing unpopular speech was always bad, you lose. The world did not begin with you.

"semi-literate"?? Literacy rates in the US were always very high.

And no, we are not "at a point where we have to talk in terms of first principles." Because we have a Constitution to do that for us, is has not been abolished, and it is not going to be.

'We' have been talking in first principles about this and other constitutional touchstones for generations. 'We' don't agree with you in sufficient numbers to allow you alter our concept of free speech.

I agree with your point completely. The 'misinformation' is unimportant. What is important is restricting access. You correctly note that, where legal, any of those groups were allowed to hand out nonsense printed on cheap pulp. Today? Not so much.

Democracy has absolutely collapsed. Our days are governed for eight hours by autocratic bosses and for eight hours by autocratic "products" and "services." We have no say about how our work is conducted, nor about what poisons hit the shelves.

It's astonishing to me that anyone thinks we have "democracy" under Capitalism, where plutocracy masquerades as meritocracy – which isn't democracy, anyway!

> They can decide for themselves what to believe.

In a vacuum, perhaps, but not under the inescapable deluge of marketing for not just commercial and classically "religious" institutions but the "patriotism" of American Civil Religion. Our consent to all the above — to the taxes we pay, and to the work that we do — is all manufactured by this propaganda. Our agency is tightly, tightly constrained by plutocratic machinations. (To say nothing of the constraints applied especially to marginalized identities, some of whom are living in a post-apocalyptic hell-scape and surviving literal genocide).

Democracy has long since collapsed. Count yourself lucky not to have noticed.

Okey, dokey, Noam. Have fun when you move to Caracas.

I don't share your idiosyncratic connotations enough to understand how this is an insult.

The fact that over 40% of Americans are still not fully vaccinated and thousands of people per day are still dying needlessly proves that people actually aren't good at filtering out bullshit.

.. and therefore they need their information filtered for them by their betters, i.e. techies?

It proves that after people are repeatedly lied to - remember "don't buy masks"? Remember "go out and celebrate!"? Remember "restricting international travel is racist"? Remember "two weeks to flatten the curve"? that's just covid crap, there was 10x that on other topics, I'd just nominate "fiery but mostly peaceful" and let it rest at that. After all that overwhelming torrent of lies people don't believe a word that is coming out of our ruler's mouths anymore. And I hardly can blame them. Every gatekeeping institution that was supposed to take care of them has failed, has failed miserably and has failed willingly and eagerly. Of course people are looking for information in random weird places - because non-weird places they used to trust lied to them so much they finally gave up on them.

Very good points. Rogan said Google is hiding information (which they are).

Once you lose your trust, it's difficult or impossible to get it back. "Let's lie to them, but do it better & more comprehensively" is what the censors seem to be saying.

That's the worst aspect of it. When the elites started lying and people started turning away from them, the conclusion that they made - and still are making - is not that they have to stop lying and try to rebuild trust, but that they need to lie harder, more creative and suppress any alternative information sources harder, so there wouldn't be any way to contradict them. They are not seeking to re-establish the trust, they are seeking the situation where your trust doesn't matter anymore because you don't have any choice.

The whole PR discourse around these issues mentioned in the article is broken.

> Namely, three years after the 2016 election, troll farms in Kosovo and Macedonia were continuing to operate vast networks of Facebook pages filled with mostly plagiarized content targeting Black Americans and Christian Americans on Facebook.

I have yet to see a mainstream press article mention that people in other countries are not nefarious others in hoodies and sunglasses typing away at green screened terminals for Putin. The only reason people in Kosovo and Macedonia are making Facebook pages aimed at American politics is because some American PAC paid them to do so. If it were cheaper to hire people in Detroit or Chicago to do it, they'd be in Detroit or Chicago instead.

> Combined, the troll farms' pages reached 140 million Facebook users a month, dwarfing the reach of even Walmart's Facebook presence.

Do people go to Walmart's Facebook page? I can't think of any reason to go to a retailer's Facebook page, personally.

> "But if you just want to write python scripts that scrape social media and anonymously regurgitate content into communities while siphoning off some monetary or influence reward for yourself… well you can fuck right off."

Yet no one had a problem with Fark and Digg.

> Allen left Facebook soon after that for a data scientist job at the Democratic National Committee.

Of course he did, this is the real crux of the whole thing. He's fishing for donors and looking to jumpstart his political consulting career, so that he can help the DNC dupe people into voting for ballot referendums that make Uber exempt from labor laws, instead of helping Facebook ad buyers dupe people into buying t-shirts with unlicensed logos.

All of this Sturm und Drang aside, the simpler explanation for how to "solve these problems" is to not have all-encompassing social media platforms. The internet was largely fine when teenage boys who call each other slurs were limited to gaming forums and anonymous chats that grandmas and moms would never find. There's no reason outside of the monetary self-interest of those associated with Facebook and Twitter for these groups of people to be on the same app together at the same time.

The Walmart thing really made me giggle. I mean, who even goes there? When I was on FB, I never visited Walmart's page even once, over the years. What kind of titan of social media do they think Walmart is? What kind of serious discussion about social media can we have with people who think Walmart's page is a measuring stick of popularity?

>The only reason people in Kosovo and Macedonia are making Facebook pages aimed at American politics is because some American PAC paid them to do so. If it were cheaper to hire people in Detroit or Chicago to do it, they'd be in Detroit or Chicago instead.

The pages weren't even being run for a political purpose; it was all about the AdSense dollars. Pro-Trump stories just pulled the maximum amount of traffic from Facebook.

>Trump groups seemed to have hundreds of thousands more members than Clinton groups, which made it simpler to propel an article into virality. (For a week in July, he experimented with fake news extolling Bernie Sanders. “Bernie Sanders supporters are among the smartest people I’ve seen,” he says. “They don’t believe anything. The post must have proof for them to believe it.”)


Right above in the same paragraph in that article:

> After seeing the BMW, Boris decided to start some websites of his own. He already knew there was money to be made off the internet; for a while, when he was 17, he’d been one of the many peons around the world laboring online for MicroWorkers.com, earning something like a tenth of a cent for liking a YouTube video or leaving a comment.

I mean, this is the whole explanation for YouTube as a platform, isn't it? You can't see who upvotes, so anyone can simply buy upvotes. It also explains why Google will play an endless string of YouTube videos on my sleeping laptop despite me being asleep as well, and why Netflix auto-plays movies you scroll past.

The whole of the major internet sites (except Amazon, simply because it's a store) seems to be mainly concerned with rigging menial stats.

What is the point? Just let the company burn. All its smart engineers should just leave. The world would be so much better without FB.

Can anyone else who is using Firefox read their main page? All I get is a white screen even after refreshing.

Trying to make something an 'institute' corrupts its integrity from the get-go.

A reminder that Facebook's VP of integrity (Guy Rosen)[1] co-founded the spyware company, Onavo, that Facebook acquired. Onavo made a VPN that collected user activity and delivered it to Facebook.[2]

[1] First temporary (https://gizmodo.com/facebook-exec-gets-new-title-as-vp-of-in...), then long-term (https://www.theverge.com/2021/10/17/22731214/facebook-disput...)

[2] https://en.wikipedia.org/wiki/Onavo

Facebook Integrity team is about user's integrity of not abusing Facebook services. It doesn't manage Facebook's integrity towards its users.

There are overlaps (for example, some people may abuse the system by tricking people into liking a piece of content to promote misinformation). But the focus is on protecting Facebook's services, not the user. Certainly not the user's privacy in the realm of how Facebook uses it (how other actors use it may fall into the "abuse" category).

Related note: the famous "we value our users" means "we increase the lifetime value of our users" and that's not a snarky speculation.

They apparently value users at around $1,500/year.

Do you have a citation here? I thought it was on the order of $30-$50

You’re so right. Don’t know where I got the other figure.

I see this mentioned on multiple threads.

What is the accusation here?

(FB employee, opinions are my own)

I think most would consider facebook’s usage of onavo pretty gross from an integrity perspective


The characterization as Spyware seems pretty accurate.

You are correct there is a balance between offering free services in exchange for data collection, and likely most people in tech misinterpret how the general public views this balance, but at the same time it’s reasonable to assume that the general public isn’t fully aware of the extent of these data collection practices.

There is a reason Facebook is constantly mired in controversy- especially compared to their social media peers.

The person created spyware for profit, something that clearly lacks integrity.

The fact that this is even a question shows why no one who worked at Facebook should be in charge of any integrity project.

So it’s a linguistic issue. You’re conflating integrity as “do people abuse our system” with “does person A follow the same morals as me”.

Specifically, looks like your morals are that products that are free in exchange for anonymously analyzing your data are immoral. (Remember: Onavo gave free VPN to users)

It’s a reasonable position to take, but it’s unclear that it’s necessarily true, or that everyone who disagrees with you has no integrity; moreover, it definitely doesn’t imply they can’t achieve results.

There’s plenty of criticism to give on Facebook’s integrity efforts. I just don’t think yours is helping the discussion.

And as an example, I think the linked article is a great idea and hope the founders succeed.

This is part of the problem with talking with Facebook engineers.

Most of society is against spyware. It's not a controversial topic: spyware is pretty frowned upon. Spying on people does, to most people, show a lack of integrity. So do many of the other actions Facebook has taken, which are now coming out in these leaks.

If you bring this up with Facebook engineers it turns into a "linguistic" issue. You're told that integrity only applies in the computer science sense of the word, and are then told by the engineer what you "actually" think.

All of this is condescending and distracting from the main point- which seems to be the point.

I’m sorry if my tone is not coming across as I intended.

I think there’s plenty of arguments to be made against Facebook’s integrity effort, and plenty of that work happened on Guy Rosen’s shift.

But your feedback is not nuanced and seems to be avoiding a lot of the context on purpose. It seems to be about getting people enraged through an implied accusation in your first comment.

In reality there’s some things you should consider:

- Onavo started as a “compress your data to save you money app” (I used it back then for a while)

- Seems like they found that selling market research data from aggregated users was a more viable business than getting customers to pay (way before FB purchased them)

- customers willingly installed the app, they got value from it, and not one customer actually got hurt (correct me if I’m wrong)

- Onavo was popular in countries with censored internet as a free workaround

- selling or using aggregated market research data is not illegal. It’s practiced by all of FB’s competition at the time (Apple and Google have control of their platforms and I’d guess use this data). Other equivalents include Uber using credit card data to see what people spend on transportation and restaurants.

I’m sorry if you find any of this condescending, this is not my intention. I just wanted to clarify what’s your issue with Guy is. That is all.

I'm sorry but you keep trying to act as if Onavo was a high integrity operation but it clearly wasn't. I'm just quoting from wikipedia here-

* In October 2013, Onavo was acquired by Facebook, which used Onavo's analytics platform to monitor competitors. This influenced Facebook to make various business decisions, including its 2014 acquisition of WhatsApp.

* The Australian Competition and Consumer Commission (ACCC) initiated legal proceedings against Facebook on December 16, 2020, alleging that Facebook engaged in "false, misleading or deceptive conduct" by using personal data collected from Onavo "for its own commercial purposes" contrary to Onavo's privacy-oriented marketing.

* In August 2018, Facebook pulled Onavo Protect from the iOS App Store due to violations of Apple's policy forbidding apps from collecting data on the usage of other apps. (Note: this one even meets your definition of integrity above)

* This led to denouncements of the app by media outlets, who classified Onavo as spyware because it is used by Facebook to monetize usage habits within a privacy-focused environment, and because the app listing did not contain a prominent disclosure of Facebook's ownership.

* On January 29, 2019, TechCrunch published a report detailing "Project Atlas"—an internal market research program employed by Facebook since 2016. It invited users between the ages of 13 and 35 to install the Facebook Research app—allegedly a rebranded version of Onavo Protect—on their device, to collect data on their app usage, web browsing history, web search history, location history, personal messages, photos, videos, emails, and Amazon order history.

So at every step of the way, Facebook and Onavo lied to their users, violated the terms of service of the platforms they were on, and spied on people at an unprecedented level. This system eventually resulted in congressional hearings.

No reasonable person thinks that these are the actions of people with integrity.

Here's an easy test: was Onavo (or, for that matter, is Facebook) upfront with users about the data they were collecting and how they were using it? Or did they hide it behind legalese in the terms of service?

I don’t think even Apple passes this test though. (And I do consider Apple to much better privacy wise than other big tech)

We're not talking about Apple, though. You're presenting the context of Onavo's actions as though people weighed the pros and cons and made an informed decision about whether to use the app. I'm adding another bit of context that suggests people did not realize what Onavo was doing behind the scenes, and that Onavo concealed their behavior because otherwise people wouldn't use it.

that wasn't the question asked.

I'll give you the benefit of the doubt, and assume you're genuinely trying to describe Onavo's value exchange from your perspective.

But it demonstrates the dissonance between a growing proportion of the public and fb. For example:

1. Facebook's use of "Integrity" is duplicitous. I'd wager strongly that the general public would assume the word to have its common-language meaning: that Facebook is operating 'with integrity'. The fact that OP's comment is necessary evidences that the meaning is not what would be expected. To your earlier question, that is the "accusation".

2. "looks like your morals are that products that are free in exchange for anonymously analyzing your data are immoral". This is also duplicitous. It is increasingly clear that the cost of Facebook's products is very much more than "anonymously analyzing your data". Impacting teenage mental health[0] or polarizing political opinion[1] as just two examples.

I suggest you re-read @tedivm's post:

> Most of society is against spyware. It's not a controversial topic: spyware is pretty frowned upon. Spying on people does, to most people, show a lack of integrity. So do many of the other actions Facebook has taken, which are now coming out in these leaks.

Your comments about Onavo might be factually correct, but you're missing the point. The reality of the surveillance economy is slowly seeping into societal consciousness, and it's not landing well. That Google/Apple/whomever also engage in it doesn't legitimise the behaviour.

[0] https://www.theguardian.com/technology/2021/sep/14/facebook-...

[1] https://www.wired.com/story/facebook-vortex-political-polari...

IIRC, FB “cross-sold” the Onavo app through the main app without any mention that it was not a third-party app, and without mentioning that data was collected. That’s problematic to most people.

Multiple agencies in multiple countries have had the same concerns: "The ACCC alleges that, between 1 February 2016 to October 2017, Facebook and its subsidiaries Facebook Israel Ltd and Onavo, Inc. misled Australian consumers by representing that the Onavo Protect app would keep users' personal activity data private, protected and secret, and that the data would not be used for any purpose other than providing Onavo Protect's products," the ACCC said on Wednesday.

I’m sure you think Guy is an upstanding individual, but then it is difficult to get a man to understand something, when his salary depends on his not understanding it.

> Most of society is against spyware

Part of this is because "spyware" connotes "malware". To steelman the other side: would people willingly make the trade-off* if they knew how the data was used, in exchange for this free service? (AFAICT, the data was used in aggregate, to inform competitive intelligence - not for ad targeting or doxxing or parallel construction.)

* (To be clear, there are consent issues that have been covered well in this thread.)

I looked up the poll results after Snowden's revelations because I was curious. Thought others might find the data interesting. (This is not exhaustive - I literally just did a quick search.)

In 2015, "A majority of Americans (54%) disapprove of the U.S. government’s collection of telephone and internet data as part of anti-terrorism efforts, while 42% approve of the program. Democrats are divided on the program, while Republicans and independents are more likely to disapprove than approve." [0]

"Also in 2015, 64% of Americans who had heard of Snowden viewed him unfavorably" [1].

[0] https://www.usnews.com/news/articles/2015/04/21/edward-snowd...

[1] https://www.pewresearch.org/fact-tank/2015/05/29/what-americ...

"So it’s a linguistic issue."

Only in the same way that you can dismiss misaligned values as a 'linguistic issue'.

> Remember: Onavo gave free VPN to users

IIRC, it wasn't just a free VPN, it also did extra compression of data like images to speed up site loading on mobile. This eventually stopped working when https became more popular.

If there's any linguistic problem here, it's that the head of integrity's role is not to promote integrity, but rather to manage user fraud, from what I can see.

And here we have an actual real-life example of when that Upton Sinclair quote can be used in a non-snarky way. Because, yeah, it's just a semantical argument.

Were 'anonymously analyzing your data' either true, or the complete extent of the exchange, then you might have a point

FB branded Onavo as a free VPN to protect your privacy. Instead, it routed all of your traffic through FB-owned machines so that they could profile your connections and get an idea of how much you were using other apps.

This here folks is why hiring out of FB is dangerous.

They've accepted the Zen of Zuck as being the way and are unable to see any moral or ethical ambiguity.

This comment demonstrates FB's lack of self-awareness.

From the outside, people see poor judgment and lack of basic moral fiber.

From the inside, people see their own behaviour and decision-making as normal.

Thus, we get a question like this: "I do not understand. What is the accusation?"


One can imagine their thought process is something like: "If it is not illegal, then what is the problem?" This is not how normal people think.

It is like an inability to grasp the moral of the Trojan Horse. Facebook's response when accused of deceptive practices was "We gave users a beautiful wooden horse. Users knew exactly what they were getting. We will defend ourselves in court."

I can't think of a more Orwellian name for a group set on controlling online speech.

Related news (found on axios): Soros and other good guys have founded Good Information Inc. (actual company name, not a joke) to fight "misinformation", promote "good" news and take care of dumb peons in general who can't be trusted in forming their own opinions (for those opinions may be ungood).
