Hacker News

As cited in this article, Frances Haugen is arguing against allowing Facebook to use end-to-end encryption because she suggests Facebook should have more surveillance of private communications: https://twitter.com/AlecMuffett/status/1452309133928054799?s...

A lot of people jumped to the defense of the Facebook leaker because the media so successfully framed it as a “whistleblower” situation, but that’s not really what’s happening here. She’s very much an activist, and some of her suggestions really do not align with what the tech community wants at all.

Any time someone goes in front of the government and insists on less encryption and more government control of tech companies, we should be worried and proceed with extreme caution. No matter how much you dislike Facebook, this situation is no exception.




HN is a bit schizophrenic on this topic. On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better. Hopefully these aren't the same people expressing these contradictory points of view.

I'd be curious to know what both sides think the ideal social network should look like.


> On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better.

I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts


How do you look at spamming private messages to many recipients, resulting in a similar effect to posting publicly?

I know of bullying on FB where the harasser sends the same message to dozens of friends of the harassed. FB makes this easy to do, since a list of someone's contacts is often easy to find online, and there is no way to find or report these messages (as with a public post).

To me this presents a particularly tricky double-edged sword. E2E encryption is good in many cases, but when tied to an easy way to send many messages and easily accessible lists of people to target, it can result in a similar but more hidden version of public posting.

My guess is that this is being used today to disseminate similar content that is being restricted on public posts.

As far as I can tell, restrictions to limit the number and speed of private messages have not been effective against this kind of approach, and new accounts can always be created. In some cases, these messages go to a different "inbox" for non-contacts, but not always, and this just delays the receipt of the message since, again, they cannot be found or reported.

I don't know a good solution to this problem, but it's not one I've seen talked about.


There is no solution. Either you give people e2e and let them choose whether to do horrible things with that privilege, or you don't.

Maybe a middle ground is that every e2e message is hashed and the hash sent along with it, and if duplicate hashes are detected at scale, you slow the propagation to 1 user per day.
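Sketched in Python, that middle ground might look something like this (names and thresholds are hypothetical, and note the tradeoff: server-side hashes of identical plaintexts do leak some information about message content, e.g. to dictionary attacks):

```python
import hashlib
from collections import defaultdict

# Hypothetical sketch of the duplicate-hash throttle described above.
# The server never sees plaintext: clients submit a digest of the message
# alongside the ciphertext, and the server counts how often each digest recurs.

DUPLICATE_THRESHOLD = 1000        # digests seen more often than this get throttled
THROTTLE_DELAY_SECONDS = 86_400   # "1 user per day" propagation

hash_counts = defaultdict(int)

def message_hash(plaintext: bytes) -> str:
    """Computed client-side before encryption; only the digest is sent."""
    return hashlib.sha256(plaintext).hexdigest()

def propagation_delay(digest: str) -> int:
    """Return how long (in seconds) to delay delivery of this message."""
    hash_counts[digest] += 1
    if hash_counts[digest] > DUPLICATE_THRESHOLD:
        return THROTTLE_DELAY_SECONDS
    return 0
```

A real deployment would also need to age out counts and handle trivially mutated copies (which defeat exact hashing entirely).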


Perhaps our expectation of privacy should depend on what platform we use: no e2e on public platforms (i.e. Facebook), but e2e on other platforms where a username/ID/phone number that cannot easily be found is required.

I think the main problem is users using a single platform for all their communication instead of choosing one on a case-by-case basis.


The limits aren't always visible. In particular, it's a good idea if new accounts get heavily limited in invisible ways, and it's a moderate challenge to create large numbers of them that don't start off shadow-banned.
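A hypothetical sketch of such invisible, age-based limits (all numbers are illustrative, not any platform's actual policy):

```python
def daily_message_limit(account_age_days: int, trust_score: float) -> int:
    """Invisible per-account send limit; the user is never shown this number.

    trust_score is assumed to be in [0, 1] and to accrue from normal,
    non-spammy behavior. Brand-new accounts start heavily throttled,
    which raises the cost of mass-creating accounts for spam.
    """
    if account_age_days < 1:
        return 5
    if account_age_days < 30:
        return int(20 + 80 * trust_score)
    return 1000
```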


My question is what constitutes a public post? I feel like that definition is evolving. Is a WhatsApp group with 1k members spreading disinformation still a private message?


> Is a WhatsApp group with 1k members spreading disinformation still a private message?

Yes it is.

A “public post” is when the message was directed towards anyone who cares to listen.

A “private post” is when the message was directed to a specific group of recipients. The length of the list of recipients doesn’t grant a non-recipient access rights to the message.

This is easy stuff.
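The parent's definition reduces to the addressing mode, not the audience size. A hypothetical sketch:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Message:
    # None means "anyone who cares to listen";
    # a list means the message was directed at specific recipients.
    recipients: Optional[List[str]]

def is_public(msg: Message) -> bool:
    # Public iff not directed at an explicit recipient list;
    # the length of that list is irrelevant.
    return msg.recipients is None
```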


The distinction between "public" and "private" is gray and messy when it comes to corporate social media platforms. Is anything really private? How do you send a message to someone in Messenger? You're not sending the HTTP request to person X. You're sending it to Facebook with metadata that says "please make this visible to person X and consider it private". Then Facebook keeps the message and decides whether or not to publish it onward to person X, and whether or not to display it to anyone else (internally, externally, in logs, etc.). It's not like a package that gets sent over to person X's house. It stays on Facebook's property at all times.

When you post something to "your" feed, you're sending it to Facebook with metadata that says "please post this on my wall" or whatever.

To strain an analog analogy, this is not like the telephone or even the post office, where you hand them something or send out voice packets and they just look at the recipient and forward it on to the actual person. Everything you send is sent to Facebook and kept at Facebook.

Replace "Facebook" above with any social media platform or cloud service. They're fundamentally the same. I don't consider iCloud photos private. You're sending your content to Apple, not putting it in some safe that you alone control.


> Then Facebook keeps the message and decides whether or not to publish it onward to person X

An interesting weird angle: if X deletes their WhatsApp account, messages to them fail silently. FB stores the message under the false pretense that it will be able to forward it to X. I think iMessage does the same?


Furthermore, aligning on a definition is only half the battle. If you create some arbitrary line (100+ = public), forums will break up into many groups of 99. Or it will shift to viral messages that are forwarded in 1:1 threads. There is a fundamental asymmetry between bad actors (who are highly sophisticated) and regular users (who are generally unsophisticated) that makes it really hard to roll out tools/rules that have both high precision and high recall.


It really does not seem like a terrible idea to set an arbitrary cutoff at, say, 50 or 100 recipients. Not everyone is going to be happy, sure, but it's clear.

More important, I think, is the kind of "control" that's exerted: it should of course be about developing factually accurate sources and promoting those instead (maybe a warning: "This post contains keywords, detected anonymously, that suggest misinformation. Here is an alternative position").

The fundamental problem is that we are building our entire lives around a few systems that are completely opaque. Facebook, Google, and many other algorithms are still closed source, and to their own detriment they cannot reveal details of their inner workings. That's why we need a way to move to a radically open society that still allows for innovation and advanced technology.


As far as I know, legally speaking it would be a private message if the members were all members of a legal entity like a business. I.e. the subject of this HN article is a private message to the Facebook organisation, which clearly has over 1000 members.

I also think that legally (though this may vary from jurisdiction to jurisdiction) a message to one thousand people whom you do not know personally and are not in an organisation with, or where that organisation has an open-access policy (even if there's an entrance fee), would be a public message.

Of course, IANAL and just learned this from reading the news.

More interestingly however, is what Facebook or other media should consider public. In my opinion a WhatsApp group with 1k members should not be considered public, even if there's absolutely no bar to entry. Private companies should have no business policing private communities. If they've got concerns they should invite law enforcement to decide if there's any laws that are being broken.

The moment communication becomes public is when it goes outside the boundaries of the group. If my antivax aunt posts an edutainment video about how vaccinations cause autism, and it is clear that it is misinformation, and that video is not just in her group of weirdos but actually shared on her public timeline, then in my opinion Facebook should definitely come down with the ban hammer. There should be a little "report" flag that I'll use to snitch on my aunt. Even if it's on her "friends & family" timeline I wouldn't consider it private for the purpose of culling misinformation. She should specifically select a group of people who have self-selected to be in that group for it to be considered a private message.

Also, if a private group on a platform like Facebook/Whatsapp has members that are underage, and not all members are in a complete graph of direct friends/family, Facebook should require that group to have active moderation that conforms to some sort of platform wide code of conduct.


Yes, that is definitely a public post.


There are plenty of private email lists with 1000+ members. Should email providers be censoring misinformation in those messages?


And what about 100? 25? 10?


Depends on the type of invitation.


The invitation type can change at any time.


Yes.


How about a WhatsApp group with 20 family members?


That is also a private message.


> I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts

The same people argue that FB needs to censor more public content, and wring their hands in public over just how awful it is that in places like India, WhatsApp message forwards can spread memes that respectable people like western journalists and tech activists don't like.

It's a "who, whom" thing. There's no principled stance differentiating private from public with respect to control and censorship. It's become respectable in western political discourse to demand totalitarian control over the spread of ideas. Today's activists will do or say whatever it takes in the moment to bring the boots closer to human faces forever. Anyone who values human dignity needs to oppose this movement.


While I agree the elites are upset about losing power, there absolutely are issues of principle here.

There is such a thing as legitimate authority.

For example - there are people who do actual science, and other people who actually can read scientific papers and make assessments, there are people who have legitimate basic understanding of science, relationships with scientists, and consistently communicate reasonable information about that, as it relates to our world.

And there are people who make stuff up.

And very influential actors who will use a system without information integrity to their advantage.

These people wield enormous power and fundamentally shape outcomes for everyone.

Both the Truth and the Public Good matter. While the latter is more ambiguous, it's also material.

The question then becomes: how do we allow yahoos in their basements to say anything they want publicly (i.e. aliens invented COVID, the vaccine will kill you), how do we allow legitimate detractors to question classical authority (i.e. ivermectin might work), how do we try to unbias information when it's politically contentious (i.e. mask policies), how do we allow politicians to speak their minds, but not destroy their communities with irrational or completely irresponsible information (i.e. 'the vaccine isn't safe, you don't need masks, just eat healthy')?

I'm afraid we don't have the answers, but maybe some degree of 'proportional oversight' on the most public bits of information might be reasonable i.e. statements about the vaccine, when they reach a certain threshold of popularity, must have references to actual studies, or something along those lines.


> The question then becomes: how do we allow yahoos in their basements to say anything they want publicly (i.e. aliens invented COVID, the vaccine will kill you), how do we allow legitimate detractors to question classical authority (i.e. ivermectin might work), how do we try to unbias information when it's politically contentious (i.e. mask policies), how do we allow politicians to speak their minds, but not destroy their communities with irrational or completely irresponsible information (i.e. 'the vaccine isn't safe, you don't need masks, just eat healthy')?

How do "we" this? How do "we" that? Who determines what constitutes the group of "we"? Power to censor inevitably, invariably, and irreversibly gets used simply to deceive and propagandize people in favor of what the censor wants. Safety is no excuse either --- every tyranny begins with the excuse that unusual powers are necessitated by some emergency.

There is no such thing as "irresponsible information". I reject the entire concept. There are only competing truth claims, and some nebulous "we" made up of journalists and tech activists has no more legitimate basis for policing speech than anyone else. There is no "legitimate authority" over people's minds.

You're right that there is an issue of principle underlying the double standard: the principle is that some people think they ought to control what other people think, ostensibly for their own good. I wish they would at least be open about this principle.


'We' work together as a community and groups of communities.

Traditionally through Classical Liberal institutions like 'Executive, Legislative, and Judiciary' with elected representatives for oversight, but more realistically also through the '4th Branch of Government' i.e. the Central Bank, the security apparatus (Military, FBI, NSA, CIA), a Free Media with integrity (believe it or not, they don't just publish whatever, there are norms and standards), the '5th Estate' i.e. people with voices outside the norm, the Academic community, Industry, NGOs, Faith Groups, Cultural Institutions. Other nations and international bodies.

You're conflating the legitimate motivation for regulation with the means by which bad actors take power (i.e. "we can't use security as a rationale because then Stalin will come along").

'Security' is 100% a material issue, it's not even an argument - there are bad actors trying to do bad things all day long from petty violence to terrorism to invasion etc..

What that means is we have to take special care in those scenarios, usually by means of oversight and proportionality.

For example, the police can't just go into your home, they need a warrant, signed by a judge etc.. The laws the security apparatus use have oversight by elected officials.

There are no rules for this FB issue, it's the Wild West, and because it touches on issues of censorship, security, politics and now Public Health ... it's a tough one.


> 1) Less monitoring on private messages

The Facebook leaker is explicitly arguing against this though. She cites Facebook’s push for end-to-end encryption of private messages as a problem.


I'm merely clarifying HN's common opinion.

As for the whistleblower, I'm very skeptical of her — to be a tech PM against encryption, and somehow linking e2e encryption to making the platform less safe, is dubious at best. Removing misinformation and calls to violence on the Facebook platform doesn't need to include monitoring private messages.

The idea that she's been a PM at large tech companies for 15 years and doesn't understand that Facebook monitoring messages will mean China can monitor those messages is almost too suspicious to believe.


How do these two align? Why would FB storing messages on US servers, sent encrypted in transit (but not e2e), allow China to read them? If you allege hacking, then why wouldn't they be able to hack the devices instead?

Re misinformation: why would misinformation not simply happen in e2e group chats like it is already happening in e.g. Brazil or India? What’s the difference between posting to a group of friends on Facebook vs sharing a group message to those friends?

I do think messages should be encrypted but the trade off isn’t as straightforward as you make it sound.


The idea is once a company has some power over its users, that power will be used by some government somewhere as well. The latest example: https://www.nytimes.com/2021/09/17/world/europe/russia-naval...


> Why would FB sending messages that are sent encrypted (not e2e) and stored on US servers allow China to read messages?

Not parent, but I think the idea is that if BigCo does business in CountryA , then CountryA's government invariably forces BigCo to spy on their users who are residents.

Obviously compromising the user's device is a workaround open to governments but hard to achieve in bulk.


FB does not do business in China. This is rather a risk for Apple given they store keys in the cloud and do business there fwiw. I agree it’s a risk for almost all other countries.


I think you're missing the context of what this is about:

> Strangely, the example she gave suggested that Facebook needs to have looser encryption in order to somehow protect Uyghurs in China from government attempts to implant spyware onto their phones. [1]

And Facebook responded, sticking up for e2e encryption:

> A Facebook spokesperson responded to The Telegraph with what we all should realize at this point is the responsible approach to encryption: "The reason we believe in end-to-end encryption is precisely so that we can keep people safe, including from foreign interference and surveillance as well as hackers and criminals." There is no such thing as encryption back doors that only the "right" people can access. If they exist, they can eventually be found or accessed by others. [1]

[1] https://reason.com/2021/10/25/whistleblower-absurdly-attacks...


> Facebook leaker is explicitly arguing against this though. She cites Facebook’s push for end-to-end encryption of private messages as a problem.

One doesn’t have to agree 100% with an ally.


If someone is going to Congress and lobbying against end-to-end encryption of private communications, how are they an ally?


> If someone is going to Congress and lobbying against end-to-end encryption of private communications, how are they an ally?

Because they're also lobbying for other things you care about. And those things are more likely to be passed into law than the E2E encryption pieces.

Taking a puritanical view on an issue is a high-risk high-reward gambit. Nine out of ten times, it ejects you from the room. One out of ten times, you will organize sufficiently to make it a wedge issue (e.g. the NRA on guns, NIMBYs, et cetera).


There's a gambler's fallacy at work here, though. Our Fourth Amendment right to encrypted private communications is so important that if we lose it (or give it up), any future wins in areas like corporate transparency, monopoly regulation, and net neutrality won't ultimately matter. We won't have the freedom needed to benefit from them.

To the extent Haugen disagrees, she's not on "our side."


Sorry, what clause in the Fourth Amendment talks about encryption? Thanks in advance.


The plain text reads, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated."

It's not reasonable to outlaw the tools needed to exercise a fundamental right. Not OK for the 1st, not OK for the 2nd, and not OK for the 4th. Encryption was already an old technology at the time of the Constitution's authorship. If they had wanted to regulate or outlaw it, they were free to say so. They didn't.


> Encryption was already an old technology at the time of the Constitution's authorship...

...so if they had wanted to guarantee a right to it, they were free to say so.

But they didn't.

Despite clearly contemplating privacy.


> One doesn’t have to agree 100% with an ally.

Depending on how terrible the bad ideas are that they're pushing, they may not be an ally at all in fact. A multi-purpose trojan horse may be more accurate.

In this case, promoting the abolition of end to end encryption is quite heinous. She's providing the authoritarians a potent argument that isn't yet well established in the public mind (we have to be able to see all of your data so we can keep you safe from the Chinese trying to see all of your data).


Nor does one have to disagree 100% with an enemy.


Nor do you have to support an ally you agree with a majority of the time if they get some big things wrong.


How about (3) Less consolidation of power

Personally as someone who doesn't use FB and never will, I couldn't care less if Facebook wants to track and monitor every one of their users, monetize their every movement, and ban any message they want. In a free market you'd have thousands of social networks to provide competition with all sorts of different policies and ToS. The real issue is that one company is in a skewed position of power due to a broken marketplace. Fix that problem and all the other problems are irrelevant.


> The real issue is that one company is in a skewed position of power due to a broken marketplace

Nobody seems able to identify what the unfair advantage is.

The truth is that this is the nature of social networks: the successful ones tend towards monopoly. Why? Because more people attract more people. Access begets access. The value of a network grows with the square of its size [0], and higher-value networks attract more users.

You can't break up a social network without starting to make rules about who can associate with whom, which is a fundamentally anti-free position.

The problem is not Facebook. In its absence another would take its place. The "problem" is human nature, and that we were not designed cognitively for the types of networks that technology now enables for us.

We should focus on education, friendship, and real-world experiences. Legal fights against social networks in general or Facebook in specific are futile.

[0] https://en.m.wikipedia.org/wiki/Metcalfe%27s_law
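That square-of-size claim is enough on its own to explain the pull toward consolidation. A quick illustration, using Metcalfe's n(n-1)/2 pairwise-connection count as a proxy for value (the specific numbers are just an example):

```python
def metcalfe_value(n: int) -> int:
    # n users can form n*(n-1)/2 pairwise connections;
    # Metcalfe's law takes this as a proxy for the network's value.
    return n * (n - 1) // 2

# Ten fragmented networks of 100 users each vs. one network of 1,000 users:
fragmented = 10 * metcalfe_value(100)   # 49,500 connections in total
consolidated = metcalfe_value(1_000)    # 499,500 connections
```

Same total user count, roughly 10x the connections in the consolidated network, so users keep drifting toward it.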


How would thousands of competing networks deal with a government-enforced “no E2EE; wiretapping API required” law?


Badly, and that's the point.

I'd like social media to be run by everyone having a social media server linked to their home network (imagine something like a Raspberry Pi), with it being totally decentralised and every user controlling their own server.

Then if the government wants to shut it down, they have to raid everyone's home.


Or, you know, just work with the ISPs to make it illegal, like with BitTorrent.

Also, what you want already exists - you're free to go use Mastodon and run your own node. You can't possibly think that's a reasonable product your grandparents would be able to use.


> Or, you know, just work with the ISPs to make it illegal, like with BitTorrent.

BitTorrent certainly isn't illegal in the UK; it might be in other jurisdictions.

> you're free to go use Mastodon and run your own node

Would it work behind a NAT'ed router on a dynamic IP? I suspect it might not.

> You can't possibly think that's a reasonable product your grandparents would be able to use.

What I envisage is an SD card containing the OS + apps, you put it in the Pi, plug it into your router by ethernet, configure it via its web app, and you're ready to go. I think it ought to be possible to make it easy enough for the average person to install (certainly anyone who could install a router + internet would be able to).


> In a free market you'd have thousands of social networks to provide competition with all sorts of different policies and ToS.

You know, I want to believe that, but I don't think it's true.

Because of the "network effect", the value of belonging to a social network is mostly dependent on how many people are on it. This dynamic very strongly favors a few big social networks.


But the network effect of a social network is a status quo we kind of just let happen, and maybe that needs to be rethought. You could equally imagine a case where email was a walled garden — where Gmail users could only send messages to other Gmail users, Hotmail to Hotmail etc — and someone said "well, that's the nature of email platforms".

It doesn't have to be this way — if we had a common protocol for 'social' (or used & improved the ones that we already have), you could have interop of posts/comments/media between platforms and then the platform's 'secret sauce' be curation of messages, discoverability of people you might want to connect with, etc.

Facebook could focus on connecting with friends and be free but ad-supported. Instagram could be only media posts, supported by sponsored influencers in your timeline. Twitter could be focused on current events rather than your friends and be subscription-based, etc.

Similarly with IM: we had lots of competing IRC clients when that was a thing. As products, you could easily create competing messengers that speak to each other. You could even have additional things on top, e.g. if you have a FB<->FB message it can support 'stickers' or whatever that are outside the spec, but, similar to browser development, they _do_ get added to a shared spec that isn't controlled by one company.
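For a concrete flavor of what such a shared spec could look like: ActivityPub (the W3C protocol Mastodon speaks) already expresses posts as JSON that any conforming server can deliver to another's inbox. The payload below is a hand-written illustration with made-up domains, not taken from a real implementation:

```python
import json

# Sketch of a cross-platform post, loosely modeled on the W3C ActivityPub
# / ActivityStreams vocabulary. Actor/recipient URLs are hypothetical.
post = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://examplebook.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello from one platform to another",
        "to": ["https://othergram.example/users/bob"],
        # Platform-specific extras (stickers, etc.) could live in an
        # extension namespace, the way vendor features do on the web:
        "ext:sticker": "wave",
    },
}

wire_format = json.dumps(post)  # what one server would POST to another's inbox
```

Servers that don't understand `ext:sticker` would simply ignore it, which is how the "extras on top of a shared spec" idea tends to work in practice.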


> Less monitoring on private messages

As pointed out on Last Week Tonight, private messaging apps are a cesspool of misinformation [0], especially in the developing world.

But: Facebook can and does monitor private messages whenever any user flags / reports them [1]. The problem is, how effective is that mechanism, given that not many people know it's even there? Of course, e2ee mustn't be compromised, but it should also not be used as an excuse to let misinformation run amok. Maybe homomorphic encryption gets us there, maybe UI changes do. I hope Facebook acts swiftly and decisively either way, since e2ee (backdoored or not [2][3]) seems like the scapegoat here.

[0] https://www.youtube-nocookie.com/embed/l5jtFqWq5iU

[1] https://news.ycombinator.com/item?id=25211185

[2] https://news.ycombinator.com/item?id=13389935

[3] https://news.ycombinator.com/item?id=25685446


If someone has their Twitter set on private (only followers can see content), but they accept most/all follow requests, would you consider their content in category 1 (private) or category 2 (public)?


The obvious solution to #2 is encrypted private social networks that aggressively lock out the monitors. Continuing the monitoring would then require abolishing encryption; that's where the views in question (more freedom + less surveillance, and more content monitoring) inevitably are going to end up conflicting, and must always end up conflicting.


How do you ban monitoring of public posts?


> How do you ban monitoring of public posts?

I'm not suggesting you should or can in any practical way (how heavily - or not at all - that public posts should be monitored by the government is a different debate from what I was saying).

I'm saying that the parent comment claiming the views are not necessarily conflicting, is incorrect.

This must always conflict in the end:

> I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts

More aggressive public monitoring (along with the follow-on laws to regulate & punish a lot more things said in public) will inevitably result in a drive toward more encrypted private social networks that can't be easily monitored. Those private social networks will rely heavily on encryption. The aggressive public monitors will have to abolish encryption to then regain the high degree of mass content monitoring they used to have. Call it networks going underground, or dark; the authorities will come up with a negative naming scheme for it as they seek to castigate the shift.

You can bet on the rise of mass popular encrypted private social networks (likely built around subjects/topics/ideology/x thing in common; more like groups or subreddits than mass social media today, in other words). It's coming this decade. And the response from the government toward that is quite predictable. They'll use it as another argument against encryption.


If there is a drive towards more end to end private messaging, I'm OK with that. But I wouldn't call that "social networks"; in my mind there is a huge difference between an encrypted end-to-end message between two users, an encrypted message which is sent out to a large group of users, and a public post.

You can make an end-to-end message between two users perfectly secure, to the limits of engineering and the security hygiene of the two users. No problem there. If you have an encrypted message sent out to a group of users, as the group gets larger and larger, it's more likely that one of those users might be an informant to law enforcement, or will be sloppy with their message hygiene, so that after they get arrested invading the Capitol on January 6th (for example), law enforcement gets access to all of the information on their phone with the pin code 1234. Still no problem as far as I'm concerned. Criminals tend to be stupid, and that's good from a societal point of view.

Public posts are a different story altogether, because social networks have an economic incentive to promote "engagement". And if that engagement happens to emphasize messages that incite hate, or anger, or cause people to think that vaccines contain 5G modems, or whatever, hey, that's just good for shareholders, and in the capitalist system, shareholder value (especially VC value, for startups) is the highest good, right? Well, I have some real concerns about that. I think that corporations which prioritize engagement über alles, even when it causes social harm, should potentially be regulated.

And that's why it is quite possible for someone (like me) to believe that end-to-end encryption should be allowed, and promoted, even if it gives the FBI hives --- and at the same time argue that public posts should be highly monitored, from the perspective of trying to get insight into whether the amplification algorithms might be doing something unhealthy for our society's information ecosystem.


This is probably what would have naturally happened if the CDA were never passed. Instead, we've turned internet companies into geese that lay golden eggs for the government.


I would argue "HN" is not schizophrenic; "HN" is a bunch of users with different passions who will comment/vote on different things.


We should coin this the "hive mind fallacy". Though there probably already exists a name for this.


"Distributed hypocrisy."


> Hopefully these aren't the same people expressing these contradictory points of view. I'd be curious to know what both side think the ideal social network should look like.

I see social networks (and in many ways the internet as a whole) like a new country we’ve founded. It’s a bit different from countries made out of atoms. For a start, everyone there is a dual citizen with somewhere in meatspace. And instead of community centres we build websites. But it’s a place.

How are those spaces governed? Is it a democracy? No. Each social network is its own mostly benevolent corporate dictatorship. If you don’t like the rules, your only option is banishment.

Healthy communities (in healthy society) need rules to keep bad actors in check. And freedom to explore and be ourselves. Healthy communities in real life use participatory processes to figure out what those rules should be. You need people to feel like they have a voice. It’ll never be perfect. And different rules will make sense for different groups.

Facebook’s problem is they’re trying to be the government, the judiciary and police for billions of people from every country on the planet. There is no single set of rules and policies which will work everywhere. And even if there was, how do you police billions of people? AIs make mistakes.

I don’t know how, but I think FB needs to eat humble pie and find ways for communities to decide on (and enforce) their own social norms somehow. It’d be messy and disjointed, but so are people. Reddit and Discord do this - although they’re obviously very different products.

Tyrannies don’t have a strong history of making choices which primarily benefit their citizens. So far, Facebook’s track record hasn’t been much better. To improve, they need to acknowledge the position they’re in, learn from history and give some power back to the people who populate their site.


> I'd be curious to know what both side think the ideal social network should look like.

I'd rather they didn't exist, honestly.


How do you define social networks? Should email mailing lists not exist either? Those are absolutely social networks in a way, as were BBSes.


Not OP, but IMHO, the danger comes from the algorithms which lead to "engaging" content being pushed aggressively to everyone. "Engaging" can mean cat videos or "incredible looking food", but it means divisive, partisan, insincere and outright dangerous more often than not.

Inasmuch as a "social network" is just people signing up to talk about certain topics and that's it, I don't have a problem with it. Internet forums of the 2000s weren't a problem necessarily. And while HN does have some "virality" mechanisms built in due to upvoting and while it is sometimes a "problem" on very divisive topics, it's not nearly on the scale of Facebook, Twitter and Co.

So if it were up to me, Twitter, Facebook, Tiktok etc. should either disappear or they should at least have to revise their algorithms and open them up to public scrutiny. Or, you know, if they went back to their original purpose of just being about connecting with friends and family. But I guess you can't make money out of that.


I take your point. But I would need more convincing that just getting rid of algorithms/recommendations/etc. would suddenly make all the problems go away.


All? No. Conspiracy theories and disinformation existed before the internet, too.

But we can at least try to get rid of the things that make the problem worse.

Moreover, minimising hyper-addictive patterns on such platforms would have a host of other benefits too.


Another part of the problem is size. Older communities were minuscule compared to the billions of users each platform has now.


While I agree that it's a sticky problem, I think we can find a middle ground between "email mailing list" and "site dedicated to maximizing engagement by both encouraging the spread of rage baiting misinformation and deliberately maximizing the number of people who see it".


Yep, this is my point summed up pretty well. SMS can be seen as a "social network" if you really squint your eyes - clearly I'm not talking about BBS/forum software/etc.


Like any social network, HN falls into polarized, sometimes unproductive, discussion with certain topics. For HN, that's frequently discussion of big tech and their antics and discussion of censorship/moderation/surveillance.

Links about the intersection of these topics, such as apple/CSAM or facebook/moderation, are most likely to have comments that devolve into polemic without much productive discussion taking place.


HN is not a single person, it's a community with a variety of members with a variety of opinions. The fact that communities tend to form consensus of opinions, especially such as subreddits, is kind of a major issue.


Please keep in mind that some of us just want bad things to happen to Facebook and don’t have a pro or anti-surveillance agenda behind that.


> On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better.

That seems perfectly consistent. FB is already going all-in on surveillance and ignoring any notion of freedom; if they must destroy privacy, the least we can ask is that they actually do something useful with it.


Author of original blogpost here; I am seeing a lot of discussion here about "what constitutes a public group?" and so I wrote this to help with the discussion. https://alecmuffett.com/article/15095


In the case of FB, I actually believe having Instagram and Facebook not be e2e could even work, if people are educated and made aware of it, while WhatsApp remains e2e.

Instagram and FB are mostly "public" facing, so they offer a big surface for malicious activity (scammers; trolls; groomers and people seeking CSAM, which is always used as a reason for more surveillance; etc.).

WhatsApp is more private and requires knowing someone's phone number, which ideally should be harder to get ahold of.

Messaging on Instagram/FB could be compared to whispering in a crowded place, private...but not fully private.

In an ideal world this would not be necessary, but there will always be a fight between surveillance and freedom. And perhaps giving up freedoms in some areas could allow us to regain more freedom in others, as long as people are aware of it, which might be the biggest hurdle to tackle.


> HN is a bit schizophrenic on this topic.

What if there was a way to send private message, in a decentralized manner, and where it's not even possible to tell if the recipient has actually read the message or not?

There are blockchains using ZK proofs that allow you to do that today. Not only is it decentralized, so "fuck corporate surveillance/profiling," but it's also highly unlikely Uncle Sam and his offspring can break the elliptic curves and ZK cryptography these blockchains use, so "fuck state surveillance" too.

But then it's using the word "blockchain", so HN is pretty much against it.

And instead HN as a whole roots for "secure" messengers that leak metadata like there's no tomorrow while explaining that they're the best thing since sliced bread "because they're easy to use".

Go figure.


A solution is requiring social media companies to verify the identity of users. They don’t have to require other users know the identity but the company has to. This protects against sock puppet armies and makes bans easier to enforce.


Maybe decentralize completely: don't talk to my kids about your paranoid BS, and my kids should just see whatever kids want to see.

We don't exactly need a centralized, humongous social website. I'm on both sides, honestly: I want Facebook to subside in favor of isolated, more freedom-centric micro-networks where we can say what we want but won't massively sway crowd thinking?

That way, if monitoring must happen, it happens in isolation, and if freedom must exist, it's not in the same place as the other fucktards?


The future will not be censorship but categories. People will not get banned; they will get labeled, and you will filter tags out. Those tags will be connected to a person, and if you don't like a tag you will see less of that person's content. This will be useful in keeping the peace. It'll mess up dialogue, but most people have a hard time with empathy these days, so maybe certain sections will have open discussions for people who are okay with seeing content from other tags and are okay with engaging with that content. Right now dating apps will forever ban your account if you engage a certain way with the opposite sex, so in many cases a match will get banned because the person they matched with did not like their content. But that does not mean someone else won't like their content. So the people who have the most in common, including their communication styles, should be allowed to keep engaging with each other without getting banned by someone else's arbitrary rules, which are subjective depending on a person's upbringing.


> I'd be curious to know what both side think the ideal social network should look like.

I don't care what they look like as long as there's hundreds of them all on relatively equal footing.

I firmly believe that most of the major problems facing society today are not caused by any features of particular companies, but by the consolidation of power in a very small number of them.


Maybe the ideal social network is no one big "social network", i.e. get rid of your Facebooks and Twitters and go back to small, decentralized, localized forums and interest groups running on independent, secure, open-source platforms that are easy for laypeople to set up, maintain and moderate.


I think you need both. Small decentralized topic specific forums are important. But, without a larger community people tend to not realize the outside world may not share the same opinions. That leads to this tribalist attitude of hatred of "others".


I'm skeptical that seeing opposite viewpoints expressed by random people on the internet leads to shared understanding or less hatred.

If anything, research seems to indicate that tolerance to opposing viewpoints and revising of stereotypes comes from extended personal contact and having a shared sense of purpose, and that's difficult to achieve on the internet.


You are conflating schizophrenia with multiple personality disorder. This is a common, but harmful, mistake.


Perhaps you could provide some insight into why? I don't see how multiple personality disorder is a very useful characterization when you're describing a community made up of multiple personalities.


yodsanklai said HN is "schizophrenic" when accusing HN of being one body with two conflicting personalities: one that wants FB to emphasize freedom and the other that wants FB to police content. He said "Hopefully these aren't the same people expressing these contradictory points of view." (In my opinion, what's really happening is that there's some diversity of opinion among HNers, which is how it should be. As you point out, and yodsanklai implicitly admits, HN is not a single person, so there is no self-contradiction or multiple personality disorder. Beyond that, it's a mark of intelligence for people to be able to change their minds, but I digress.)

Schizophrenia would have been appropriate if he was accusing HN of being paranoid, hearing voices, etc.

Schizophrenia is not multiple personality disorder (actually called Dissociative Identity Disorder, I just learned). I am really curious to know why people started misusing the word schizophrenia in common parlance like this.


I don't think either side has a solution. But, that doesn't mean they can't detect an obvious subterfuge from outside the community.


I think what they want is more freedom for their own opinions, and more surveillance on people they disagree with.


HN isn't one person with a single set of ideals and opinions. It's a website with many people whose worldviews range quite drastically. Tech, like anything else in the world, is going to have a cross-section of humanity in it. You have libertarians, anarcho-capitalists, traditional GOP and Dems, European socialists, and full-on communists here. You can't expect such a diverse group to have a homogeneous view.


Exactly - every community has a diverse set of opinions (whether it's discernible to the average person or not is a different question), and you rarely find a homogeneous group of individuals without a very strictly enforced control mechanism that suppresses differences from being seen, thereby manufacturing a homogeneous community.


> her suggestions really do not align with what the tech community wants at all.

I feel like we as a tech community should take a step back and consider whether we should be the only ones who decide what should or should not be done. It is a massive country with many opposing viewpoints.

We have massive conflicts of interest and some tech companies have shown over and over that greed trumps morals in many cases. We are humans and we are not a group of enlightened arbiters that know what is best for the world.

We have done a shit job so far of managing ourselves. Sure in lots of cases we have made the world a better place but I think we need to be honest with ourselves and acknowledge that greed has crept in and supplanted that "making the world a better place" mission.


In my opinion, domain experts have a moral obligation to speak up regarding policy that involves their area of expertise.

On the topic of encryption, we're the only community with an understanding of what "breaking encryption" means. There are no skeleton keys, only vulnerabilities. We have a moral and ethical duty to fight for encryption, privacy, and security, not against it.
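The claim that there are no skeleton keys, only vulnerabilities, can be made concrete with a toy sketch. The construction below is illustrative only (a hash-based XOR stream built from the Python standard library, not real cryptography): the moment a "lawful access" escrow key exists that can recover user keys, anyone who obtains that single key - agency, insider, or attacker - can read every user's traffic.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes from the key. Toy construction for
    # illustration -- do NOT use this in place of a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

# True end-to-end: only the two parties ever hold the key.
user_key = secrets.token_bytes(32)
ct = encrypt(user_key, b"meet at noon")
assert decrypt(user_key, ct) == b"meet at noon"

# "Lawful access" escrow: each user key is also stored encrypted to a
# master key. Whoever holds escrow_key recovers user_key -- and with it,
# everything that user ever sent. The backdoor IS the vulnerability.
escrow_key = secrets.token_bytes(32)
escrow_copy = encrypt(escrow_key, user_key)        # user key held in escrow
stolen_user_key = decrypt(escrow_key, escrow_copy)
assert stolen_user_key == user_key                 # one key unlocks everyone
```

The point of the sketch is structural, not cryptographic: escrow does not add a special "government-only" capability to the math; it just creates a second key whose compromise is catastrophic for everyone at once.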


> There are no skeleton keys, only vulnerabilities.

This is something really unique to digital technologies. In meat-space, everything has a vulnerability: force. With sufficient reason and circumstance, whether or not you give up your keys, the government is still able to claim dominion over anything you're trying to hide. All they have to do is break a few walls.

Digital technologies don't have this problem (feature if you're the government). If you have your key stored in your memory, and there is no back door in the software, no matter what court orders anyone has, if you don't tell them the key then the information is lost to them forever.

This kind of breaks some fundamental assumptions laid down in law prior to tech. When you use encryption, tech really becomes an extension of your mind, and your 5th amendment rights effectively shut the government out of it completely. While I like this idea, it definitely poses problems for enforcement of basically every digital and financial law on the books, which can pose problems for all of us.


The use of force generally leaves evidence of its use, however. If the government cuts the door off your safe, and doesn't have a warrant to conduct that search, then the facts will speak for themselves when you avail yourself of your rights - e.g. to have evidence from an unlawful search suppressed in court and the subsequent civil rights lawsuit.

This is where the analogy breaks down - introducing backdoors and passkeys into digital security doesn't create the same trail in the physical world.

The better analogy would be - imagine that the government insisted that all locks be approved by the government, on the understanding that those locks would have master keys that the government owned. They would be able to come to a bank during the night, unlock the front door, the vault, and your safe deposit box; or come into your home when you were at work, unlock your filing cabinet and look at your records.

You would have to trust that government employees who had access to the master keys would only use them for legitimate purposes, and when authorized to do so. You would have to trust that the government could secure those keys at all times so they never fell into the wrong hands, and that no unauthorized duplicates would be made.

You would have to do all those things bearing in mind that the government regularly loses such master keys, has employees that exceed their authority, misuse government resources, demonstrate poor custody practices, and so on.

It doesn't seem like a good bet to me.

EDIT: oh yes, and I forgot to mention there's a whole bunch of other bad actors out there who know how to buy a government-approved lock, take it apart, and are highly motivated to make their own master keys.


Yes, I agree that we should absolutely provide advice on our area of expertise for topics in our domain but I don't think we should be the ones that make the decision. The other side of the debate aren't idiots, they have some valid view points, else this encryption debate would have been settled long ago.

Encryption was not the only point brought up by the OP, they were also against government regulation. My main point is that we are tech workers that have conflicts of interest and that we should look hard at our viewpoints and ask ourselves if they are better for us or better for everyone. We are humans and like other industries, greed has crept in and we don't always make great decisions when left on our own.


Would you care to share what you think the valid view points against encryption are?

For my own part, I find it helpful to remember that questions over encryption have been ongoing for several decades. Historically, the arguments boiled down to shouting about CSAM, terrorism, drugs, etc. Mostly it tended to really be about law enforcement agencies wanting unfettered powers of surveillance.

What's perhaps most interesting to me is that the prevalence of encryption seems to have done very little to stop the FBI from catching people. I know this next point will be contentious, but perhaps there's room to question why these people, who are indeed not idiots, are so against encryption for you and me.

So yes, you're absolutely right. Let's look hard at our viewpoints and biases and expertise and paychecks and ask ourselves - why does the FBI want us to not have encryption? Why do we want it? Who has the valid view points here?


I don't think it's that complicated or nefarious. Law enforcement are humans just like us, mostly trying to do their jobs with as little effort as possible. Encrypted communication makes their jobs harder in some cases, so they don't like it.

> What's perhaps most interesting to me is that the prevalence of encryption seems to have done very little to stop the FBI from catching people.

I don't think we can know this. Or maybe you have some data? But more importantly there's a big gap between "catching" someone and "convicting" someone. Having the content of specific communications can make a big difference in actually proving guilt.


> I don't think it's that complicated or nefarious. Law enforcement are humans just like us, mostly trying to do their jobs with as little effort as possible. Encrypted communication makes their jobs harder in some cases, so they don't like it.

I completely agree. My point was not that their motives are inherently evil per se, just that they're as self-centered as anyone and carefully gazing deep into our navels does not imbue their perspectives with extra validity or compelling strength.

> I don't think we can know this.

We do know that the FBI routinely arrests and prosecutes people, even ones that use encryption. There's no end of public sources showing it occurring again and again. The fate of Dread Pirate Roberts is a good example.

> Or maybe you have some data?

What data would you like? The FBI has statistics available going back to the 1930s: https://www.fbi.gov/services/cjis/ucr

> Having the content of specific communications can make a big difference in actually proving guilt.

You're absolutely right, it definitely can! Which is why the FBI has learned how to build cases without such things. They've also learned how to gain access to encrypted communications. Between the two, some might opine that it's enough to raise questions about whether they really need to prevent you and me from having access to cryptography. They're clearly experts at law enforcement and perhaps have no need to confiscate the tools of mathematics from technologists.

Which brings me back to the point. Let's look hard at our viewpoints and biases and expertise and paychecks and ask ourselves - why does the FBI want us to not have encryption? Why do we want it? Who has the valid view points here? What have we gained from this navel-gazing?


We live in a world made of spectrums where we must balance opposing desires. We cannot have prosperity without security (in the law enforcement sense) and we cannot have freedom with absolute security but we make compromises to try to find a balance between those opposing forces. It is an incredibly hard problem to solve but I don't really know where that perfect balance is and I'm not sure anyone knows.

I don't want to live in a police state and I'm not sure of anyone else that does. I also don't want to live in complete anarchy or even a libertarian state because I think there are bad people out there that would take advantage of that and potentially hurt other people.

It always seems to be hand waved away on this forum, but perfect computer security could indeed help criminals out there. I don't know the numbers around how many more but I really believe that no one knows.

I make no assertion that one side is right and the other is wrong, just that it is a hard problem and that I don't know the answer.

Where and how do we find the balance? I know the law enforcement community pushes against computer security to attain better security (in the law enforcement sense) and I know that tech people generally push for computer security that limits law enforcement abilities. I don't think we can let one side of the argument completely win, but how do we find that balance?


You seem like an intelligent, thoughtful, and educated individual. Perhaps you might find it interesting to read up on the history of attempts to pursue the precise balance you suggest is desirable: https://en.wikipedia.org/wiki/Clipper_chip

If you seek numbers, the FBI invites you to inspect their data: https://www.fbi.gov/services/cjis/ucr

As you correctly and wisely say, the world is full of hard problems with many strong, clear, compelling, and valid viewpoints to balance. It may just possibly be worth considering that this is not one such scenario - that here, balance may not be possible, let alone achievable. Do you think there might be cases, in the fullness of the human experience, where one set of extremists on an issue is right, their opposite numbers are wrong, and any balance between them also wrong? What if what seems to be a balance is based on a false premise?

Perhaps we should stop, take a step back, and examine our biases?


Yes, I absolutely think that finding that balance is usually a case of finding the unhappy minimum and not about finding a spot where everyone is happy.

> Do you think there might be cases, in the fullness of the human experience, where one set of extremists on an issue are right, their opposite numbers are wrong, and any balance between them also wrong?

Yes, I think there probably are cases where something like this happens but I would assume those cases would be a little less debatable. But yes, what you say makes a lot of sense and is probably a large part of the reason the current political climate is so tribal.

> Perhaps we should stop, take a step back, and examine our biases?

If you are talking about the current case though, what is the false premise or what are the biases?

There are indeed monsters out there that hurt women and children and use technology to accomplish those crimes. They are not made up bogey men. They flock to technology that provides them cover and punishments for crimes deter criminals from attempting those crimes. If criminals did not have to fear punishments, more crimes would happen. I certainly would speed more if I didn't have to worry about speeding tickets.


> Yes, I absolutely think that finding that balance is usually a case of finding the unhappy minimum and not about finding a spot where everyone is happy.

I'm glad we agree!

Here's where the hot take comes in: we're there now. Encryption is something we have access to. Law enforcement manages to work around it on a regular basis and has now for several decades.

> Yes, I think there probably are cases where something like this happens but I would assume those cases would be a little less debatable.

One would certainly hope so, but it's perhaps possible that this might not always be true. There are often people ready and willing to debate what should be undebateable. It shocks the conscience.

> If you are talking about the current case though, what is the false premise or what are the biases?

Some people have come to this conversation with the false premise that taking away encryption will substantially help law enforcement, improving safety and security for our vulnerable friends, neighbors, and community members. Their pain, suffering, and exploitation is very real, yet that perhaps does not always make functional what is done in the name of keeping them safe.

Some have come to this debate with the bias of assuming there is a useful policy medium to be found. Perhaps there is not, as we might be dealing with technical matters that are all-or-nothing in unavoidable ways.

Some may find these to be perhaps worthy of careful examination, as such things can perhaps lead to dangerously misguided policy - such as the Clipper chip or mass surveillance.

Thank you for being thoughtful and centering kindness, mercy, and compassion.


I have yet to see a compelling argument for enshrining security vulnerabilities in law outside of the usual sob stories that like to get echoed in Congressional testimony.


> We are humans and like other industries, greed has crept in and we don't always make great decisions when left on our own.

I feel strange having to point out that Governments are made up of people too. If it's not people that are making the good decisions, whom do we turn to?


Yes, democracy is the worst form of governance except for all the others. I trust our government more than I do Zuckerberg. Letting him continue down the path that he and only he decides does not seem like a good idea.


I find it odd that you exalt democracy and the idea of limiting individual freedom in the same paragraph. Zuckerberg isn't a government unto himself and answers to shareholders and board members; one of which served as Chief of staff for the National Economic Council in the Obama administration. You seem to think that our government and large corporations are separate and isolated organizations, but the people that work within them often slip between the two. Taking Zuckerberg out of the equation will certainly not change the game.


Due to Facebook's corporate structure, I don't think Zuckerberg actually answers to the shareholders and board members.


So then what is your solution to this?


> I don't think we should be the ones that make the decision

Is Admiral Poindexter a preferable decision-maker?

https://en.wikipedia.org/wiki/Total_Information_Awareness


> My main point is that we are tech workers that have conflicts of interest and that we should look hard at our viewpoints and ask ourselves if they are better for us or better for everyone.

That is a trait lacking in almost everyone these days, not just tech workers.


To play Devil's Advocate and use your own point against you;

Some software engineers might be experts in the domain of encryption, but there are other professions who are experts in the domain of National Security.


Tech people have tech knowledge, but they are not policy experts. Our job is to give the people creating policy information on our area of expertise, not theirs. If you don't agree, look at the comments on this post. They are shallow and tech-centric, not considering all of the various impacts of a comprehensive policy.


It's not obvious that Ms. Haugen provides any additional expertise in policy matters. What are her achievements in the policy space that merit deference to expertise?


Her expertise and motives are separate from the public utility of what she revealed. I'm not going to denounce her for leaking this material: I can separate the utility of having this material in the public record from her reasons for putting it there.

As for her expertise, she worked in FB's civic integrity unit as a product manager. I may disagree with her, but she does have a reasonable basis to claim better knowledge on the topic than the average person.


> We have massive conflicts of interest and some tech companies have shown over and over that greed trumps morals in many cases.

How is using end-to-end encryption of private communications a case of greed trumping morals? If anything, encrypting private messages and preventing others from reading them - be it Facebook or the various world governments - seems like the obvious moral move.

I think a lot of people, tech or otherwise, are projecting their own ideals on to this Facebook leaker without looking closely at the details of what she’s been lobbying for. Most people seem to want less surveillance and interference with private communications by Facebook but she’s arguing for much, much more.

But that’s the problem: The issue has now been so dramatized in the media that the average viewer doesn’t really know what’s being proposed, they just see “Facebook bad, whistleblower good” and assume it’s what they want.

It’s up to people who know the subject matter, including the tech community who understand things like end-to-end encryption and government regulation thereof, to speak up.


I think one of the big sticking points of the problem is the challenge that power innately represents. It is very difficult to have an exceptional steward of power over long time periods. The government has done a poor job, individuals have done a poor job, crowdsourcing responsibility has done a poor job, corporations have done a poor job.

I am not trying to throw my hands up in defeat, but pointing to an intransigent problem that is not easily fixable, however simply every side of the argument tries to frame it and why they should be the arbiters of power/control. I understand the reluctance of any individual or company to hand over a large portion of control to a government or other community that doesn't have a track record - it is a fraught situation.


Yes, democratic governments have many drawbacks around regulation, but aren't they the least bad choice we have in a situation like Facebook's?

There is no governance beyond Zuckerberg currently, and he has proven that he is aligned with Zuckerberg and not society. Continuing with the status quo and doing nothing does not seem like a good idea to me.


I mean, he's been pretty open about his goals and moral code for a while now. The entire company was built on the assumption that being connected to as many people as possible is net good.

You can probably argue that he's wrong on this, but it's hard to say that he just cares about himself.


I doubt it, but I guess it's possible that is his only true north. If it is, he is going about it in a Machiavellian or Caesar-like manner, where he does not care about the impact he has while attaining those goals. But I also think it's possible that his actions are those of someone who is power-hungry and does not care about what happens in his wake.


> If it is, he is going about it in a Machiavellian or Caesar like manner

The dude is a classics nerd, and supposedly a bunch of the original FB mottos were in Latin (Fortune Favours the Bold, at least). That isn't surprising to me, at least.


There is an excellent, obvious solution. And it's the one Facebook refuses to consider because it's a large negative economically.

Give users a lot more control. Over their feeds. Over what they receive or not. How they receive it.

Alternatively, abolish the feeds entirely. Make people seek out updates by other users manually. It's time consuming, it drastically slows down the reaction-agitation cycle. People then focus on consuming only what's most important to them to a far greater degree, as their time is limited. Facebook moved away from that early approach by intention to spur time-on-platform, consumption, engagement.

In the offline space you have a small number of actual friends, a small number of people you can actually keep up with, because of time constraints. That's a good thing, not a bad thing. It keeps people focused on what's most important to them. Facebook seeks to generate the opposite outcome. They want max engagement and consumption, so now you've got two thousand fake friends, and two hundred more relevant people within those two thousand fake friends that you receive updates from on a daily or weekly basis. That's insane. There's no other good way to put it: it's insanity. It's bullshit. It's fake, it's inhuman. That mass consumption of content, which the feed/wall was built to accomplish - how much of it matters to the end user? I think the answer is very little. Put it back into the user's hands to seek out profile page updates, to seek out updates by their friends - they'll do it if it really matters to them, and they'll massively neglect the users that don't matter to them. This is also where Facebook's finances implode; most FB employees are self-interested in that outcome not happening.

The central issue is: the core mission of Facebook (today) is a fraud. Everyone should not be connected. Everyone does not want to be connected to everyone else. It is not a great ideal to pursue connect-everyone, it's fundamentally anti-human. It will not make the world a better place. That ideology needs to be challenged, and it rests at the center of Facebook.

Facebook has built itself to agitate for attention. They designed the feed/wall to prompt artificially high levels of engagement. Give that control back to the user in spades. People will adjust their systems, they do know what's best for them when it comes to this matter (and a lot more so than a politician thousands of miles away that has never met this person, or Facebook corporate).

Society sculpting by some ruling authority, some group that happens to be dominant at a given time, is a truly horrific approach. It will accelerate the splintering into tribes, and accelerate oppression by the government.


And you think partisan politicians and activists will make unbiased decisions with no negative unintended consequences?

Encryption is not a pro-tech position. It is a pro-privacy position and you don't need to be a tech expert to see the value in that.


> I feel like we as a tech community should take a step back and consider whether or not we should not be the only ones that decide what should or should not be done.

As an engineer, I'm in a better position to understand what the f is going on regarding encryption and, say, nuclear power than most people.

No amount of wishful thinking will change that.

> It is a massive country with many opposing viewpoints.

World. Not country.


I have seen plenty of comments by engineers here containing wildly incorrect statements about nuclear power. In general engineers seem to overestimate their own competence in areas outside their immediate professional experience.


> As an engineer, I'm in a better position to understand what the f is going on regarding encryption and, say, nuclear power than most people.

Yes, you are an engineer and understand the implementation details and potential side effects, but what about the opposing viewpoints like those from law enforcement? Are you an expert in law enforcement? Your viewpoints should trump theirs why exactly? You really don't think that it's possible that you can't see the forest for the trees?

Do nuclear scientists define nuclear energy policies or do they provide advice to those that define them?

> World. Not country.

I don't think we have a world government that will make regulations in regards to facebook.


> but what about the opposing viewpoints like those from law enforcement?

You're right, I may be ignorant of the issues that support the need for weakening encryption, but the fact is that I can evaluate the other side of the equation. And if the side pushing against encryption is making terrible arguments, I can see how terrible they objectively are.

> Do nuclear scientists define nuclear energy policies or do they provide advice to those that define them?

Similar thing: the arguments against nuclear power are idiotic and typically plain wrong and based on irrational appeals to fear.

Just like the previous issue, if a decision has to be made, it has to balance the pros and cons. Even if I can properly evaluate only one side of the equation and find it lacking, I can certainly have serious doubts as to the legitimacy of the choice presented. And that's one side more than most people can grasp.


> I feel like we as a tech community should take a step back and consider whether or not we should not be the only ones that decide what should or should not be done

When it comes to encryption, tech activists say tech should take a back seat to policy-makers and let them decide on the rules. When it comes to internet censorship, tech activists says "but my private companies!" and argues that techies decide on the limits of acceptable discourse for the whole world.

What do these stances have in common? That activists are arguing that tech should do what power wants.

There are no principles at work here. There's only a "who, whom" power dynamic. Activists will say whatever is effective in the moment for achieving their immediate aims, and right now, that's being good little sycophants for people with broad, unclear, and definitely undemocratic influence over public affairs.


> We have done a shit job so far of managing ourselves. Sure in lots of cases we have made the world a better place but I think we need to be honest with ourselves and acknowledge that greed has crept in and supplanted that "making the world a better place" mission.

I think "we" have been largely absent from the conversation as heard by the rest of society. You can't say "trust the software engineers" like you can say "trust the doctors," not because we don't have ethics or expertise, but because what most people think of as the voices of our profession are the PR mouthpieces for companies like Facebook and Google.

Speaking person to person and in online forums such as this, you'll find most engineers concerned about privacy and the societal and emotional harm of social media, and reflexively distrustful of corporations that speak about issues they have a financial stake in. But that's not what non-engineers think we think. The assumption seems to be, well, you're a techie, so of course you uncritically embrace all that stuff. People who know me better and know I don't think that way seem to regard me as less of a techie for that reason, which goes to show how little they are aware of sentiment in our profession. I think that's what we need to fix.

A profession like medicine suffers a little bit from the same bias, where people tend to assume that doctors are personally pro-surgery, pro-drugs, etc., but doctors have a professional structure that creates recognized authorities, which means journalists have somebody to talk to when they want to get the overall take of "doctors" on an issue where their expertise applies. When writing about a public health issue, journalists have no problem getting an independent perspective from doctors with credibility, relevant expertise, and no direct financial stake in the issue.

Software is in a position analogous to if the only doctors visible in the media were PR flacks for hospitals and pharmaceutical companies.

A big cultural difference between medicine and software is that doctors are traditionally trained to bear the mantle of authority. They are trained that commanding the trust and respect of patients is vital to providing care to them, and that it is a skill separate from and equally necessary to the technical skills that make a doctor worthy of that respect. Tech people are socialized to beware of the dangers of authority and distrust those who seek it. I may be speaking as an old-timer here, but when I was young, I learned to lionize the scientists and engineers who rejected the accoutrements of authority, who wore cheap and frumpy clothes, who let their hair go crazy, who reveled in stories of their own stupid mistakes, who actively punctured the mystique of authority so that they would only have as much respect as their knowledge and achievements alone would earn them.

In other words, medicine comes from an old tradition, which long accepted that wielding authority and being worthy of it were separate skills. Keeping the two connected was a moral responsibility that fell on individual doctors and on the profession as a whole. The tradition of software is much younger and was much more deeply marked by the counterculture, which took an opposite approach to the problem of authority, declaring that we could and should unlearn our reflexive deference to the superficial aspect of authority and replace it with a critical, informed consumption of the expertise of other members of society. To the counterculture mindset, it was unacceptable that society should be at the mercy of the closed ranks of a profession privileged by its exclusive knowledge.

I think that the counterculture perspective was an important corrective, but it is incomplete, because the problem of authority hasn't gone away. Even if we despise authority, we still depend on it, so the question is: how should we as a profession create and elevate professional authority? How do we make it possible for a journalist to easily get a read on what software engineers think as professionals, distinct from the official line of large corporate employers of software engineers?

Maybe software needs a replacement for the old professional societies, except with an emphasis on policy and public education, instead of expensive journals and social events designed to help you find your way into the old boys' club. I have no idea what such a group would look like, though.


Some part of me feels like this would be like letting the country decide on vaccines.

I don’t think people have any idea what end-to-end encryption means. They just hear the occasional slogan (“it keeps your messages private”, “it lets terrorists hide from cops”) and assume there’s some valid choice between totally crippling online security and totally crippling people stalking your communications. I don’t even think people understand the implications of what "communications" means. Why else are we still talking about this issue in abstract terms when the harms are precisely enumerable and the supposed benefits can be completely debunked as fantasy?

All this stuff is so depressing. Free speech online is no longer in vogue, and the ACLU, EFF, and other usual heroes of these sorts of battles fell asleep at the wheel. It’s sad to think I might see the death of something so beautiful.


> some of her suggestions really do not align with what the tech community wants at all

I agree in general with your comment, but what I don't like is anyone (Haugen, you or anyone else) speaking for "the tech community". We all have different opinions, and you can't generalize with that.

For example, plenty of people I know including myself would be strongly against less encryption, but strongly for more government control of tech companies (not via less encryption but via other means).

It's really hard to generalize, and in this case we gain nothing by generalizing so let's do it less, not more.


> For example, plenty of people I know including myself would be strongly against less encryption, but strongly for more government control of tech companies (not via less encryption but via other means).

But that’s not what Haugen is lobbying for. She’s explicitly saying that end-to-end encryption is bad because it doesn’t allow Facebook to monitor private communications enough.

I’d be surprised if you could find more than a tiny minority in the tech community who agree with this.


I understand that, that's why I expressed that I don't like them expressing it as something the "tech community" wants...


That's not the position that Haugen took testifying before Parliament in the UK -- where she expressed strong support for e2e encryption. She said her views had been misreported earlier -- that she'd questioned whether Facebook could be trusted to implement e2e properly, and that the reporters had garbled it.

See, e.g., https://twitter.com/sleepdefic1t/status/1452724217393434636


I don’t understand why it matters what Haugen believes - whether in “no encryption” or space fairies or whatever - instead of what she proved, which is the profoundly amoral and dishonest nature of the company.


It matters because she's using the one to promote the other.

We should be able to say that the leaks themselves are good, but also that they are making a transparent political play for control over Facebook (more than reining in their amorality and dishonesty).

Whistleblowers do not usually have the support of a top tier lobbying firm (Bryson Gillette) paid for by a rival tech billionaire (Pierre Omidyar). I say take the leaks, but say no thanks to the "advice" it comes with.


She isn’t just a neutral presenter of data, she is a spokesperson and advocate. She is positioning herself to be a decision maker, or help decision makers.


Isn’t a whistleblower by definition not neutral? I’m not even sure what a neutral whistleblower would look like.

Should her motivations be questioned? Absolutely! But if you don’t like her stance, just say so instead labelling her an “activist” or “advocate” like there’s something inherently bad about those things.


I didn't intend to imply anything bad about being labeled an activist or an advocate. "Whistleblower" carries something of a neutral veneer. Whistleblowers are often seen as exposing something objectively wrong, a broken law or crime. That's not really what is happening here (though it wouldn't be surprising to see laws have been broken). The main thrust here is that Facebook is bad for society, which, IMO, is more activism than whistleblowing. No objective answer here! Just how I read the nuance of the language.


Frances Haugen's own motives are not the defining factor regarding the public utility of leaking this material.

Her motives are her own, and once her information is out in the open it isn't up to her to decide what we all do about it. I'm not about to dismiss what was revealed just because I disagree with some of her opinions about that material.


> motives are not the defining factor regarding the public utility of leaking this material

An evaluation of likely motives is incorporated into the evaluation of the evidence. There are a lot of things that can't be known: How much is true? How much made up? How much is true but not representative of the totality?


> Frances Haugen's own motives

Motives aside, whether she is acting as a mole for intelligence agencies to gather support for enabling more 'round the clock surveillance of wrong-think under the guise of "blowing the whistle" is a defining factor.


If you have evidence of her working as a secret intelligence agency undercover operative, then I'll incorporate that into my views on this issue.

However, I think I'd still be glad that the information she leaked is now public, just like I'd be glad if a plot by intelligence agencies to systematically censor social media was made public, just like I've been glad when prior abuses by the government have been made public. Get it all out in the open.


As much as some on HN like to criticize Moxie/Signal for their decisions (Intel SGX debacle), seeing these kinds of sentiments get cheered on in the media makes me really glad they exist. Same for the Tor Project, Matrix.org Foundation, EFF, etc. Can’t be easy being in their positions ever, but especially right now.


I think, two entirely different topics became mixed up here: the privacy of point to point messages and the impact of algorithmic enhancement of public messages. To my best abilities, I fail to see a connection. (That is, you may suggest that the latter may lead to an increase of "problematic" calls to action in private messages, but, then, you've already messed up in the first place, by enhancing the impact of public messages.)


I also don't see anything too whistleblower-worthy in the goodbye post.

It just seems to be a vague message that although social media sometimes have a positive impact, it also sometimes has a negative impact, and that Facebook has employed strategies to help it grow. Hardly shocking findings!


Web 3.0 offers the best solution to this. Web 2.0 companies, and the people that control them are just going to continue reaching for more control of the data we create, and give them for free. It's a problem that a small group of elite engineers have so much power.


Based on what? The twitter link is just a picture with zero source attribution. He uses his own twitter status for his article, so he's the source of his own article. At this point I find the entire blog post, the twitter post, and the credibility of the author lacking.

I'm not defending anything that Frances Haugen did or said, but I find it disturbing that anyone can take the author seriously. If he can't find support for his argument that doesn't come from himself, then he has no argument, and nothing he writes is verifiable beyond the actual documents he does link to.


> against allowing Facebook to use end-to-end encryption

Apparently three of her lawyers are connected to the US intel agencies. This is entirely US-gov friendly whistleblowing and they know which horses to back.


"what the tech community wants" is pluriform and broad.

However, what we clearly see here, is how one company can almost dictate "what all of us get". If Facebook decides to have e2e, that is what we have. If it decides to have "less encryption" that is what we get.

The real issue is not what governments, tech communities or whistleblowers want. The real issue is that it matters very little, because in the end we get "what facebook wants" regardless. And that, according to those leaks, is entirely driven by profit.


He's quoting a Telegraph article which misrepresented Haugen's view. She's for end-to-end encryption but concerned Facebook's implementation of it will be closed-source and not open to scrutiny. It's about 2h15 into yesterday's UK Online Safety Bill Committee testimony:

https://www.parliamentlive.tv/Event/Index/cddf75b6-4279-43db...


There isn't any reason someone can't be both a whistleblower and an activist.

> and some of her suggestions really do not align with what the tech community wants at all

For what it's worth, she says her views on E2E encryption have been misrepresented.

Also, where is this single "tech community" with cohesive views on all this? I've never seen it. I'm pretty skeptical of anyone claiming to speak for the tech community.


> For what it's worth, she says her views on E2E encryption have been misrepresented.

My link above has direct quotes from Haugen regarding E2E encryption.

I don’t understand why people are so eager to project something different on to what she’s directly saying.


I'll just note, I'm not projecting anything.

Do you really think those tweeted screenshots sum it all up accurately, though? Considering that she says her views are being misrepresented, couldn't those two quotes be cherry-picked? You know the reputation of the Telegraph, right?

A guy tweeted that someone said (unattributed, but I assume the Telegraph) that she said.... It's just not solid. If you're going to accept that uncritically, I think you're essentially believing what you want to believe.


One other interesting thing to note is that the whole Facebook whisteblower campaign is being funded by Pierre Omidyar https://www.theverge.com/2021/10/20/22737042/facebook-whistl...


One can believe that the whistleblower is correct in their identification of the problem and believe that the whistleblower is wrong about the best solution. This is not an inconsistent position.


There's no rational discussion around these e-mob hate trains. It boils down to "You're either with us or against us".

I guess we're focusing on hating facebook this year.


Whistleblower and activist are not mutually exclusive. Most are both, like Snowden was both. Reality Winner too. And whether I agree with someone's views is really not relevant to whether they are a whistleblower.


I’d go one step further and say a whistleblower is by definition an activist. The entire point of leaking is to instigate change.


This is a very common deflection tactic by corporations.

Try and discredit the leaker over the information that was leaked.

We saw it with Wikileaks and now we will see as more and more tech employees start sharing what goes on behind the curtains.


The leak is more important than the leaker. That I support and value the Snowden leaks doesn't mean I always have to agree with Snowden (I think he's terrible on macroeconomics and crypto, for instance).

No whistleblower has ever received such a red carpet as Frances Haugen. In itself, it's good that she's aggressively defended by the political establishment rather than facing reprisals and jail. But it should make you question what's going on here, quite apart from the contents of the leak itself.

One of the things Haugen proposes, repealing section 230 protection, is even supported by Facebook itself and strongly opposed by antimonopolists (who argue that it will be far easier to comply for Facebook than any would be competitor).


I haven't been able to find a clip of it, but Biden recently brought up that democracies are having a hard time keeping up socially with the pace of innovation, and that autocracies are reacting quicker. He wasn't advocating for autocracy; merely pointing out that democracy is struggling on this point. I've been thinking about this a lot whenever government oversight is brought up.

I'm not convinced the war against misinformation is any more winnable than the war on drugs in a free, self-determining society. The best we can hope for is to curb the worst consequences by slowing virality.


> her suggestions really do not align with what the tech community wants at all.

Speak for yourself. Facebook has become a threat to my country's democracy and stability and has enabled all sorts of horrific violence. If Facebook could be trusted to do the right thing I would feel differently but they've shown time and again they don't have the capacity or will to do so. "Move fast and break things" apparently applies to everything Facebook touches and so they should be regulated and controlled like a toddler.


> As cited in this article, Frances Haugen is arguing against allowing Facebook to use end-to-end encryption because she suggests Facebook should have more surveillance of private communications: https://twitter.com/AlecMuffett/status/1452309133928054799?s...

That's because encryption is incredibly problematic. And I say that as a huge fan of digital privacy and an avid user of Signal.

I'm not so blind as to think that perfect encryption is an unalloyed good, and in that screenshot (which, I'll point out, excludes any broader context for her remarks) Haugen touches on just one of many very legitimate problems with the technology.

Now, in the end, I think (though I'm not certain) the upsides outweigh the downsides. But don't pretend as though she doesn't raise a valid concern, even if you don't agree with her conclusion.

> A lot of people jumped to the defense of the Facebook leaker because the media so successfully framed it as a “whistleblower” situation, but that’s not really what’s happening here. She’s very much an activist

All whistleblowers are activists.

Do you really think Snowden didn't have an agenda? Of course he did! His act was specifically and explicitly political.

Hell, Wikileaks has proven itself to be nakedly political.

The only reason I'm betting you don't see it that way is you happen to agree with their politics.

> some of her suggestions really do not align with what the tech community wants at all.

Don't presume to speak for me. There is no unified "tech community" on this topic, even if your echo chamber leads you to believe otherwise.



