A lot of people jumped to the defense of the Facebook leaker because the media so successfully framed it as a “whistleblower” situation, but that’s not really what’s happening here. She’s very much an activist, and some of her suggestions really do not align with what the tech community wants at all.
Any time someone goes in front of the government and insists on less encryption and more government control of tech companies, we should be worried and proceed with extreme caution. No matter how much you dislike Facebook, this situation is no exception.
I'd be curious to know what both sides think the ideal social network should look like.
I don't think these are conflicting views:
(1) Less monitoring on private messages
(2) More monitoring on public posts
I know of bullying on FB where the harasser sends the same message to dozens of friends of the harassed. FB makes this easy to do, since a list of someone's contacts is often easy to find online, and there is no way to find or report these messages (as there is with a public post).
To me this presents a particularly tricky double-edged sword. E2E encryption is good in many cases, but combined with an easy way to send many messages and easily accessible lists of people to target, it can produce a similar but more hidden version of public posting.
My guess is that this is already being used to disseminate the same kind of content that is being restricted in public posts.
As far as I can tell, restrictions limiting the number and speed of private messages have not been effective against this kind of approach, and new accounts can always be created. In some cases these messages go to a different "inbox" for non-contacts, but not always, and that only delays receipt, since, again, the messages cannot be found or reported.
I don't know a good solution to this problem, but it's not one I've seen talked about.
Maybe a middle ground is that every E2E message is hashed before sending, and if duplicate hashes are detected at scale, you slow the propagation to one new recipient per day.
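To make the idea concrete, here's a minimal sketch of such a throttle. It assumes clients report a hash of the plaintext themselves (the server can't compute one from E2E ciphertext, since the same plaintext encrypts differently per recipient), and the threshold and delay values are made up:

```python
import time
from collections import defaultdict

# Assumed knobs for illustration, not anything Facebook actually uses:
FANOUT_THRESHOLD = 100   # duplicate count at which a hash is "at scale"
THROTTLE_DELAY = 86400   # once throttled, one new recipient per day

fanout = defaultdict(int)          # plaintext hash -> recipients so far
next_allowed = defaultdict(float)  # plaintext hash -> earliest next send

def try_send(plaintext_hash: str) -> bool:
    """Return True if this copy may be delivered now, False if queued."""
    now = time.time()
    if fanout[plaintext_hash] >= FANOUT_THRESHOLD:
        if now < next_allowed[plaintext_hash]:
            return False                       # throttled for today
        next_allowed[plaintext_hash] = now + THROTTLE_DELAY
    fanout[plaintext_hash] += 1
    return True
```

Note the privacy trade-off this implies: the server learns which users sent identical content, even if it never learns what the content was.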
I think the main problem is users relying on a single platform for all their communication instead of choosing one on a case-by-case basis.
Yes it is.
A “public post” is when the message was directed towards anyone who cares to listen.
A “private post” is when the message was directed to a specific group of recipients. The length of the list of recipients doesn’t grant a non-recipient access rights to the message.
This is easy stuff.
When you post something to "your" feed, you're sending it to Facebook with metadata that says "please post this on my wall" or whatever.
To strain an analog analogy, this is not like the telephone or even the post office, where you hand them something or send out voice packets and they just look at the recipient and forward it on to the actual person. Everything you send is sent to Facebook and kept at Facebook.
Replace "Facebook" above with any social media platform or cloud service. They're fundamentally the same. I don't consider iCloud photos private. You're sending your content to Apple, not putting it in some safe that you alone control.
An interesting, weird angle: if X deletes their WhatsApp account, messages to X fail silently. FB stores the message under the false pretense that it will be able to forward it to X. I think iMessage does the same?
More important, I think, is the kind of "control" that's exerted: it should of course be about developing factually accurate sources and promoting those instead (maybe a warning: "This post contains keywords, detected anonymously, that suggest misinformation. Here is an alternative position.").
The fundamental problem is that we are building our entire lives around a few systems that are completely opaque. Facebook, Google, and many other algorithms are still closed source, and to their own detriment they cannot reveal details of their inner workings. That's why we need a way to move to a radically open society that still allows for innovation and advanced technology.
I also think that, legally (though this may vary from jurisdiction to jurisdiction), a message to one thousand people whom you do not know personally, and with whom you are not in an organisation (or where that organisation has an open-access policy, even if there's an entrance fee), would be a public message.
Of course, IANAL and just learned this from reading the news.
More interesting, however, is what Facebook or other media should consider public. In my opinion, a WhatsApp group with 1k members should not be considered public, even if there's absolutely no bar to entry. Private companies should have no business policing private communities. If they've got concerns, they should invite law enforcement to decide whether any laws are being broken.
Communication becomes public the moment it goes outside the boundaries of the group. If my antivax aunt posts an edutainment video about how vaccinations cause autism, and it is clearly misinformation, and that video is not just in her group of weirdos but actually shared on her public timeline, then in my opinion Facebook should definitely come down with the ban hammer. There should be a little "report" flag that I'll use to snitch on my aunt. Even if it's on her "friends & family" timeline, I wouldn't consider it private for the purpose of culling misinformation. She would have to specifically select a group of people who have self-selected into that group for it to count as a private message.
Also, if a private group on a platform like Facebook/Whatsapp has members that are underage, and not all members are in a complete graph of direct friends/family, Facebook should require that group to have active moderation that conforms to some sort of platform wide code of conduct.
The same people who argue that FB needs to censor more public content are wringing their hands in public over just how awful it is that, in places like India, WhatsApp message forwards can spread memes that respectable people like Western journalists and tech activists don't like.
It's a "who, whom" thing. There's no principled stance differentiating private from public with respect to control and censorship. It's become respectable in western political discourse to demand totalitarian control over the spread of ideas. Today's activists will do or say whatever it takes in the moment to bring the boots closer to human faces forever. Anyone who values human dignity needs to oppose this movement.
There is such a thing as legitimate authority.
For example - there are people who do actual science, and other people who actually can read scientific papers and make assessments, there are people who have legitimate basic understanding of science, relationships with scientists, and consistently communicate reasonable information about that, as it relates to our world.
And there are people who make stuff up.
And very influential actors who will use a system without information integrity to their advantage.
These people wield enormous power and fundamentally shape outcomes for everyone.
Both the Truth and the Public Good matter. While the latter is more ambiguous, it's also material.
The question then becomes: how do we allow yahoos in their basements to say anything they want publicly (e.g. aliens invented COVID, the vaccine will kill you), how do we allow legitimate detractors to question classical authority (e.g. ivermectin might work), how do we try to unbias information when it's politically contentious (e.g. mask policies), and how do we allow politicians to speak their minds, without letting them destroy their communities with irrational or completely irresponsible information (e.g. 'the vaccine isn't safe, you don't need masks, just eat healthy')?
I'm afraid we don't have the answers, but maybe some degree of 'proportional oversight' on the most public bits of information might be reasonable i.e. statements about the vaccine, when they reach a certain threshold of popularity, must have references to actual studies, or something along those lines.
How do "we" this? How do "we" that? Who determines what constitutes the group of "we"? The power to censor inevitably, invariably, and irreversibly gets used simply to deceive and propagandize people in favor of what the censor wants. Safety is no excuse either: every tyranny begins with the excuse that unusual powers are necessitated by some emergency.
There is no such thing as "irresponsible information". I reject the entire concept. There are only competing truth claims, and some nebulous "we" made up of journalists and tech activists has no more legitimate basis for policing speech than anyone else. There is no "legitimate authority" over people's minds.
You're right that there is an issue of principle underlying the double standard: the principle is that some people think they ought to control what other people think, ostensibly for their own good. I wish they would at least be open about this principle.
Traditionally through Classical Liberal institutions like 'Executive, Legislative, and Judiciary' with elected representatives for oversight, but more realistically also through the '4th Branch of Government' i.e. the Central Bank, the security apparatus (Military, FBI, NSA, CIA), a Free Media with integrity (believe it or not, they don't just publish whatever, there are norms and standards), the '5th Estate' i.e. people with voices outside the norm, the Academic community, Industry, NGOs, Faith Groups, Cultural Institutions. Other nations and international bodies.
You're somewhat conflating the legitimate motivation for regulation with the means by which bad actors take power (i.e. 'we can't invoke security as a justification because then Stalin will come along').
'Security' is 100% a material issue; it's not even an argument. There are bad actors trying to do bad things all day long, from petty violence to terrorism to invasion.
What that means is we have to take special care in those scenarios, usually by means of oversight and proportionality.
For example, the police can't just go into your home; they need a warrant signed by a judge. The laws the security apparatus uses have oversight by elected officials.
There are no rules for this FB issue, it's the Wild West, and because it touches on issues of censorship, security, politics and now Public Health ... it's a tough one.
The Facebook leaker is explicitly arguing against this though. She cites Facebook’s push for end-to-end encryption of private messages as a problem.
As for the whistleblower, I'm very skeptical of her: a tech PM arguing against encryption, and somehow linking E2E encryption to making the platform less safe, is dubious at best. Removing misinformation and calls to violence on the Facebook platform doesn't need to include monitoring private messages.
The idea that she's been a PM at large tech companies for 15 years and doesn't understand that Facebook monitoring messages will mean China can monitor those messages is almost too suspicious to believe.
Re misinformation: why would misinformation not simply happen in e2e group chats like it is already happening in e.g. Brazil or India? What’s the difference between posting to a group of friends on Facebook vs sharing a group message to those friends?
I do think messages should be encrypted but the trade off isn’t as straightforward as you make it sound.
Not the parent, but I think the idea is that if BigCo does business in CountryA, then CountryA's government invariably forces BigCo to spy on its users who are residents.
Obviously compromising the user's device is a workaround open to governments but hard to achieve in bulk.
> Strangely, the example she gave suggested that Facebook needs to have looser encryption in order to somehow protect Uyghurs in China from government attempts to implant spyware onto their phones. 
And Facebook responded, sticking up for e2e encryption:
> A Facebook spokesperson responded to The Telegraph with what we all should realize at this point is the responsible approach to encryption: "The reason we believe in end-to-end encryption is precisely so that we can keep people safe, including from foreign interference and surveillance as well as hackers and criminals." There is no such thing as encryption back doors that only the "right" people can access. If they exist, they can eventually be found or accessed by others. 
One doesn’t have to agree 100% with an ally.
Because they're also lobbying for other things you care about. And those things are more likely to be passed into law than the E2E encryption pieces.
Taking a puritanical view on an issue is a high-risk high-reward gambit. Nine out of ten times, it ejects you from the room. One out of ten times, you will organize sufficiently to make it a wedge issue (e.g. the NRA on guns, NIMBYs, et cetera).
To the extent Haugen disagrees, she's not on "our side."
It's not reasonable to outlaw the tools needed to exercise a fundamental right. Not OK for the 1st, not OK for the 2nd, and not OK for the 4th. Encryption was already an old technology at the time of the Constitution's authorship. If they had wanted to regulate or outlaw it, they were free to say so. They didn't.
...so if they had wanted to guarantee a right to it, they were free to say so.
But they didn't.
Despite clearly contemplating privacy.
Depending on how terrible the bad ideas are that they're pushing, they may not be an ally at all in fact. A multi-purpose trojan horse may be more accurate.
In this case, promoting the abolition of end-to-end encryption is quite heinous. She's providing the authoritarians a potent argument that isn't yet well established in the public mind (we have to be able to see all of your data so we can keep you safe from the Chinese trying to see all of your data).
Personally as someone who doesn't use FB and never will, I couldn't care less if Facebook wants to track and monitor every one of their users, monetize their every movement, and ban any message they want. In a free market you'd have thousands of social networks to provide competition with all sorts of different policies and ToS. The real issue is that one company is in a skewed position of power due to a broken marketplace. Fix that problem and all the other problems are irrelevant.
Nobody seems able to identify what the unfair advantage is.
The truth is that this is the nature of social networks: the successful ones tend towards monopoly. Why? Because more people attract more people. Access begets access. The value of a network grows with the square of its size, and higher-value networks attract more users.
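The "square of its size" claim is just the Metcalfe's-law intuition, and a back-of-the-envelope sketch shows why a head start compounds into winner-take-all:

```python
# Metcalfe's-law toy model: a network's value scales with the number of
# possible pairwise connections, i.e. roughly n^2, so doubling the user
# base roughly quadruples the value.
def network_value(n: int) -> int:
    # count of possible pairwise connections among n users
    return n * (n - 1) // 2
```

Under this (admittedly crude) model, a network with 1000 users is about four times as attractive as one with 500, not twice, which is exactly the feedback loop described above.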
You can't break up a social network without starting to make rules about who can associate with whom, which is a fundamentally anti-freedom position.
The problem is not Facebook. In its absence another would take its place. The "problem" is human nature, and that we were not designed cognitively for the types of networks that technology now enables for us.
We should focus on education, friendship, and real-world experiences. Legal fights against social networks in general or Facebook in specific are futile.
I'd like social media to be run by everyone having a social media server linked to their home network (imagine something like a Raspberry Pi), with it being totally decentralised and every user controlling their own server.
Then if the government wants to shut it down, they have to raid everyone's home.
Also, what you want already exists - you're free to go use Mastodon and run your own node. You can't possibly think that's a reasonable product your grandparents would be able to use.
BitTorrent certainly isn't illegal in the UK; it might be in other jurisdictions.
> you're free to go use Mastodon and run your own node
Would it work behind a NAT'ed router on a dynamic IP? I suspect it might not.
> You can't possibly think that's a reasonable product your grandparents would be able to use.
What I envisage is an SD card containing the OS + apps, you put it in the Pi, plug it into your router by ethernet, configure it via its web app, and you're ready to go. I think it ought to be possible to make it easy enough for the average person to install (certainly anyone who could install a router + internet would be able to).
You know, I want to believe that, but I don't think it's true.
Because of the "network effect", the value of belonging to a social network is mostly dependent on how many people are on it. This dynamic very strongly favors a few big social networks.
It doesn't have to be this way — if we had a common protocol for 'social' (or used & improved the ones that we already have), you could have interop of posts/comments/media between platforms and then the platform's 'secret sauce' be curation of messages, discoverability of people you might want to connect with, etc.
Facebook could focus on connecting with friends and being free but ad-supported. Instagram could be only media posts, supported by sponsored influencers in your timeline. Twitter could be focused on current events rather than your friends and be subscription-based, etc.
Similarly with IM: we had lots of competing IRC clients when that was a thing. As products, you could easily create competing messengers that could speak to each other. You could even have additional things on top, e.g. a FB<->FB message could support 'stickers' or whatever that are outside the spec, but, as with browser development, those features _do_ get added to a shared spec that isn't controlled by one company.
As pointed out on Last Week Tonight, private messaging apps are a cesspool of misinformation, especially in the developing world.
But Facebook can and does monitor private messages whenever any user flags/reports them. The problem is how effective that mechanism is, given that not many people know it even exists. Of course, E2EE mustn't be compromised, but it should also not be used as an excuse to let misinformation run amok. Maybe homomorphic encryption gets us there, maybe UI changes do. I hope Facebook acts swiftly and decisively whatever the case, since E2EE (backdoored or not) seems like the scapegoat here.
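Report-on-flag can coexist with E2EE via "message franking", the general technique Facebook has described for Messenger abuse reporting. The sketch below is a simplified illustration of the idea with assumed names and structure, not the real protocol:

```python
import hashlib, hmac, os

def sender_prepare(plaintext: bytes):
    # Sender commits to the plaintext with a fresh random key; the
    # commitment travels alongside the ciphertext, the key inside it.
    commit_key = os.urandom(32)
    commitment = hmac.new(commit_key, plaintext, hashlib.sha256).digest()
    return commit_key, commitment

def server_stamp(commitment: bytes, server_key: bytes) -> bytes:
    # The server never sees plaintext; it authenticates only the commitment.
    return hmac.new(server_key, commitment, hashlib.sha256).digest()

def verify_report(plaintext, commit_key, commitment, stamp, server_key):
    # On a report, the recipient reveals plaintext + commit_key, and the
    # server checks both the commitment and its own earlier stamp, so a
    # reported message can't be forged after the fact.
    ok_commit = hmac.compare_digest(
        hmac.new(commit_key, plaintext, hashlib.sha256).digest(), commitment)
    ok_stamp = hmac.compare_digest(
        hmac.new(server_key, commitment, hashlib.sha256).digest(), stamp)
    return ok_commit and ok_stamp
```

The point is that the platform can verify a report is genuine without ever being able to read unreported messages.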
I'm not suggesting you should or can in any practical way (how heavily - or not at all - that public posts should be monitored by the government is a different debate from what I was saying).
I'm saying that the parent comment's claim that the views are not necessarily conflicting is incorrect.
This must always conflict in the end:
> I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts
More aggressive public monitoring (along with the follow-on laws to regulate & punish a lot more things said in public) will inevitably result in a drive toward more encrypted private social networks that can't be easily monitored. Those private social networks will rely heavily on encryption. The aggressive public monitors will have to abolish encryption to then regain the high degree of mass content monitoring they used to have. Call it networks going underground, or dark; the authorities will come up with a negative naming scheme for it as they seek to castigate the shift.
You can bet on the rise of mass popular encrypted private social networks (likely built around subjects/topics/ideology/x thing in common; more like groups or subreddits than mass social media today, in other words). It's coming this decade. And the response from the government toward that is quite predictable. They'll use it as another argument against encryption.
You can make an end-to-end message between two users perfectly secure, to the limits of engineering and the security hygiene of the two users. No problem there. If you have an encrypted message sent out to a group of users, then as the group gets larger and larger, it's more likely that one of those users is an informant for law enforcement, or will be sloppy with their message hygiene, so that after they get arrested invading the Capitol on January 6th (for example), law enforcement gets access to all of the information on their phone with the PIN code 1234. Still no problem as far as I'm concerned. Criminals tend to be stupid, and that's good from a societal point of view.
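That "larger group, more likely a leak" intuition is easy to quantify with a toy model, assuming each member independently leaks with some small probability (the 1% figure here is an arbitrary assumption):

```python
# If each group member independently has a small probability p of
# compromise (informant, seized phone, weak PIN), the chance that a
# group of n members stays fully sealed is (1 - p) ** n, which decays
# exponentially with group size.
def p_no_leak(n: int, p: float = 0.01) -> float:
    return (1 - p) ** n
```

At p = 1%, a two-person chat stays sealed about 98% of the time, while a 500-person "private" group stays sealed well under 1% of the time.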
Public posts are a different story altogether, because social networks have an economic incentive to promote "engagement". And if that engagement happens to emphasize messages that incite hate or anger, or cause people to think that vaccines contain 5G modems, or whatever, hey, that's just good for shareholders, and in the capitalist system, shareholder value (especially VC value, for startups) is the highest good, right? Well, I have some real concerns about that. I think corporations that prioritize engagement über alles, even when it causes social harm, should potentially be regulated.
And that's why it is quite possible for someone (like me) to believe that end-to-end encryption should be allowed, and promoted, even if it gives the FBI hives, and at the same time argue that public posts should be highly monitored, from the perspective of trying to gain insight into whether the amplification algorithms might be doing something unhealthy for our society's information ecosystem.
I see social networks (and in many ways the internet as a whole) like a new country we’ve founded. It’s a bit different from countries made out of atoms. For a start, everyone there is a dual citizen with somewhere in meatspace. And instead of community centres we build websites. But it’s a place.
How are those spaces governed? Is it a democracy? No. Each social network is its own mostly benevolent corporate dictatorship. If you don’t like the rules, your only option is banishment.
Healthy communities (in healthy society) need rules to keep bad actors in check. And freedom to explore and be ourselves. Healthy communities in real life use participatory processes to figure out what those rules should be. You need people to feel like they have a voice. It’ll never be perfect. And different rules will make sense for different groups.
Facebook’s problem is they’re trying to be the government, the judiciary and police for billions of people from every country on the planet. There is no single set of rules and policies which will work everywhere. And even if there was, how do you police billions of people? AIs make mistakes.
I don’t know how, but I think FB needs to eat humble pie and find ways for communities to decide on (and enforce) their own social norms somehow. It’d be messy and disjointed, but so are people. Reddit and discord do this - although they’re obviously very different products.
Tyrannies don’t have a strong history of making choices which primarily benefit their citizens. So far, facebook’s track record hasn’t been much better. To improve, they need to acknowledge the position they’re in, learn from history and give some power back to the people who populate their site.
I'd rather they didn't exist, honestly.
Inasmuch as a "social network" is just people signing up to talk about certain topics and that's it, I don't have a problem with it. Internet forums of the 2000s weren't a problem necessarily. And while HN does have some "virality" mechanisms built in due to upvoting and while it is sometimes a "problem" on very divisive topics, it's not nearly on the scale of Facebook, Twitter and Co.
So if it were up to me, Twitter, Facebook, Tiktok etc. should either disappear or they should at least have to revise their algorithms and open them up to public scrutiny. Or, you know, if they went back to their original purpose of just being about connecting with friends and family. But I guess you can't make money out of that.
But we can at least try to get rid of the things that make the problem worse.
Moreover, minimising hyper-addictive patterns on such platforms would have a host of other benefits too.
Links about the intersection of these topics, such as apple/CSAM or facebook/moderation, are most likely to have comments that devolve into polemic without much productive discussion taking place.
That seems perfectly consistent. FB is already going all-in on surveillance and ignoring any notion of freedom; if they must destroy privacy, the least we can ask is that they actually do something useful with it.
Instagram and FB are mostly "public"-facing, so they offer a big surface for malicious activity (scammers, groomers/people seeking CSAM, which are always used as a reason for more surveillance, trolls, etc.).
WhatsApp is more private and requires knowing someone's phone number, which ideally should be harder to get hold of.
Messaging on Instagram/FB could be compared to whispering in a crowded place: private, but not fully private.
In an ideal world this would not be necessary, but there will always be a fight between surveillance and freedom. And perhaps giving up freedoms in some parts could allow us regain more freedom in others, as long as people are aware about it, which might be the biggest hurdle to tackle.
What if there were a way to send private messages, in a decentralized manner, where it's not even possible to tell whether the recipient has actually read the message?
There are blockchains using ZK proofs that allow this today. Not only is it decentralized, so "fuck corporate surveillance/profiling", but it's also highly unlikely that Uncle Sam and its offspring can break the elliptic curves and ZK cryptography these blockchains use, so "fuck state surveillance" too.
But then it's using the word "blockchain", so HN is pretty much against it.
Instead, HN as a whole roots for "secure" messengers that leak metadata like there's no tomorrow, while explaining that they're the best thing since sliced bread "because they're easy to use".
We don't exactly need a centralized, humongous social website. I'm on both sides, honestly: I want Facebook to subside in favor of isolated, more freedom-centric micro-networks where we can say what we want without massively influencing crowd thinking.
That way, if monitoring must happen, it happens in isolation, and if freedom must exist, it's not in the same place as the other fucktards.
I don't care what they look like as long as there's hundreds of them all on relatively equal footing.
I firmly believe that most of the major problems facing society today are not caused by any features of particular companies, but by the consolidation of power in a very small number of them.
If anything, research seems to indicate that tolerance to opposing viewpoints and revising of stereotypes comes from extended personal contact and having a shared sense of purpose, and that's difficult to achieve on the internet.
Schizophrenia would have been appropriate if he was accusing HN of being paranoid, hearing voices, etc.
Schizophrenia is not multiple personality disorder (actually called dissociative identity disorder, I just learned). I am really curious to know why people started misusing the word schizophrenia in common parlance like this.
I feel like we as a tech community should take a step back and consider whether we should really be the only ones deciding what should or should not be done. It is a massive country with many opposing viewpoints.
We have massive conflicts of interest and some tech companies have shown over and over that greed trumps morals in many cases. We are humans and we are not a group of enlightened arbiters that know what is best for the world.
We have done a shit job so far of managing ourselves. Sure in lots of cases we have made the world a better place but I think we need to be honest with ourselves and acknowledge that greed has crept in and supplanted that "making the world a better place" mission.
On the topic of encryption, we're the only community with an understanding of what "breaking encryption" means. There are no skeleton keys, only vulnerabilities. We have a moral and ethical duty to fight for encryption, privacy, and security, not against it.
This is something really unique to digital technologies. In meat-space, everything has a vulnerability: force. With sufficient reason and circumstance, whether or not you give up your keys, the government is still able to claim dominion over anything you're trying to hide. All they have to do is break a few walls.
Digital technologies don't have this problem (feature if you're the government). If you have your key stored in your memory, and there is no back door in the software, no matter what court orders anyone has, if you don't tell them the key then the information is lost to them forever.
This kind of breaks some fundamental assumptions laid into law prior to tech. When you use encryption, tech really becomes an extension of your mind, and your 5th Amendment rights effectively shut the government out of it completely. While I like this idea, it definitely poses problems for the enforcement of basically every digital and financial law on the books, which can pose problems for all of us.
This is where the analogy breaks down - introducing backdoors and passkeys into digital security doesn't create the same trail in the physical world.
The better analogy would be - imagine that the government insisted that all locks be approved by the government, on the understanding that those locks would have master keys that the government owned. They would be able to come to a bank during the night, unlock the front door, the vault, and your safe deposit box; or come into your home when you were at work, unlock your filing cabinet and look at your records.
You would have to trust that government employees who had access to the master keys would only use them for legitimate purposes, and when authorized to do so. You would have to trust that the government could secure those keys at all times so they never fell into the wrong hands, and that no unauthorized duplicates would be made.
You would have to do all those things bearing in mind that the government regularly loses such master keys, has employees that exceed their authority, misuse government resources, demonstrate poor custody practices, and so on.
It doesn't seem like a good bet to me.
EDIT: oh yes, and I forgot to mention there's a whole bunch of other bad actors out there who know how to buy a government-approved lock, take it apart, and are highly motivated to make their own master keys.
Encryption was not the only point brought up by the OP, they were also against government regulation. My main point is that we are tech workers that have conflicts of interest and that we should look hard at our viewpoints and ask ourselves if they are better for us or better for everyone. We are humans and like other industries, greed has crept in and we don't always make great decisions when left on our own.
For my own part, I find it helpful to remember that questions over encryption have been ongoing for several decades. Historically, the arguments boiled down to shouting about CSAM, terrorism, drugs, etc. Mostly it tended to really be about law enforcement agencies wanting unfettered powers of surveillance.
What's perhaps most interesting to me is that the prevalence of encryption seems to have done very little to stop the FBI from catching people. I know this next point will be contentious, but perhaps there's room to question why these people who are indeed not idiots are so against encryption for you and me.
So yes, you're absolutely right. Let's look hard at our viewpoints and biases and expertise and paychecks and ask ourselves - why does the FBI want us to not have encryption? Why do we want it? Who has the valid view points here?
> What's perhaps most interesting to me is that the prevalence of encryption seems to have done very little to stop the FBI from catching people.
I don't think we can know this. Or maybe you have some data? But more importantly there's a big gap between "catching" someone and "convicting" someone. Having the content of specific communications can make a big difference in actually proving guilt.
I completely agree. My point was not that their motives are inherently evil per se, just that they're as self-centered as anyone and carefully gazing deep into our navels does not imbue their perspectives with extra validity or compelling strength.
> I don't think we can know this.
We do know that the FBI routinely arrests and prosecutes people, even ones that use encryption. There's no end of public sources showing it occurring again and again. The fate of Dread Pirate Roberts is a good example.
> Or maybe you have some data?
What data would you like? The FBI has statistics available going back to the 1930s: https://www.fbi.gov/services/cjis/ucr
> Having the content of specific communications can make a big difference in actually proving guilt.
You're absolutely right, it definitely can! Which is why the FBI has learned how to build cases without such things. They've also learned how to gain access to encrypted communications. Between the two, some might opine that it's enough to raise questions about whether they really need to prevent you and me from having access to cryptography. They're clearly experts at law enforcement and perhaps have no need to confiscate the tools of mathematics from technologists.
Which brings me back to the point. Let's look hard at our viewpoints and biases and expertise and paychecks and ask ourselves: why does the FBI want us to not have encryption? Why do we want it? Who has the valid viewpoints here? What have we gained from this navel-gazing?
I don't want to live in a police state, and I'm not aware of anyone else who does. I also don't want to live in complete anarchy, or even a libertarian state, because I think there are bad people out there who would take advantage of that and potentially hurt other people.
It always seems to be hand-waved away on this forum, but perfect computer security could indeed help criminals. I don't know how many more crimes it would enable, and I genuinely believe no one does.
I make no assertion that one side is right and the other is wrong, just that it is a hard problem and that I don't know the answer.
Where and how do we find the balance? I know the law enforcement community pushes against computer security to attain better security (in the law enforcement sense) and I know that tech people generally push for computer security that limits law enforcement abilities. I don't think we can let one side of the argument completely win, but how do we find that balance?
If you seek numbers, the FBI invites you to inspect their data: https://www.fbi.gov/services/cjis/ucr
As you correctly and wisely say, the world is full of hard problems with many strong, clear, compelling, and valid viewpoints to balance. It may just possibly be worth considering that this might not be one of those scenarios: that a balance here may not be desirable, let alone achievable. Do you think there might be cases, in the fullness of the human experience, where one set of extremists on an issue is right, their opposite numbers are wrong, and any balance between them is also wrong? What if what seems to be a balance is based on a false premise?
Perhaps we should stop, take a step back, and examine our biases?
> Do you think there might be cases, in the fullness of the human experience, where one set of extremists on an issue are right, their opposite numbers are wrong, and any balance between them also wrong?
Yes, I think there probably are cases where something like this happens but I would assume those cases would be a little less debatable. But yes, what you say makes a lot of sense and is probably a large part of the reason the current political climate is so tribal.
> Perhaps we should stop, take a step back, and examine our biases?
If you are talking about the current case though, what is the false premise or what are the biases?
There are indeed monsters out there who hurt women and children and use technology to accomplish those crimes. They are not made-up bogeymen; they flock to technology that provides them cover. And punishment for crimes deters criminals from attempting them: if criminals did not have to fear punishment, more crimes would happen. I certainly would speed more if I didn't have to worry about speeding tickets.
I'm glad we agree!
Here's where the hot take comes in: we're there now. Encryption is something we have access to. Law enforcement manages to work around it on a regular basis and has now for several decades.
> Yes, I think there probably are cases where something like this happens but I would assume those cases would be a little less debatable.
One would certainly hope so, but it's perhaps possible that this might not always be true. There are often people ready and willing to debate what should be undebatable. It shocks the conscience.
> If you are talking about the current case though, what is the false premise or what are the biases?
Some people have come to this conversation with the false premise that taking away encryption will substantially help law enforcement, improving safety and security for our vulnerable friends, neighbors, and community members. Their pain, suffering, and exploitation are very real, yet that does not necessarily make effective what is done in the name of keeping them safe.
Some have come to this debate with the bias of assuming there is a useful policy medium to be found. Perhaps there is not, as we might be dealing with technical matters that are all-or-nothing in unavoidable ways.
Some may find these to be perhaps worthy of careful examination, as such things can perhaps lead to dangerously misguided policy - such as the Clipper chip or mass surveillance.
Thank you for being thoughtful and centering kindness, mercy, and compassion.
I feel strange having to point out that governments are made up of people too. If it's not people making the good decisions, to whom do we turn?
Is Admiral Poindexter a preferable decision-maker?
That is a trait lacking in almost everyone these days, not just tech workers.
Some software engineers might be experts in the domain of encryption, but there are other professions who are experts in the domain of National Security.
As for her expertise, she worked in FB's civic integrity unit as a product manager. I may disagree with her, but she does have a reasonable basis to claim better knowledge on the topic than the average person.
How is using end-to-end encryption of private communications a case of greed trumping morals? If anything, encrypting private messages and preventing others from reading them - be it Facebook or the various world governments - seems like the obvious moral move.
I think a lot of people, tech or otherwise, are projecting their own ideals on to this Facebook leaker without looking closely at the details of what she’s been lobbying for. Most people seem to want less surveillance and interference with private communications by Facebook but she’s arguing for much, much more.
But that’s the problem: The issue has now been so dramatized in the media that the average viewer doesn’t really know what’s being proposed, they just see “Facebook bad, whistleblower good” and assume it’s what they want.
It’s up to people who know the subject matter, including the tech community who understand things like end-to-end encryption and government regulation thereof, to speak up.
I am not trying to throw my hands up in defeat but to point to an intractable problem that is not easily fixable, however simply each side of the argument tries to frame it and its claim to be the rightful arbiter of power and control. I understand the reluctance of any individual or company to hand over a large portion of control to a government or other community without a track record; it is a fraught situation.
There is no governance beyond Zuckerberg currently, and he has proven that he is aligned with Zuckerberg, not society. Continuing with the status quo and doing nothing does not seem like a good idea to me.
You can probably argue that he's wrong on this, but it's hard to say that he just cares about himself.
The dude is a classics nerd, and supposedly a bunch of the original FB mottos were in Latin (Fortune Favours the Bold, at least). That isn't surprising to me, at least.
Give users a lot more control. Over their feeds. Over what they receive or not. How they receive it.
Alternatively, abolish the feeds entirely. Make people seek out updates from other users manually. It's time-consuming, and it drastically slows down the reaction-agitation cycle. People then focus on consuming only what's most important to them, as their time is limited. Facebook moved away from that early approach intentionally, to spur time-on-platform, consumption, and engagement.
In the offline space you have a small number of actual friends, a small number of people you can actually keep up with, because of time constraints. That's a good thing, not a bad thing. It keeps people focused on what's most important to them. Facebook seeks to generate the opposite outcome: it wants max engagement and consumption, so now you've got two thousand fake friends, and within those two thousand fake friends two hundred more-relevant people you receive updates from on a daily or weekly basis. That's insane. There's no other good way to put it: it's insanity. It's bullshit. It's fake, it's inhuman.

How much of that mass consumption of content, which the feed/wall was built to accomplish, actually matters to the end user? I think the answer is very little. Put it back in the user's hands to seek out profile page updates, to seek out updates from their friends. They'll do it if it really matters to them, and they'll massively neglect the users who don't matter to them. This is also where Facebook's finances implode; most FB employees are self-interested in that outcome not happening.
The central issue is: the core mission of Facebook (today) is a fraud. Everyone should not be connected. Everyone does not want to be connected to everyone else. It is not a great ideal to pursue connect-everyone, it's fundamentally anti-human. It will not make the world a better place. That ideology needs to be challenged, and it rests at the center of Facebook.
Facebook has built itself to agitate for attention. They designed the feed/wall to prompt artificially high levels of engagement. Give that control back to the user in spades. People will adjust their systems, they do know what's best for them when it comes to this matter (and a lot more so than a politician thousands of miles away that has never met this person, or Facebook corporate).
Society sculpting by some ruling authority, some group that happens to be dominant at a given time, is a truly horrific approach. It will accelerate the splintering into tribes, and accelerate oppression by the government.
Encryption is not a pro-tech position. It is a pro-privacy position and you don't need to be a tech expert to see the value in that.
As an engineer, I'm in a better position to understand what the f is going on regarding encryption and, say, nuclear power than most people.
No amount of wishful thinking will change that.
> It is a massive country with many opposing viewpoints.
World. Not country.
Yes, you are an engineer and understand the implementation details and potential side effects, but what about the opposing viewpoints like those from law enforcement? Are you an expert in law enforcement? Your viewpoints should trump theirs why exactly? You really don't think that it's possible that you can't see the forest for the trees?
Do nuclear scientists define nuclear energy policies or do they provide advice to those that define them?
> World. Not country.
I don't think we have a world government that will make regulations in regards to facebook.
You're right, I may be ignorant of the issues that support the need for weakening encryption, but the fact is that I can evaluate the other side of the equation. And if the side pushing against encryption is making terrible arguments, I can see how terrible they objectively are.
> Do nuclear scientists define nuclear energy policies or do they provide advice to those that define them?
Similar thing: the arguments against nuclear power are idiotic and typically plain wrong and based on irrational appeals to fear.
Just like the previous issue, if a decision has to be made, it has to balance the pros and cons. Even if I can properly evaluate only one side of the equation and find it lacking, I can certainly have serious doubts about the legitimacy of the choice presented. And that's one side more than most people can grasp.
When it comes to encryption, tech activists say tech should take a back seat to policy-makers and let them decide on the rules. When it comes to internet censorship, tech activists say "but my private companies!" and argue that techies should decide the limits of acceptable discourse for the whole world.
What do these stances have in common? That activists are arguing that tech should do what power wants.
There are no principles at work here. There's only a "who, whom" power dynamic. Activists will say whatever is effective in the moment for achieving their immediate aims, and right now, that's being good little sycophants for people with broad, unclear, and definitely undemocratic influence over public affairs.
I think "we" have been largely absent from the conversation as heard by the rest of society. You can't say "trust the software engineers" like you can say "trust the doctors," not because we don't have ethics or expertise, but because what most people think of as the voices of our profession are the PR mouthpieces for companies like Facebook and Google.
Speaking person to person and in online forums such as this, you'll find most engineers concerned about privacy and the societal and emotional harm of social media, and reflexively distrustful of corporations that speak about issues they have a financial stake in. But that's not what non-engineers think we think. The assumption seems to be, well, you're a techie, so of course you uncritically embrace all that stuff. People who know me better and know I don't think that way seem to regard me as less of a techie for that reason, which goes to show how little they are aware of sentiment in our profession. I think that's what we need to fix.
A profession like medicine suffers a little bit from the same bias, where people tend to assume that doctors are personally pro-surgery, pro-drugs, etc., but doctors have a professional structure that creates recognized authorities, which means journalists have somebody to talk to when they want to get the overall take of "doctors" on an issue where their expertise applies. When writing about a public health issue, journalists have no problem getting an independent perspective from doctors with credibility, relevant expertise, and no direct financial stake in the issue.
Software is in a position analogous to if the only doctors visible in the media were PR flacks for hospitals and pharmaceutical companies.
A big cultural difference between medicine and software is that doctors are traditionally trained to bear the mantle of authority. They are trained that commanding the trust and respect of patients is vital to providing care to them, and that it is a skill separate from and equally necessary to the technical skills that make a doctor worthy of that respect. Tech people are socialized to beware of the dangers of authority and distrust those who seek it. I may be speaking as an old-timer here, but when I was young, I learned to lionize the scientists and engineers who rejected the accoutrements of authority, who wore cheap and frumpy clothes, who let their hair go crazy, who reveled in stories of their own stupid mistakes, who actively punctured the mystique of authority so that they would only have as much respect as their knowledge and achievements alone would earn them.
In other words, medicine comes from an old tradition, which long accepted that wielding authority and being worthy of it were separate skills. Keeping the two connected was a moral responsibility that fell on individual doctors and on the profession as a whole. The tradition of software is much younger and was much more deeply marked by the counterculture, which took an opposite approach to the problem of authority, declaring that we could and should unlearn our reflexive deference to the superficial aspect of authority and replace it with a critical, informed consumption of the expertise of other members of society. To the counterculture mindset, it was unacceptable that society should be at the mercy of the closed ranks of a profession privileged by its exclusive knowledge.
I think that the counterculture perspective was an important corrective, but it is incomplete, because the problem of authority hasn't gone away. Even if we despise authority, we still depend on it, so the question is: how should we as a profession create and elevate professional authority? How do we make it possible for a journalist to easily get a read on what software engineers think as professionals, distinct from the official line of large corporate employers of software engineers?
Maybe software needs a replacement for the old professional societies, except with an emphasis on policy and public education, instead of expensive journals and social events designed to help you find your way into the old boys' club. I have no idea what such a group would look like, though.
I don’t think people have any idea what end-to-end encryption means. They just hear the occasional slogan (“it keeps your messages private”, “it lets terrorists hide from cops”) and assume there’s some valid choice between totally crippling online security and totally crippling others' ability to stalk your communications. I don’t even think people understand the implications of what “communications” means. Why else are we still talking about this issue in abstract terms, when the harms are precisely enumerable and the supposed benefits can be completely debunked as fantasy?
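For what it's worth, the core idea is simple enough to sketch. "End-to-end" just means only the endpoints hold the key, so whatever relay sits in the middle only ever sees ciphertext. The toy below illustrates that shape and nothing more: the SHA-256/XOR construction is a deliberately naive stand-in, not real cryptography (actual systems such as the Signal protocol use authenticated key exchange and vetted ciphers).

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: counter-mode SHA-256 of the key. NOT secure --
    # purely an illustration of the shape of E2E messaging.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

# Alice and Bob share a key; the server only relays ciphertext.
shared_key = b"alice-and-bob-secret"
msg = b"meet at noon"
ciphertext = encrypt(shared_key, msg)          # all the relay ever sees
assert ciphertext != msg                       # not readable in transit
assert decrypt(shared_key, ciphertext) == msg  # only endpoints recover it
```

The whole policy fight is over that middle line: whatever sits between the endpoints, whether Facebook or a government, cannot read the content by construction, and any "exceptional access" mechanism changes that construction for everyone.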
All this stuff is so depressing. Free speech online is no longer in vogue, and the ACLU, EFF, and the other usual heroes of these sorts of battles seem asleep at the wheel. It’s sad to think I might see the death of something so beautiful.
I agree in general with your comment, but what I don't like is anyone (Haugen, you or anyone else) speaking for "the tech community". We all have different opinions, and you can't generalize with that.
For example, plenty of people I know including myself would be strongly against less encryption, but strongly for more government control of tech companies (not via less encryption but via other means).
It's really hard to generalize, and in this case we gain nothing by generalizing so let's do it less, not more.
But that’s not what Haugen is lobbying for. She’s explicitly saying that end-to-end encryption is bad because it doesn’t allow Facebook to monitor private communications enough.
I’d be surprised if you could find more than a tiny minority in the tech community who agree with this.
See, e.g., https://twitter.com/sleepdefic1t/status/1452724217393434636
We should be able to say that the leaks themselves are good, but also that they are making a transparent political play for control over Facebook (more than reining in their amorality and dishonesty).
Whistleblowers do not usually have the support of a top tier lobbying firm (Bryson Gillette) paid for by a rival tech billionaire (Pierre Omidyar). I say take the leaks, but say no thanks to the "advice" it comes with.
Can her motivations be questioned? Absolutely! But if you don’t like her stance, just say so instead of labelling her an “activist” or “advocate” as if there’s something inherently bad about those things.
Her motives are her own, and once her information is out in the open it isn't up to her to decide what we all do about it. I'm not about to dismiss what was revealed just because I disagree with some of her opinions about that material.
An evaluation of likely motives is incorporated into the evaluation of the evidence. There are a lot of things that can't be known: How much is true? How much made up? How much is true but not representative of the totality?
Motives aside, whether she is acting as a mole for intelligence agencies to gather support for enabling more 'round the clock surveillance of wrong-think under the guise of "blowing the whistle" is a defining factor.
However, I think I'd still be glad that the information she leaked is now public, just like I'd be glad if a plot by intelligence agencies to systematically censor social media was made public, just like I've been glad when prior abuses by the government have been made public. Get it all out in the open.
It just seems to be a vague message that although social media sometimes have a positive impact, it also sometimes has a negative impact, and that Facebook has employed strategies to help it grow. Hardly shocking findings!
I'm not defending anything that Frances Haugen did or said, but I find it disturbing that anyone can take the author seriously. If he can't find support for his argument that doesn't come from himself, then he has no argument, and without links to other sources nothing he writes is verifiable beyond the actual documents he does link to.
Apparently three of her lawyers are connected to the US intel agencies. This is entirely US-gov friendly whistleblowing and they know which horses to back.
However, what we clearly see here, is how one company can almost dictate "what all of us get". If Facebook decides to have e2e, that is what we have. If it decides to have "less encryption" that is what we get.
The real issue is not what governments, tech communities or whistleblowers want. The real issue is that it matters very little, because in the end we get "what facebook wants" regardless. And that, according to those leaks, is entirely driven by profit.
> and some of her suggestions really do not align with what the tech community wants at all
For what it's worth, she says her views on E2E encryption have been misrepresented.
Also, where is this single "tech community" with cohesive views on all this? I've never seen it. I'm pretty skeptical of anyone claiming to speak for the tech community.
My link above has direct quotes from Haugen regarding E2E encryption.
I don’t understand why people are so eager to project something different on to what she’s directly saying.
Do you really think those tweeted screenshots sum it all up accurately, though? Considering that she says her views are being misrepresented, couldn't those two quotes be cherry-picked? You know the reputation of the Telegraph, right?
A guy tweeted that someone said (unattributed, but I assume the Telegraph) that she said.... It's just not solid. If you're going to accept that uncritically, I think you're essentially believing what you want to believe.
I guess we're focusing on hating facebook this year.
Try to discredit the leaker rather than engage with the information that was leaked.
We saw it with Wikileaks and now we will see as more and more tech employees start sharing what goes on behind the curtains.
No whistleblower has ever received such a red carpet as Frances Haugen. In itself, it's good that she's aggressively defended by the political establishment rather than facing reprisals and jail. But it should make you question what's going on here, quite apart from the contents of the leak itself.
One of the things Haugen proposes, repealing Section 230 protection, is even supported by Facebook itself and strongly opposed by antimonopolists (who argue that compliance will be far easier for Facebook than for any would-be competitor).
I'm not convinced the war against misinformation is any more winnable than the war on drugs in a free, self-determining society. The best we can hope for is to curb the worst consequences by slowing virality.
Speak for yourself. Facebook has become a threat to my country's democracy and stability and has enabled all sorts of horrific violence. If Facebook could be trusted to do the right thing I would feel differently but they've shown time and again they don't have the capacity or will to do so. "Move fast and break things" apparently applies to everything Facebook touches and so they should be regulated and controlled like a toddler.
That's because encryption is incredibly problematic. And I say that as a huge fan of digital privacy and an avid user of Signal.
I'm not so blind as to think that perfect encryption is an unalloyed good, and in that screenshot (which, I'll point out, excludes any broader context for her remarks) Haugen touches on just one of many very legitimate problems with the technology.
Now, in the end, I think (though I'm not certain) that the upsides outweigh the downsides. But don't pretend she doesn't raise a valid concern, even if you don't agree with her conclusion.
> A lot of people jumped to the defense of the Facebook leaker because the media so successfully framed it as a “whistleblower” situation, but that’s not really what’s happening here. She’s very much an activist
All whistleblowers are activists.
Do you really think Snowden didn't have an agenda? Of course he did! His act was specifically and explicitly political.
Hell, Wikileaks has proven itself to be nakedly political.
The only reason you don't see it that way, I'm betting, is that you happen to agree with their politics.
> some of her suggestions really do not align with what the tech community wants at all.
Don't deign to speak for me. There is no unified "tech community" on this topic, even if your echo chamber leads you to believe otherwise.