Building end-to-end security for Messenger (fb.com)
289 points by contact9879 7 months ago | hide | past | favorite | 385 comments




"Why we’re bringing E2EE to Messenger

..."

Okay, can someone give a good guess as to what the real reason to do this is? Not the feel-good BS - what's the business case? How is this gonna make them money?

It seems this just makes them lose access to a ton of data to mine for advertisement. I chat with a friend on IG about something and I immediately get ads for it. It's a bit creepy, but I feel it's working the way they'd want it to (never bring up watches, you will get watch ads for the next 6 months)

Are they bleeding a lot of users to Signal/Telegram b/c they lack encryption? (my impression is only nerds care about encryption)

Are they getting harassed by requests from law enforcement?

Are they in hot water b/c of child porn?

Do they need plausible deniability?

I don't really get why they're rolling this out. Like what's their angle? Seems like something users don't care too much about, and they lose a ton of valuable data.


Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

Also: https://techcrunch.com/2023/07/11/teen-and-mom-plead-guilty-...

They must be able to do good targeted advertising without message contents, with public likes and other data on scrolling behavior, especially as AI tools improve. Maybe having this data is more trouble than it's worth. Data is a liability as well as an asset.


> Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

I worked for FB briefly. It was enough to convince me that the "it's the right thing to do" is definitely not relevant to this question.


Yeah, I wasn't under the impression that respecting the rights of customers was in their top 10 priorities.


Customers are the ones paying for ads. Users are just a resource.


The few guys in charge of security engineering don't have to share the values of the whole company.


That's kind of what I thought before I saw inside.

What actually happens is the reverse. 99% of engineers can have amazing values that you share, but they do not ultimately make the decisions. The board, Zuck, and the $ do.


Nonsense -- they do as long as Mark, and his chosen exec team, control whether they work there or not. Anything else is a pretty lie people tell themselves because they like the paycheck.


There were two parts to what you responded to. In terms of brand risk into the future, the second part could reasonably stand, no?


They sell ads. E2E encryption doesn't hurt that and it also appeals to the trust. So why not?


It does hurt because you can’t deliver targeted ads based on message content.


I don't think they need the exact content to sell ads.

1. Sell ads based on the message itself, get crushed in the media. 2. Encrypt messages but sell ads based on profile and metadata, get good publicity. I think they are doing option 2.

Messaging is just part of the platform. My guess is that they want to forgo this part and concentrate on others


All you do is have the end devices build the ad profiles and send them back to FB every once in a while.
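The parent's suggestion is hypothetical (nothing in the article says Meta does this), but the mechanism is easy to sketch: the client scans decrypted messages locally, keeps only aggregate topic counts, and periodically uploads that summary. Topic names and keywords below are made up for illustration.

```python
# Hypothetical on-device profiling sketch (NOT a known Meta mechanism):
# messages are analyzed only locally; just the aggregate summary would
# ever leave the device.
from collections import Counter

TOPIC_KEYWORDS = {"watches": {"watch", "rolex"}, "travel": {"flight", "hotel"}}

def local_profile(messages):
    counts = Counter()
    for msg in messages:
        words = set(msg.lower().split())
        for topic, kws in TOPIC_KEYWORDS.items():
            if words & kws:
                counts[topic] += 1
    return dict(counts)  # only this summary is sent back, not the text

profile = local_profile(["Saw a nice watch today", "Booked a flight to Oslo"])
assert profile == {"watches": 1, "travel": 1}
```

Whether this still counts as "end-to-end encryption" is exactly what the rest of the thread argues about.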


Seconded


- It's hard to imagine a project getting signed off for just being "nice" - especially when it hurts their own business interests.

- I don't really see it making sense as a PR move to build trust. I think outside of the tech sector they are doing fine on that front. The vast majority of people use their Bytedance, Meta, Tencent, etc. apps and aren't considering their encryptedness.

- I don't think this announcement will get any substantial press coverage

- It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever


> It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever

Exactly this. With the recent laws passed they see how their altruistic "save the children" partnerships with law enforcement could be twisted for causes that aren't as popular everywhere.

Also, it costs money to serve all those warrants.


> Maybe they're doing it because it's the right thing to do, and because they'd like people to trust them.

If that is true, good. But it'll take a great many "right thing"s before I would trust anything Zuck owned.


“People just submitted it. I don’t know why. They ‘trust me.’ Dumb fucks.”


OP’s question rejects the premise of your justification. Facebook doesn’t have much of a track record of “doing the right thing” for its users.


It's the "right" thing to do only for a small population of tech workers. Most people do not care, and ad customers would be very upset if this degrades targeting.


I contest both opinions:

1. More and more people care about this, e.g. journalists, politicians, etc. Apple has been talking about this a lot (though partly as marketing for Messages), and their customers are already somewhat aware of private messaging.

2. It may not degrade ad targeting that much. I imagine doomscrolling does it way more: you engaged with this post, you ignored that one, and so on.


For 2, I will say anecdotally that I had never in my life bought something directly from an ad until the last 2 years on Instagram. It actually found things directly useful to me that I did not know about beforehand (I did still go through an hour or so of research, but was amazed at the algorithm's discovery capability).


I think end-to-end encryption should be the minimum requirement for any private/direct messaging in any chat application. For group chats and larger I don't think it's as necessary, since the chance that the conversations will be leaked is much higher; reasonable encryption for those is fine. I do think it's entirely possible to have a conversation that sounds incriminating out of context and in fact is not even remotely relevant. If my shitposting conversations from my teens were taken out of context and shown in a courtroom, I'd be facing several life sentences in an asylum.


I think you're right insofar as they are trying to reposition themselves as more trustworthy. I think they see the writing on the walls.

But at the end of the day, their ultimate end is to make more money. If they do the right thing it isn't out of some altruistic motive. It's because they think that doing so will make them more money.


Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb fucks.

https://www.esquire.com/uk/latest-news/a19490586/mark-zucker...


I maintain that that was an intelligent commentary on human nature, and that it has been misconstrued.

He was saying “I could be anyone” not “I can’t be trusted.”


What a refreshing perspective. I also trust them to act altruistically.


Some trust is indeed refreshing. But don't be naive.

With Zuckerberg's and FB's track record, their business decisions ought not to be trusted.


I guess the /s was needed.


Ha, good one


> Maybe [Facebook is] doing it because it's the right thing to do

This is the most autistic thing I have read on the internet this year, second to a personal friend sperging out thinking he was going to wife up the first girl he met at a party.


Encryption is a great argument against messenger-interop regulations like the EU is planning.

https://www.eff.org/de/deeplinks/2022/04/eu-digital-markets-...


Why would it? Diffie-Hellman key exchange is a thing.
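For reference, the key agreement the parent names can be sketched in a few lines. This uses textbook toy numbers (p = 23, g = 5) purely for readability; real messengers use X25519 via a vetted library, never hand-rolled math like this. The point: two clients can agree on a shared key without any server in between learning it, so interop does not inherently require giving up encryption.

```python
# Toy finite-field Diffie-Hellman with textbook parameters.
# Illustration only -- real protocols use X25519 and authenticated
# key exchange (e.g. Signal's X3DH), not tiny hand-picked primes.
import secrets

P, G = 23, 5  # classic textbook group; far too small for real use

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # private exponent
    pub = pow(G, priv, P)                 # public value, safe to publish
    return priv, pub

a_priv, a_pub = keypair()   # e.g. a WhatsApp client
b_priv, b_pub = keypair()   # e.g. a third-party interop client

# Only the public values cross the untrusted federation boundary,
# yet both sides derive the same shared secret.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

The reply below is right that the harder interop problem is protocol and format compatibility, not the key exchange itself.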


I'm not sure what Diffie-Hellman has to do with anything here, but yeah, there's no reason encryption would prevent interoperability as long as all clients are using the same protocol (which they would have to do anyway in order to be interoperable).


Encryption makes it practically impossible to transform messages between different protocols, since the ciphertext contains not only the text content of the message, but also formatting and some attributes (e.g. `reply_to`). Even if the format were the same, E2EE algorithms also differ between protocols, and you can't re-encrypt the message for other protocols server-side.
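A minimal sketch of that point: what gets encrypted is a structured payload (text plus attributes like `reply_to`), and the server only ever sees an opaque blob. The XOR "cipher" and the field names below are stand-ins for a real AEAD and a real wire format, not anyone's actual protocol.

```python
# Toy demonstration: the relaying server cannot translate an E2EE message
# into another protocol's format, because it never sees the structure.
# XOR one-time pad is a stand-in for a real AEAD cipher.
import json
import secrets

def toy_encrypt(key: bytes, payload: dict) -> bytes:
    plaintext = json.dumps(payload).encode()
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def toy_decrypt(key: bytes, blob: bytes) -> dict:
    return json.loads(bytes(b ^ k for b, k in zip(blob, key)))

key = secrets.token_bytes(256)  # shared only by the two endpoints
msg = {"text": "see you at 6", "format": "plain", "reply_to": "msg-41"}

blob = toy_encrypt(key, msg)
# The server holds `blob` but not `key`: it can route the message,
# but it cannot parse out `reply_to` or re-encode it as, say, XMPP.
assert toy_decrypt(key, blob) == msg
```

Translation between protocols would therefore have to happen on a client that holds the keys, which is the re-encryption point made above.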


I thought the opposite, at least as a first thought: roughly two or three years ago, Facebook announced their intent to integrate their messengers - so that you could send a message from your FB inbox to WhatsApp, from WhatsApp to Instagram. And since WhatsApp has E2E as a major part of their marketing, I'd think adding it to FB and IG, rather than removing it from WA, would be the way to go.

(though of course it's not REALLY private: WhatsApp harasses you to back up your messages all the freaking time, and when I say "never", as I ALWAYS do, it asks again in 2 weeks. I assume once they're backed up on Meta's servers, there goes the encryption. But that's a parlor trick and they STILL have that data, as I assume at least 80% back up anyway and the rest are mostly worn down by the constant prompting.)


That's because a single org controls all three messengers and can develop them to converge on the same message format and the same encryption mechanism. At the same time, Signal or XMPP will use a different format and a different mechanism, making them incompatible with messages from Meta, unless a client with a private key re-encrypts them.


Except this change seems to be triggering a reversal of that integration: https://help.instagram.com/654906392080948


WhatsApp doesn't backup to Meta servers. It only supports Google Drive on Android and iCloud on iOS.

You can also optionally encrypt the backups.


But then why don't they take no for an answer? Why keep nagging about it, and interpret "never" as "not in the next two weeks, but ask again, please!", if they have no interest in having these messages there?

(And no, "it's to help YOU, the hapless user!" is of course never the right answer. Corporations never do things for users without an interest of their own.)


That's why we need internet standards for IM. Sadly, not many people seem to care about whether their messenger is XMPP-compatible or not.


An argument, not a valid argument.


"How can we be sure a 3rd party implements the encryption properly" is the counter-argument.

How would you refute that? Trust users to check that some code is the same on both devices? What would prevent a bad actor from MITMing the whole thing from the start?


> What would prevent a bad actor from MITMing

It's not man-in-the-middle, it's man-on-the-end. If your chat app wants to spy on you, there is nothing you can do, but at least it becomes obvious and easy to analyze because it's client side code. It's not a counter argument to interoperability. You need to trust both sides, the same way web works.


Hmm, but it’s OK to trust that web browsers implement TLS properly? And your router isn’t MITMing you? Or your SSH app exfiltrating all your server information? Why is this different?


> And your router isn’t MITMing you

Can it do so if the encryption and key management is at the client?

> Or your SSH app exfiltrating all your server information

That's a small niche, and most services don't expose SSH to the public.

> OK to trust that web browsers implement TLS properly

hmm, you may have a point, maybe they'll ensure that only whitelisted browsers can access it, like Chrome with DRM for HTML. Only purpose is public safety. /s


Pretty simple actually, it either decrypts successfully or it's not implemented correctly. Same way push notifications work.


FWIU, this[1] was decrypting iMessages successfully. But it was also storing all your iMessages in a server-side database accessible to the server (instead of being E2E encrypted like iMessage is supposed to be) and leaking the authentication token to access the iMessages over unencrypted HTTP.

[1] https://arstechnica.com/gadgets/2023/11/nothings-imessage-ap...


So is key management, key exchange and discovery, revocation, etc.

That stuff is very hard to get right within a single app.

Now do it across mutually-antagonistic companies with incentives to not cooperate.


Interoperability and encryption do not contradict each other. OMEMO is such a federated encryption protocol for example.


Since they control the client, is it possible that the "ad profiling" can still take place on the client, after the message is received and decrypted for visualisation?

The E2EE only means the message is not readable "in transit" (as in after it leaves a Facebook client)


1. Meta can read the metadata perfectly well (who communicates with whom and when), which is enough for ads.

2. Meta doesn't want to be able to read messages, since doing so is a PR nightmare. Case in point: being ordered to do so by a government agency. People could switch to Signal.

3. Data isn't readable "in transit" anyway, since it's encrypted with HTTPS. Only Facebook servers could read it if they wanted.


As long as they control the client any kind of government order is still a problem for them.

However it does make it a bit more difficult for them to spy on a conversation, which is arguably a good thing.


What use is the metadata for ads?


If two of your recipients interacted with a certain ad, there's a chance you have similar interests.

Combine this with the frequency of your chatting, your location (at least based on IP), and the other little bits of stuff users give about themselves, and Meta doesn't really need to know specifically what the contents of your messages are.

Across the mass of their users, an informed smart guess is more than enough.


Example: I went to the dentist, and the clinic messaged me a confirmation via WhatsApp. The next day, I got several ads for orthodontic braces.


You make a post looking for a plumber, you spend time chatting with people who are plumbers in their profile - you are interested in plumbers


Top of my head:

* Who are you messaging most (best friends, family). If they like stuff, you might like the same stuff.

* When are you messaging people (awake time > profiling)

* Messaging companies (obvious, what are you into)


Also: Where are you messaging

You clicked an ad about Product X, you're messaging your friend B from a store that sells Product X

-> Serve ads about Product X to B.


They control the client, so they can do whatever they want. They can take the plain text, encrypt it with my key, encrypt it with their key, concatenate the two, send it to FB, split off their "copy" and decrypt it to do whatever with, and send my "copy" on to the recipient.

e2e isn't a tech issue, it's a trust issue. Do you* trust FB?

* You in general.


>1. Confidentiality in transit

As opposed to the prior step, "0. Analysis During Composition", in which the Messenger client does all the metadata analysis/collection while you are typing, and already knows all the tags it's going to assign to you for Meta before the message is encrypted.

Sure, third parties won't be able to see your message. But you did give Meta permission to analyse your content prior to posting.

This anti-pattern is all over Meta's products. You can see it in use when you type an update in Facebook using a browser - just try to leave your comment un-posted, or close the page, etc. Every single keystroke prompts Meta's analysis - which is completed when you press "Post" (prior to encryption/transfer ..)

So this is some slick positioning on the part of Meta's technical PR managers ..


> is it possible that the "ad profiling" can still take place on the client

I believe this is the future in a GDPR world. The server sends a list to the client of 1000 ads, and the client decides which to show based on all the data available locally and a big local neural network model to decide which you're most likely to click.
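This on-device selection idea can be sketched as follows. It's an assumed design, not Meta's (or anyone's) actual one, and a simple dot-product scorer stands in for the "big local neural network"; ad IDs, topics, and weights are invented.

```python
# Hypothetical client-side ad selection: the server ships a batch of
# candidate ads; the client scores them against a locally-held interest
# vector. Only the chosen ad (not the profile) is revealed by the choice.
def pick_ad(candidate_ads, local_profile):
    """Score each ad's topic weights against the local interest vector."""
    def score(ad):
        return sum(local_profile.get(topic, 0.0) * weight
                   for topic, weight in ad["topics"].items())
    return max(candidate_ads, key=score)

ads = [  # sent by the server; same batch for everyone
    {"id": "ad-1", "topics": {"watches": 0.9, "fashion": 0.3}},
    {"id": "ad-2", "topics": {"plumbing": 1.0}},
]
profile = {"watches": 0.7, "diy": 0.2}   # lives only on the device

assert pick_ad(ads, profile)["id"] == "ad-1"
```

The privacy catch, noted below in the thread, is that the server still learns which ad was displayed, which leaks a little about the local profile.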


IIUC the Brave browser is already experimenting with this model. They promise[0] "privacy-preserving" ads to users AND targeting to advertisers:

"...when a Brave Ad is matched to you, it is done on your own device, by your own device, inside Brave itself. Your personal data never leaves your own device."

The mechanism is very similar to what you describe.

[0] https://support.brave.com/hc/en-us/articles/360026361072-Bra...


The problem is that the 'secret sauce' of ad targeting is that model that decides what you're most likely to click... Ad networks really don't want that model outside their data centers...

Alas, the GDPR might force a rethink on that when it gets enforced with teeth.


Exactly this; that's why for both Messenger and WhatsApp the APIs are very closed and protected so that nobody can make any third party clients.

E2EE is great but does not help at all if you don't want Facebook to read your messages and profile you based on their content / who you talk to etc.


No, this is not what E2EE means at all. E2EE means the message is not readable in transit nor is it measured, scanned, sampled, copied, exported, or modified in any way without explicit action taken to do so by one of the legitimate parties to the conversation.

If the client just leaks the plaintext or leaks any information about the plaintext that encryption is supposed to protect then the encryption scheme cannot be described as "end to end".


I agree. The "end" isn't the network interface; it's the user interface.


Client dictates what ads are shown. Fb knows what ads are shown to who. Fb now can deduce what topics people are talking about. Technically convo info has leaked. If someone is getting served ads for Trump, they probably like Trump. If they are getting ads for Biden they probably like Biden. Etc…..


Yes, so that would violate the end to end principle. If the client downloaded all of the possible ads and the selection was totally local, and interaction with any of them was a user choice I think that could still be fairly described as E2E though. Or ads were fetched by private information retrieval.


“What self interested, selfish reason do these terrible people have to this ostensibly good thing?” - paraphrasing your question.

Answer - Message content wasn’t used for advertising. I believe it had been tried at some point and found to be sort of useless. But people like you won’t believe that, so end to end encryption might help build trust and increase engagement.


That's only a valid paraphrase if you think prioritising making money and value for shareholders above all else makes you "terrible people".

Personally I doubt that any more than low single digit percentages of people care at all about E2EE. Even me, as a tech person, I don't care about it, and I actively avoid Signal because of the inconveniences that E2EE causes.

This has been a very big effort to implement, and FB no longer deploys those kinds of resources on vague whims. I think most likely something to do with regulations, and not wanting to be on the hook for user message content, but it's just wild guessing really.


A security conscious person should assume that whatever can be exploited will be exploited - especially when dealing with actors that are economically incentivized to do creepy things.


We still have to trust that Zuck is going to do e2e correctly. I just don't have that trust in him. Messenger app is doing the encrypting, and I don't trust that FB isn't doing it in a way that they get the message also.


E2E encryption can be a negotiating chip with gov agencies: "we'll turn off encryption this time, but you'll forget about our shady ads business".


Firstly E2E can’t be turned off on a dime. WhatsApp e2e has never been turned off since it was turned on.

Secondly, please educate yourself about what the government actually thinks about the ad business you’ve described as “shady”. Even if it was “shady” to show ads based on preferences and never reveal or sell those preferences to a third party … the elected representatives in government really like having social media ads as an option in elections.

Here, read this so you can learn how government actually influences social media to do their bidding - https://knightcolumbia.org/blog/jawboned

> The senator’s office told Katie that they really wanted to ban that practice but knew they would never get it through the Senate since so many campaigns relied on the tools for their elections. So instead, they said they were going to pressure tech companies like ours to ban the use of the tool in the hopes that if one of us did so, the others would as well. Although we did not stop using Custom Audiences entirely, Facebook and other platforms did dramatically reduce the targeting options for political advertisers.


> show ads based on preferences and never reveal or sell those preferences to a third party

That's not how modern ad markets work. Those preferences are indeed revealed to third parties, specifically ad exchanges and DSPs, as part of the bidstream data.

Now, you say, those bidstream data contain no PII! Except that de-anonymizing those data is absolutely key to targeting, and is widely practiced.

Recently in the news: "Patternz", an Israeli spy-tech company, for years hoovered up and stored all the bidstream data across 87 ad exchanges and SSPs including Google, Yahoo, MoPub, AdColony, and OpenX, de-anonymized them, and claims to have profiles on billions of users including their location history, home address, interests, information about 'people nearby', 'co-workers' and 'family members'. (See also: https://pbs.twimg.com/media/F-5bA6QW8AAyfSK.jpg )

Please stop spreading dangerous misinformation about the threat programmatic advertising poses to our privacy and national security. Your extremely sensitive data are being passed around willy-nilly and this will not change until RTB is outlawed.


> claims


Soon it will be illegal to offer non-encrypted chats that aren't scanned for child porn. So either encrypt or start scanning.

(EU)


Ylva Johansson, the EU commissioner who proposed that (apparently failing [1]) law, has used Meta’s model behaviour in reporting CSAM material to NCMEC & EU authorities as the justification for why that law should exist.

Considering it was Meta’s policy to scan even when not mandated, it seems like an internal shift in attitude.

[1] https://fortune.com/europe/2023/10/26/eu-chat-control-csam-e...


This makes the most sense to me.


All large competing messengers have E2E encryption. Today it's what customers expect, and to stay competitive Meta must roll it out.


Given that E2EE messengers usually require being run on a smartphone as primary device, my guess is that they are trying to push the last remaining non-app-and-web-only users to their messenger app.

I'm one of them and I don't like this.


The end-to-end encryption also works on the web. I’ve used it and it’s excellent. You need to use a PIN to access your past messages from their backup HSMs, but other than that it’s completely transparent.


If I understand the parent comment right, this was an argument against ProtonMail's End-to-End Encrypted Webmail 5+ years ago.

The argument being that some assurances typically associated with E2EE (that "even we can't see what you're doing") are shakier without a disinterested third party serving the application to the user. If you have some target user `Mr. X`, and you operate the distribution of your app `Y`, you could theoretically serve them a malicious app that sidesteps E2EE. And since it's just a web app: the blast radius is much smaller than if you were to go through the whole update process with Google or Apple and have it distributed to all users.


Do you know if E2EE also works on the web without having to install the app? That would be novel.


Yes. It does.


??? FB Messenger is available on facebook.com ?


Yes, and my guess is that they are planning on removing the standalone messenger from the web version. You'll probably need to have the FB Messenger app installed on a smartphone device in order to use E2EE. That would make it impossible to write messages on the web version (i.e. facebook.com) without having an app installed. I currently do not have the app installed and am able to write messages on the pure web version of FB on desktop. My guess is that they are enabling E2EE to get the last remaining desktop-only-and-website-only messenger users to install the app. Hope that cleared it up.


According to the article, they went through a lot of trouble to make it work in web browsers. It would be odd to drop it after doing that.


Again, my point is not that FB Messenger will stop working in the web browser altogether. My point is that FB Messenger will stop working in the web browser if you don't have the FB Messenger app installed on your smart phone as the primary device.


OA mentions bringing E2EE to web clients


In a way that works well on low power mobile devices?

Most people I know using FB messenger do so on desktop via facebook.com and the app on mobile. I don't see them removing the former any time soon but if the web only version still exists for mobile users perhaps that will go.


You can't use the web version on mobile, it tells you to install the app.


Or if you have to use desktop mode in your browser...


WhatsApp (also by Meta!) supports E2E encryption on the web app.


Future interoperability with WhatsApp.


Yes, that was explicitly stated in an interview[1] a while back. Quoting from the specific section:

> “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on. But that basically has just meant completely rewriting Messenger and Instagram direct from scratch.

1: https://www.theverge.com/23889057/mark-zuckerberg-meta-ai-el...


Surely they could still mine the data you send and receive, because their app decrypts it for you and displays it for you.

So they could still be sending their server data like ‘likes watches’ without technically breaking the encryption.


They need to show the regulators they're doing something about the absurd level of data mongering they do as their quintessential business model.


What’s the total cost of encryption engineering / bau? I assume the ‘Facebook cares about my privacy’ goodwill from unknowing users will be worth more, but building a ‘secure’ public reputation has to start somewhere.


They can easily identify what and who you're talking to with message metadata, which is usually not encrypted. They can cooperate with government agencies this way. You don't need to know the exact content of a message, you just need to know who you're talking to and when.


Well, it’s encrypted in transit, maybe encrypted in their storage on their backend. But when the text, after being decrypted, appears in their textviews and websites I don’t think it is not kosher for them to tag every single word and glean lots of data/metadata from there and send home do their magic without associating with the identity. I thought that is something that is given. They also have not touched upon it. Except maybe the “Logging limitations” part - that section read like hogwash to me.

A kind of fatigue is setting in when it comes to Fb messenger and Instagram. They have already bloated these apps and they can’t really add any other gimmicks. So they are trying the “other” gimmick now.

My take, or guess, is that they are doing it because they really have nothing else to do.


> How is this gunna make them money?

It's about them not losing customers to the competition that does offer E2EE


"we cannot give access to the user's messages as we do not have them"


Complying with warrants and other requests has a cost. By claiming not to have access to them, they can save money. I think they or some other advertisers have used the actual messages before, but concluded it was too noisy to be worth it.


Given the timing this was decided, the answer was "if we can't see it, it can't cause a scandal for us and can't be regulated".


I mean, it's a proprietary platform. It can't even be guaranteed the data isn't tampered with - same case as Whatsapp.

It's a PR move to _say_ they did it.


Win more users and therefore more metadata by buzzwording.


Half of their customers might end up in jail with the latest Supreme Court ruling on abortion.

Nearly 1/4 of women have had an abortion in their lifetime. And there are also co-conspirators like husbands, Uber drivers, nurses and doctors.


> Half of their customers

I have no side to take in this discussion, but just wanted to point out that the USA is not the only country that exists. I know it sometimes seems that way on Hacker News, but I promise you that there is a big wide world out there that has nothing to do with the Supreme Court :)


The topic is about a large US tech company though.


The USA makes up only about 8-10% (250M) of total Facebook users (3B) based on a quick search I just did.

They're not even the largest user base by country, which is apparently India.


But they are the most valuable cohort in terms of revenue https://www.statista.com/statistics/251328/facebooks-average...


How the hell does Facebook generate $56 of revenue per US user per quarter? Are selling ads and selling personal information really that lucrative?


What is the share of revenue from US? I would guess it's not thaaat far from 50%. The median income in the US is like 20x of India, so presumably ad views from the US ought to be quite a lot more valuable. I would guess EU + US is the vast majority of revenue.


That doesn't matter if we're talking about % of users that could be affected by some local-US law.


It's published, but last I recall US users are worth about 4x the revenue per user of Europe, and >10x that of Asia.


Are you talking about individual users or the whole aggregate?

Because if the former, then users from Asia likely far outnumber users from the USA.


The post they're replying to says "half of their customers", which implies 100% of their customers are in America, which is obviously completely wrong.


It also seems to assume that all women in the USA have had an abortion???


Facebook might be a US tech company, but the US isn't their largest user base, that crown currently goes to India according to Statista: https://www.statista.com/statistics/268136/top-15-countries-...


The number of users matters much less for FB. The number of users that can be monetized matters much more, and the CPC for US users is always much higher than for other countries.


At this point, with the impact they have on the global stage and the fact that they will only pay their taxes where they want, it's a bit irrelevant to keep this frame of thought.


If their entire user base is American, and exactly half are women, and every single one of those women have had an abortion in the last year, and they all live in states where it's illegal, yeah, half of their customers might end up in jail.


My guess is deniability, just like Apple. Apple wanted to make CSAM detection work, essentially making the iPhone a law-enforcement weapon, but when their users hit back, they just made iCloud E2EE. With the number of child predators on FB, I am guessing that Meta wants to wash its hands of responsibility.


Their approach to message history is interesting, supporting key rotation as well.

However, metadata is still unencrypted, same as on WhatsApp. Meta knows who you talk to, and when - juicy enough for both ad targeting and government surveillance.


Also, I believe they create an ID on-device for media and can identify when (known) images are going back and forth. I don't know why they couldn't use this for targeting even though "the data is encrypted".


I think this is a next step we must demand after everyone gets on board with E2E messaging.

Metadata is still data!


I would say that people currently have a major misunderstanding about which of these is more important.

Let's imagine a situation where all the messages from Meta's platforms are leaked. In one scenario, the message content is plaintext, but the senders, receivers, timestamps and locations are encrypted (along with app-usage behaviour).

In the other scenario, all the content is encrypted, but the metadata is public. We would know whom everyone talked to, at what time, from what location, and at what intervals.

Which is more dangerous or damaging?


As a thought experiment, I’m interested in people listing metadata that fits the legal definition and teasing out types that the public would probably not think of as metadata.

I’ll start first off the top of my head:

- The (real) identity of you and every person you talk to

- The time of the messages

- The location they were sent from

- The specific device used to send them

- A sentiment analysis: were the messages positive? Negative? Depressed? Anxious? Sarcastic?

- A description of the pictures that were sent (for example by an on-device AI model)

- A transcript of any voice memos/videos


Raised elsewhere in this thread: hashes of media, maybe perceptual hashes, sent and received.

Read receipts.

User is typing indicators.


> perceptual hashes

And if you’re sending media they have on record, that means they can look up the exact same media and still have it qualify as metadata
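To illustrate why media hashes are so revealing even when content is encrypted, here is a toy average hash ("aHash"). This is purely illustrative and far simpler than anything a real system would use; the point is that a perceptual hash is stable under small changes, so the same known image always maps to the same lookup key:

```python
def average_hash(pixels):
    """Toy perceptual hash: 1 bit per pixel, set if the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p >= avg)

# An 8x8 "image" and a uniformly brightened copy of it.
img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
brighter = [[p + 3 for p in row] for row in img]

# A uniform brightness shift moves every pixel AND the mean by the same
# amount, so the above/below-average pattern (the hash) is unchanged.
assert average_hash(img) == average_hash(brighter)

# Which is exactly what makes server-side lookup of "known media" possible
# without ever seeing the plaintext image (names here are hypothetical):
known_media = {average_hash(img): "meme-1234.jpg"}
assert known_media[average_hash(brighter)] == "meme-1234.jpg"
```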


Exactly. This is pure marketing. "Normal" people do not know the difference and there is greater chance that they stay in Meta apps instead of switching.


I assume users must still be able to send messages from different devices, just by entering their login data into a new Messenger client.

According to their paper, they are doing client fan-out:

"Messenger uses this "client-fanout" approach for transmitting messages to multiple devices, where the Messenger client transmits a single message `N` number of times to `N` number of different devices. Each message is individually encrypted using the established pairwise encryption session with each device."

This means, the system is only as secure as its client registration protocol. They don't write a lot about it:

"At registration time, a Messenger client transmits its public Identity Key, public Signed Pre Key (with its signature), and a batch of public One-Time Pre Keys to the server. The Messenger server stores these public keys associated with the user's device specific identifier. This facilitates offline session establishment between two devices when one device is offline."

If I interpret this correctly, the server can, at any time it desires, silently add new clients. Those devices will receive all messages directed at that user, and will be able to decrypt it.
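The quoted fan-out structure can be sketched as a toy (stdlib-only; the stream cipher here is a deliberately insecure stand-in for the real pairwise Signal sessions, and all names are illustrative). The key point is that the sender encrypts the same message once per device in a server-supplied device list, which is exactly where a silently added device would slip in:

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher: keystream = SHA-256(key || nonce || counter).
    # NOT secure; it only stands in for a pairwise session's encryption.
    nonce = secrets.token_bytes(16)
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def toy_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    stream = b""
    counter = 0
    while len(stream) < len(ct):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(ct, stream))

# Client fan-out: one pairwise session key per recipient device, and the
# message is encrypted N times for N devices. The device list comes from
# the server, which is the trust boundary discussed here.
device_keys = {f"device-{i}": secrets.token_bytes(32) for i in range(3)}
message = b"hello from client fan-out"
ciphertexts = {dev: toy_encrypt(k, message) for dev, k in device_keys.items()}

# Each device can decrypt only with its own session key.
for dev, key in device_keys.items():
    assert toy_decrypt(key, ciphertexts[dev]) == message
```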

I guess that's in line with their bla-bla about setting user expectations:

"Our focus is on determining the appropriate boundaries, ensuring that we remain true to our commitments, setting the correct user expectations, and avoiding creating meaningful privacy risks, while still ensuring that the product retains its usefulness to our users."

Don't forget, their commitments are making profit and exploiting user data.


This sounds similar to what Apple's iMessage does as well. Ultimately, if the user cannot check which devices that their client is sending messages to, then yes, the central server can tell clients to establish a pair with a hostile device the central server controls.


> Typically, E2EE messaging services rely on local storage and encryption keys to secure encrypted messages. Messenger, however, has a long history of storing people’s messages for them so that they can access them whenever they need without having to store them locally. That’s why we’ve designed a server-based solution where encrypted messages can be stored on Meta’s servers while only being readable using encryption keys under the user’s control.

I remember Telegram's founder saying they don't use E2EE because you can't store messages with full E2EE, which is obviously BS because Matrix does it, and now Facebook too.

Now they say there is no "elegant" solution. https://telegram.org/faq#q-why-not-just-make-all-chats-39sec...


It’s also obviously BS since the end-to-end encrypted data in their secret chats passes through their server tier. They could simply save it!

The fact that they go to such lengths to convince us that they are doing a favor with their insecure-by-default approach has always rubbed me the wrong way.


But they don't store secret chats, meaning you can't restore them on other devices and that's their supposed problem with E2EE


Yeah -- they created a product offering that supports their weird worldview.

It's perfectly legit to store end-to-end encrypted data in its encrypted form and then secure the key material in some manner not visible to the cloud service provider. The Telegram folks have tried their darndest to convince us that this isn't really an option, and so therefore they must go insecure-by-default, even though they also pitch themselves as a bastion of secure messaging.

It's always rubbed me the wrong way, since their claims are so obviously false. Which makes me assume they either don't know their domain well enough to be trusted to do any end-to-end encryption properly, or they have some hidden agenda. Neither of those make me want to treat any part of the Telegram chat experience as secure.


They transmit the messages twice, once to relay to recipients via e2ee (signal), and the other to the storage backend using a different e2ee approach (labyrinth).


With their proprietary client, it does not matter how secure the protocol is. There's always a risk of a bad update or total compromise. And of course ads need to be targeted.


Exactly. E2EE is just "transit encryption" when clients are not open source/ audited/ trusted. And FB cannot be trusted (I'm not going to list instances by which they gained my distrust here).

Encrypting metadata is also really hard. See the Matrix (and XMPP) community for detailed discussions on why that is.

I use and advise others to use E2EE encrypting tools, all are open source, audited and popular.

Fake sense of safety is worse than understood unsafety.


That’s it right - they get the goodwill of “we value privacy, look we gave you e2ee” and yet they still get to use your data and serve you targeted ads. Creeps.


Exactly. The protest against Reddit's ban on third party clients needs to be more widespread. We want FOSS clients for all IM, discussion and social media platforms.


Same with WhatsApp and signal


Well, Signal isn’t taking personal data or targeting people with ads, so no. Signal AFAIK can’t access your data, and it’s open source, so I assume that has been verified.


Last I checked, Signal requires and uses your phone number.


Actually I think you’re right - there is one data point. Pretty impressive


I thought Signal is open source?


Also their builds are fully reproducible on Android.


Is there a guide how one can check that the PlayStore version matches the source code?



US spooks can get Apple or Google to deliver altered apps to targets, if nothing else.


Source?


AppStore and PlayStore are not open source, so you trust the distribution mechanism, is what I think parent wanted to say.


But you don't need to install them via their store. Also, you can always check the hash code of the binary.


I'm not sure about that. But true, in that case only the fact that's not open source is still in the way of me giving it my "baseline safe" approval. :)


It's "source available". They make changes to their server code, run those modified servers for a year or so and then release a source.


The server code isn't relevant in this case, you want the client code to be secure.


I can never get excited about E2EE... It's not because it isn't important, it's because over the years I've had 2 phones die in my hands, 2 family members have lost phones (one of which is sitting at the bottom of an ocean and is clearly unrecoverable), and phones are consumables that change every few years.

I see there's some effort here on history sharing. Does that effort allow recovery of a chat history after an unrecoverable death of a primary phone? That's (honestly) the only usability thing I care about when it comes to E2EE.


The solution to this is to encrypt the cloud backup with a key derived from a password that the user remembers and can enter into a new phone. The password has to be strong because the cloud provider has the encrypted data and has unlimited time to do a brute force attack. Unfortunately strong passwords are hard to remember and users hate them.

But there is a trick to prevent brute forcing of a weak and easy to remember password like your four digit phone unlock PIN. The trick is to have a secure element chip in the datacenter, with storage encrypted by a private key that can't be extracted from the chip. The chip stores an encryption key that unlocks your backup, but it can't be extracted from the chip unless you present it with the right weak password. The chip's firmware rate limits and caps the number of attempts to unlock the backup, and if too many attempts are made it ultimately erases the key and your backup is permanently lost. So you're protected against brute forcing even with a weak four digit unlock code. If you know the unlock code from your old phone and enter it into your new phone, the secure element validates it and releases the key so you can restore the backup.

Obviously you have to trust the manufacturer of the secure element for this to work, but it's probably a good compromise for most users because losing your backups when your phone dies is quite bad. I know Google's Android backup uses this method and I believe iCloud does as well. It seems like Messenger supports this too but you have to choose your own PIN because Messenger doesn't have access to your main phone unlock code.
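Both halves of the above (password-derived backup key, plus an attempt-capped secure element for weak PINs) can be sketched in a few lines. The scrypt parameters and the `ToySecureElement` class are illustrative; a real SE enforces the attempt cap in tamper-resistant hardware, not in software like this:

```python
import hashlib
import secrets

def derive_backup_key(password: str, salt: bytes) -> bytes:
    # Memory-hard KDF so offline brute force of the password is expensive.
    # Parameters here are just plausible examples.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

class ToySecureElement:
    """Software model of the attempt-capped key release described above."""
    def __init__(self, pin: str, backup_key: bytes, max_attempts: int = 10):
        self._pin = pin
        self._key = backup_key
        self._attempts_left = max_attempts

    def release_key(self, pin_guess: str):
        if self._key is None:
            raise RuntimeError("key erased after too many attempts")
        if pin_guess == self._pin:
            return self._key  # correct PIN: release the backup key
        self._attempts_left -= 1
        if self._attempts_left == 0:
            self._key = None  # permanent erasure: backup unrecoverable
        return None

salt = secrets.token_bytes(16)
key = derive_backup_key("correct horse battery staple", salt)
se = ToySecureElement("1234", key, max_attempts=3)

assert se.release_key("0000") is None  # wrong PIN burns an attempt
assert se.release_key("1234") == key   # right PIN releases the key
```

The attempt cap is what lets a four-digit PIN stand in for a strong password: with at most a handful of guesses before erasure, the keyspace of 10,000 PINs is unsearchable in practice.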


Personally, I am not attached to chat history and am totally fine with losing it. I've never backed up any chat history, and to be honest I feel weird about having 10 years old chats on my messenger account.

I might screenshot some important messages, but that's about it.


I am with you. Put aside lost phone, I still have all my older phones. Every time I start with a new phone, I use WhatsApp from scratch without restoring history/etc. I don’t back it up at all. Has not been a problem.


> The trick is to have a secure element chip in the datacenter, with storage encrypted by a private key that can't be extracted from the chip.

Ah yes, a HSM. Thus transferring the foundational "trust me bro" from Apple to someone like Thales Group.

Let's hope the HSM supports secret backup well enough to protect against server failure, and yet not so well as to allow the unlock attempt limiter to be bypassed.


> Let's hope the HSM supports secret backup well enough

Secrets backup for HSMs from a certain vendor—that you may or may not have named in your comment—I've worked with is actually the easy part. You just make copies of it and all the key data and check it into a git repo, because all of that data is protected by an HSM secret. Distributing that HSM secret among several HSMs for redundancy is also pretty easy.

The hard part is all the administration around it, specifically around custody of the smart cards that contain chunks of the HSM secret: where are they protected, where are the backups of the cards, who has access to them, coordinating sufficient card custodians to meet quorum, etc. You need to meet quorum to provision HSMs with the same secret.

The real "trust me" part of this is arguably less that the vendor backdoored the HSMs, and more that Apple pays the vendor support contracts (that hardware eventually fails) and maintains the knowledge continuity for the teams responsible for administering these HSMs as people join or leave those teams over time.

For what it's worth, this is pretty much why you don't see HSMs used often at less mature companies.



I backup my (encrypted) Signal conversations and sync them to some computers with Syncthing. I think it covers the "phone is at the bottom of the ocean" case.

To be fair I haven't done any disaster recovery yet, so it might not work that well...


That's not something you can ask of a casual user though. Not saying I'm one, but I already can't recommend Signal to random friends and family who are not tech-savvy, for reasons like this one.


Yep, that's the old convenience vs. privacy dilemma...


How do you backup Signal stuff? Is it included in local "iTunes" Backups?


Apparently it is not possible to automatically backup your message on iOS. (I run Android.)

Source: https://support.signal.org/hc/en-us/articles/360007059752-Ba...


OK, what about on Android? Is it more flexible on that side?


You can export all your FB data easily anytime, in the sense that is also easy for casual users.


Yes, the storage back-chains epoch secrets specifically for this purpose of restoring history.
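A toy sketch of what "back-chaining epoch secrets" could look like (names and construction are illustrative, not Labyrinth's actual scheme): each new epoch stores the previous epoch's secret encrypted under the new one, so a restored device holding only the latest secret can walk backwards through all of history, while a removed device holding an old secret cannot derive newer epochs:

```python
import hashlib
import secrets

def kdf(secret: bytes, label: bytes) -> bytes:
    return hashlib.sha256(secret + label).digest()

epochs = []  # list of (epoch_secret, back-link to previous epoch's secret)
prev = None
for n in range(4):
    s = secrets.token_bytes(32)
    # Toy "encryption" of the previous secret under the new one (XOR with a
    # derived pad); a real scheme would use authenticated encryption.
    link = None if prev is None else bytes(a ^ b for a, b in zip(prev, kdf(s, b"link")))
    epochs.append((s, link))
    prev = s

# Restore on a new device: given only the newest epoch secret, walk backwards
# through the links to recover every earlier epoch secret.
current = epochs[-1][0]
recovered = [current]
for i in range(len(epochs) - 1, 0, -1):
    _, link = epochs[i]
    current = bytes(a ^ b for a, b in zip(link, kdf(current, b"link")))
    recovered.append(current)

assert recovered[::-1] == [s for s, _ in epochs]
```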


The white paper "Server-Side Message Storage" section links to a google doc, labelled as "draft" and with no public access. Should that point to https://engineering.fb.com/wp-content/uploads/2023/12/TheLab...? Pretty poor review.


The real link is https://engineering.fb.com/wp-content/uploads/2023/12/TheLab....

I also stumbled over that; only the link from the other whitepaper is broken, the one on the parent page works.


Can I just say that I am a little surprised that their engineering blog is hosted on wordpress?


As someone who runs an enterprise WordPress host, Facebook isn’t that surprising - loads of large organisations use WordPress either within their marketing department, or as their “second CMS” (AEM is very often the primary). We’re still seeing adoption growing too. The ones that might surprise you are banks and other financial institutions :)

Ultimately, WordPress is as secure as any other piece of software, but the ecosystem is so large and varied that there’s a low bar for many add-on plugins. A lot of enterprises build their own plugins for that reason, rather than using the full power of the ecosystem.

(Disclaimer: I’m also a member of the WordPress security team, but not speaking on behalf of them.)


As a Meta employee, I consider this a victory against NIH :)


Why? Afaik all Microsoft blogs are hosted on WordPress too. Everyone uses WordPress.


Wordpress is pretty big in enterprise blog-like sites, with their WordPress VIP offering.


Meta gets a lot of flak for privacy, but at the same time, they end-to-end encrypt the majority of communication happening globally (WhatsApp+Messenger), at cost to the company, with no obligation to do so.


> at cost to the company

If by "cost" you mean Meta being in the business of siphoning user behaviour, Meta controls the E in E2E a.k.a the apps, so it's a matter of trusting them to not do covert on-device analysis + result exfiltration.


Plenty of people have reverse-engineered the apps and found no evidence of this. They use the same protocol as Signal under the hood.

Many cybersecurity engineers passionate about this stuff have worked for Meta. They, too, would have blown the whistle at some point.


The E2E protocol is immaterial, it's about what the endpoint app does and which telemetry it reports.

Not saying they do it, again it's about trust in the context of:

- Meta (the company, not the employees) having a bad track record

- Meta's business model being what it is (building profiles and selling tooling around that), creating tension with privacy matters


Their track record with E2EE is pretty great, given that they opt to disable WhatsApp in countries that ban E2EE instead of disabling E2EE.


No company is obligated to do anything. Such lack of obligation is not sufficient reason to praise companies that do the bare minimum to keep user data safe. Sure they aren't obligated but how on earth does that matter?


E2EE certainly is not the "bare minimum", TLS is. Maybe encryption at rest, but even that's debatable.


If E2EE is “the bare minimum”, how are there so many successful and thriving companies who don’t do it? And why are you even on HN, which doesn’t do it?


Good for Meta and their user base! It's great to see Big Tech follow suit. We've been doing this for a decade already as only end-to-end encryption can truly protect data.

Plus, it's going to help with fighting bills like the Online Safety Bill and Chat Control when huge corporations join us; so bottom line: great news!


Possibly good for Meta (if they were forced to do this by law, it means they did not want to do it themselves, which by definition makes it "not good for Meta").

Certainly not good for their user base, as (as many pointed out) it's not safe if the clients are all closed source. This promotes a false sense of security, which is worse than an understood lack of security.


Agreed, should have phrased this more carefully!


What Big Tech really wants is metadata. Metadata and which images are sent isn't encrypted. So this is E2EE minus what Meta wants to see. If one cares about privacy, one cares about metadata. Access to metadata equals poor privacy and is fluff encryption at best.


Absolutely right, and for that reason most tech-savvy people will still not trust Meta with their data. But that they start encrypting end-to-end is a good thing, regardless.


I wish they didn't do this.

They already have an end-to-end encrypted messaging application: it's called WhatsApp. I have seen so many people (and have myself been) bitten by WhatsApp's E2E implementation: messages lost because your phone was barely online and you "read" the message but didn't fully receive it, leaving you to awkwardly ask people to re-send things. Plus the constant need to backup your messages because if you don't you can lose access to them forever. Plenty of my family have lost messages/images that were sent to them and were important to them.

I'd rather not deal with this. Sometimes I want all my messages to be stored on a big company's servers. They should at least give people the option to choose.


Apparently in FB Messenger, conversations will still be stored in the cloud, albeit encrypted:

> Messenger has always allowed clients to operate off of a small stored local cache, relying on a server-side database for their message history. Neither WhatsApp nor Secret Conversations operated in this manner, and we didn’t want all users to have to rely on a device-side storage system. Instead, we designed an entirely new encrypted storage system called Labyrinth, with ciphertexts uploaded to our servers and loaded on-demand by clients, while operating in a multi-device manner and supporting key rotation when clients are removed.


The big company does not want to store your messages, as they have to deal with Chinese, Turkish, Saudi and other authorities who want to read people's messages. If there is “nothing to read” (let’s forget about metadata), then governments and such should abuse the system less.


Doesn't matter, if they want to keep operating in such countries, they've got several options to choose from, amongst providing a backdoor or disabling E2EE.


Sometimes I feel like these are the “plastic-free bag” solution[0] designed to make the market embrace the “tried and true” old way.

[0] There was this chip bag from a major chip manufacturer and it proudly claimed the bag was plastic free (might even have been compostable), but it was the thinnest, loudest, crinkliest bag you’d ever heard. It seemed like the chip manufacturer board meeting went like this: “The people want us to cut out plastic! For the environment or something. Don’t they know how much easier and more profitable plastic bags are?? You know what? If they want plastic-free we’ll give them plastic-free... We’ll make them regret even asking...”

That bag didn’t last long before it vanished never to be seen again.


Honestly, this seems like the sort of thing that they just didn't consider when testing. Maybe they even saw some 'Bag is unusually loud' notes and thought 'How bad could it possibly be?' and greenlit it. Feels more like incompetent bureaucracy (which I'm sure we all understand) than something malicious.


Much more likely they had a strict time limit (internally or externally imposed) on what they considered biodegradable. If you make it stronger but thinner, bacteria can break it up much faster which leads to faster breakdown. By comparison the bag being loud seems pretty irrelevant if it's just not possible to make a thicker bag degrade faster.

Also, many biodegradable plastics are more brittle than more common plastics. They only way to keep the bag flexible in that case is to make it thinner.


Any sufficiently advanced incompetence is indistinguishable from malice.


I’m not trying to prove that there was malice in the case of the chip bag, I’m just using it as a way to explain the tactic.


> We are beginning to upgrade people’s personal conversations on Messenger to use end-to-end encryption (E2EE) by default

The first line of the article suggests that it's an option


> They should at least give people the option to choose.

Messenger does. You can have a normal chat and a private, end-to-end encrypted chat at the same time with the same person, both completely separate.


Possibly related: Meta is removing cross-platform chats between Instagram and Messenger [1].

[1]: https://news.ycombinator.com/item?id=38528306


Interesting, I thought the plan was to finally make all three Meta messengers interoperable with end-to-end encryption.

I wonder what changed. Maybe testing showed that people actually prefer them to be separate?


That was an insane idea that was only put forward as a hedge against someone breaking them up.


IMO maybe it's because of EU regulations


What are the implications?


Daily active users are going to grow on both platforms.

Let's consider you have Instagram users on one-side, and Facebook users on one-side.

As long as you have at least one contact using only Facebook (like your parents), then you have to be active on both platforms in order to talk to your contacts.

If the two platforms would be unified, then you would be active only on Instagram for example.


or you would leave one of them.


> Message contents are authentically and securely transmitted between your devices and those of the people you’re talking to. This is, perhaps, the primary goal of E2EE, and is where much E2EE research and design work is targeted, such as the Signal protocol we use in our products (such as WhatsApp, Messenger, and Instagram Direct), or the IETF’s Messaging Layer Security protocol, which we helped to design and was recently standardized.

Will Messenger eventually use IETF MLS?


I imagine somewhere in the planning stages is complying with DSA by adopting MLS and allowing interoperability between WhatsApp and FB/Insta Messenger and other services.


Messenger/Facebook and Instagram interoperability is apparently being discontinued: https://help.instagram.com/654906392080948


oh wow, so will E2EE apply to Instagram too? or just Messenger?

Edit: nvm just read the last paragraph of the post


By now we know none of this means anything if a notification is triggered with the message content.


That's not true. Both iOS and Android support sending encrypted notifications that are decrypted on-device.

How do you think Signal notifications work?


Note that Signal only uses push notifications to wake the app up. Then it directly connects to the Signal service to receive messages.


We are talking about whatsapp, no?


It's strange that your comment is not at the top.

I looked at the whitepaper and they didn't even mention it.

The reason e2e encryption got enabled is to make people feel safe.


This is at the same time as they've announced they're getting rid of encrypting their outgoing messages with PGP! (If you add your public key of course!)

I was always very impressed by this-- every service that sends emails should support this. Even banks don't!


This is because hackers were using this to lock people out of their own accounts. They would add a PGP key and then the user could no longer read any emails from FB to recover their account. There are maybe alternative solutions, but it's not a bad reason to remove it IMO.


This seems like a pretty dumb reason.

If they can set the PGP key they can also change the email. If the account recovery team allows access to recently removed emails as part of the recovery process then it should also allow contacting those addresses without a recently added PGP key.

Logically adding a PGP key is equivalent to changing the email, the previous person can't access the messages anymore. If the recovery process handles these cases differently it is a flaw in the process.


Regular people talk illegal things on messenger all the time, and if law enforcement gets addicted to sweet data requests, users will flee to wherever.

E2E messaging is a better product. Offering it free (at a loss) makes it hard to beat.

What's more, some major govts may be prompted to ban E2E, so they keep their data and kill Signal and others, while they are the 'good guys'.

FB can probably create a 99% accurate ad profile on you with just metadata, likes and tracking you on the web. If not they can push local profiling models on your phone.

With all that said, I still think it is a genuinely good thing for humanity as it is now, and I am cautiously optimistic.


I don’t know how Meta will benefit from this (perhaps they are protecting themselves from upcoming regulations in the EU). The important question is who owns the key; if they own it, this means nothing in terms of protection against their own usage.

Even if they don’t have the key, they don’t really care about the messages themselves anymore: 1. It’s risky business, regulations might hit you hard. 2. Metadata is good enough for them. 3. They own the client, so they know how to extract more than enough useful data in the end.

E2EE is an important marketing trick nowadays; most users see it as if it makes them completely anonymous to companies like Meta. After all, ads are their only source of revenue. They will do whatever it takes to satisfy advertisers, not the users.



Just to be clear though, Messenger is still closed-source, so this all still gets lumped into the "source: trust us" bucket, no?


I think practically the best thing you can have is independent audits. Ideally multiple of them, over time. This is the same for open source and proprietary stuff. Otherwise, even if the code is not malicious and not backdoored, there's still no guarantee that it's not accidentally buggy.

That doesn't prevent a malicious update from coming around and just sending the entire database wherever, but nothing stops that from, say, Element, if you're not actively vetting the updates. The best you can really do is hope that nobody compromises it (or that if somebody does, it gets caught as early as possible). Thankfully it seems like outright compromises to this degree are rare (as far as we know), whether the software is open source or closed source.

Basically imo it's a mixed bag. I don't see any obvious way to push the status quo vastly far forward because there's no way to really prove, especially to non-technical users who aren't cryptographers and programmers, that the software is 1. secure 2. doing what it says.


There's no process for verifying that a particular binary is built from known source code, or that the source code lacks any sneaky back doors.

The gold standard is and probably always will be analyzing the binary itself, with disassemblers, debuggers, etc.


Or reproducible builds to prove that the app I downloaded from the Apple walled garden matches the one I built myself from this known-good source code.


It's not even possible to extract the executable without jailbreaking


That’s a good point, but in theory only one researcher needs to confirm that an executable from a jailbroken app contains a build that’s consistent (or inconsistent!) with the published source code. We don’t all need to do it.


In theory they can force apple to serve a backdoored version to a particular region/person, which means one confirmation from a random security researcher isn't enough.


You wouldn't actually need to extract the executable for reproducible builds to be useful.

You could also just have the ability for your phone to reliably tell you the hash of the executable, without giving you the executable itself.
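For what it's worth, the comparison itself is trivial once you can get at the bytes; the hard part is trusting the channel that reports them. A minimal sketch of the idea (the file here is a throwaway stand-in for an app binary):

```python
import hashlib
import os
import tempfile

def file_sha256(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the installed binary:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is the app binary")
    path = f.name

installed = file_sha256(path)
# Hash of a (hypothetically reproducible) local build of the same bytes:
rebuilt = hashlib.sha256(b"pretend this is the app binary").hexdigest()

assert installed == rebuilt  # matching hashes => byte-identical binaries
os.unlink(path)
```

Of course, this only helps if the platform reports the hash of what it actually runs, which loops back to trusting the OS vendor.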


Why would you trust the hash function if you don't trust the rest of the platform?


This assumes you trust eg Apple (to a certain degree, eg to have their hardware provide legitimate hashes, but eg not to just run a messaging services), but you want to avoid also having to trust Meta.

More generically: you might trust that a company can do the Right Thing now (or at most points in time), but you avoid having to trust that they always do the right thing at all points in time.

See how Apple famously could refuse to give law enforcement access to people's phones, because Apple deliberately designed their systems in such a way to remove that ability from their future selves.

Similarly, a company that doesn't keep any logs, can't be forced in the future to divulge those logs.

They can be forced to start keeping logs, and then be forced to divulge those. But doesn't work retro-actively and is still one extra hoop for the forcing party to jump through. And perhaps you can even set up matters such that adding this vulnerability can't easily be done in private.


Ah, yes, I was mixing up iMessage and Messenger here (too much messaging encryption news these days!) – for the case of trusting your OS and hardware vendor, but not a third-party messenger's vendor, reproducible builds would indeed be advantageous.

It's a real shame the app store does not allow for reproducible builds.


Pretty much all security is trust based. Any product you pick you trust the vendor not to fuck up or be corrupted.

You can argue then lets just use open source. Ok but is it that much of a security guarantee? Open source products are of limited functionality. It is great for web servers and frameworks, stuff developers care about. When it comes to feature rich client applications the track record is not nearly as good. Also who is going to pay the server costs?


If there are no security audits by disinterested third parties, then it can clearly not be trusted. If they want trust and can provide it, they likely would have done this. Have they?


Sure but you then have to trust the auditors. It all ends up in trust.


Sure, trust in a third party that has a good reputation versus the company itself that says "trust us". Big difference.


It's slightly better, because lying about this would be securities fraud for the company.


My personal security audit is to look at what a company returns when compelled to do so by law enforcement.

A passing grade is a block of garbled encrypted mess for which they have no unlock key.


Yes this is 0% trustworthy


So the client basically transmits the messages twice - once to relay to recipients via e2ee a la signal which specifically prevents the decryption of historical messages (forward secrecy), and the other to the storage backend using a different e2ee approach which allows the recovery of history (labyrinth via epoch segmentation and back-chaining of secrets).


WhatsApp is end-to-end encrypted, but all messages have to be stored locally, which is a problem on cheap smartphones. Telegram can work on a cheap smartphone by storing most messages server-side, but it's not end-to-end encrypted by default. Would be great to get the best of both worlds with Messenger, and see the other messaging apps follow suit.


>but all messages have to be stored locally, which is a problem on cheap smartphones

Why is that a problem for cheap phones?


Because if you send and/or receive a lot of messages with large attachments (pictures and videos), then it will eat up a lot of your storage (it can be gigabytes), and if you use a smartphone with only 64 GB of storage, it can quickly become an issue, and then you have to decide what to delete.


That's a non-issue for most cheap phones from the last few years, even sub-200 Euro phones, as most ship with at least 128GB of base storage. My OnePlus 3T came with 128 GB of base storage. In 2016! 7 years ago.

Hell, on Amazon right now I can find a brand-new 99 Euro 'Chinese Brand' Android phone with 128GB of storage and 8GB of RAM. Granted, I wouldn't recommend anyone actually go and buy that one, but it shows that even if you're tight on cash and need a phone with lots of storage, you can get it even at rock-bottom prices.

Only Apple still shortchanges you in 2023 with 64GB of base storage even on 500+ Euro phones, but that's an Apple-only problem, not a smartphone problem.

Still, I'd much rather pay a bit extra for more storage on a phone to keep my encrypted messages locally than in the cloud of some shady app like Telegram that's "FREE" and yet needs to finance its massive cloud bills somehow.


iPhones have 128GB base storage and have had that much since last year. 2021 was the last time a 64GB phone was made by apple.


Only if you choose to ignore the iPhone SE, the cheapest iPhone currently on sale by Apple.

