Since the issue was in both the iOS and Android versions of the app, and it was caused by an integer overflow, does that mean the bug was in a bundled C++ library implementing WebRTC? Is there any information about the source-level cause of the issue?
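The advisory doesn't give source-level detail. For context, the classic shape of this bug class in C/C++ media code is an attacker-influenced size computation that wraps, producing an undersized allocation followed by an out-of-bounds write. A purely hypothetical sketch (none of these names come from the actual codebase):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical illustration of the bug class, not WhatsApp's actual code.
void process_frames(const uint8_t* payload, uint32_t frame_count) {
    // If frame_count > UINT32_MAX / 64, this 32-bit multiplication wraps,
    // and 'total' ends up far smaller than the data copied below.
    uint32_t total = frame_count * 64u;
    std::vector<uint8_t> buf(total);  // undersized allocation after the wrap

    for (uint32_t i = 0; i < frame_count; ++i) {
        // Writes past the end of buf once offsets exceed the wrapped size.
        std::memcpy(buf.data() + static_cast<std::size_t>(i) * 64,
                    payload + static_cast<std::size_t>(i) * 64, 64);
    }
}
// The fix is to validate before allocating, e.g.:
//   if (frame_count > UINT32_MAX / 64) return;  // reject oversized input
```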
Notably, on iOS there's no good way to isolate unsafe native libraries from the rest of your app without violating App Store policies, because Apple requires apps to be single-process and doesn't let third-party apps use its own sandboxing APIs.
That sounds like a huge hit to energy use, if it's even feasible for something like WebRTC, given the communication costs with the native process.
Hrm. If you had a specific WASM engine embedded in the app, you might be able to get it to precompile the bytecode (to avoid violating the runtime modification policy); that might be low-enough overhead (since you also control the communications layer & could make it cheaper).
Plenty of mobile apps, especially at large companies like this, rely on a ton of C code. It makes it easier to support features on both Android and iOS. I'm sure there are more benefits I'm not aware of.
The usual argument is that safer languages are needless because bugs happen anyway; yet Apple is moving to Swift and adopting hardware mitigations to fix these kinds of issues.
Regardless of the actual usage surface, the need is seen as relevant enough for Apple management to release the engineering budget to make it happen across the whole stack.
Funny how all the WhatsApp advisories since 2019 just move the same vulnerability around. Always an innocent stream processor missing a bounds check. Oops.
I noticed the same thing with Cisco vulns a while back. How many times do you hard code credentials before it becomes an intentional backdoor rather than negligence?
It's more about the corporate culture and how security is treated.
Sure, it might be convenient for the NSA, who probably use these bugs when they're found, but it's unlikely that a company of Cisco's size could intentionally do something like that in a coordinated way and keep it secret too.
If you keep finding bedbugs in your house it doesn’t mean someone is intentionally putting them there. It just means that it’s really hard to get rid of all of them and more pop up naturally.
Out-of-bounds indexing is always fun. I'm interested in programming languages with mostly-watertight spatial memory safety, which can prevent many exploits at a minimal ergonomic/flexibility cost, compared to temporal memory safety, which requires a borrow checker and endless compiler complexity. (Plus, I find it easier to statically verify that you don't use-after-free in the limited code interacting with resource lifetimes than that you don't index out of bounds in the majority of business logic interacting with arrays.)
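To make the spatial/temporal split concrete, a small illustrative C++ example: a bounds check catches the first bug class but says nothing about the second.

```cpp
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // Spatial bug: out-of-bounds index. A checked accessor catches it;
    // v.at(9) throws std::out_of_range instead of corrupting memory.

    // Temporal bug: the reference below dangles once the vector grows and
    // reallocates. No bounds check helps here; preventing it needs lifetime
    // tracking (a borrow checker, a GC) or careful review.
    int& first = v[0];
    v.resize(1000);    // may reallocate, leaving 'first' dangling
    // int x = first;  // would be use-after-free-style undefined behavior
    (void)first;       // silence unused-variable warning; no read occurs
}
```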
Google just published that 50% of exploited vulnerabilities in Chrome are use-after-frees, so I'm very bearish on the idea being pushed that spatial safety alone will be enough.
Temporal safety doesn't require a borrow checker, etc.; you can get it with a GC.
Any language can implement safe vector/array access with a `get` call that returns a sum type or throws an exception when no element exists at that index.
Quite frankly, manually indexing into an array should be avoided entirely if possible, even more so in safety-critical software.
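For concreteness, a minimal sketch of that pattern in C++, using std::optional as the sum type (the free function `get` here is illustrative; std::vector::at is the standard throwing variant):

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Illustrative bounds-checked accessor: returns a value or "nothing",
// so the caller is forced to handle the missing case explicitly.
template <typename T>
std::optional<T> get(const std::vector<T>& v, std::size_t i) {
    if (i >= v.size()) return std::nullopt;  // out of range: no value, no UB
    return v[i];
}

// Usage:
//   if (auto x = get(samples, idx)) { consume(*x); } else { /* bad input */ }
```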
It's easier to name languages without ergonomic bounds checks: assembler and C. The STL has had them for ages, but only in debug mode, which makes them useless in practice; still, that's not a technical problem.
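To be fair, the checks no longer have to be debug-only. A small example, assuming libstdc++ (the _GLIBCXX_ASSERTIONS flag is libstdc++-specific):

```cpp
// Build with: g++ -O2 -D_GLIBCXX_ASSERTIONS demo.cpp
// With that flag, libstdc++ bounds-checks operator[] even in optimized
// builds, turning out-of-bounds access into an abort instead of silent UB.
#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // at() is always bounds-checked, regardless of build mode:
    try {
        std::printf("%d\n", v.at(10));
    } catch (const std::out_of_range&) {
        std::puts("index out of range");
    }
}
```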
What is the worst-case scenario here? Will the adversary be able to break out of the sandbox? I.e., will the adversary be able to access non-WhatsApp data?
I don't think they can break the sandbox, because then Apple would most likely have removed them from the store already.
However, what they can do is everything the app can do:
- get your contacts
- get your messages
- get your photos
- get your location
- get the people you chat with
- read the statuses of your contacts
Therefore I would assume that with the right tools you can directly identify a phone user, their social circle, and the most-discussed topics on an "encrypted messenger"; and since the messages are right there: sentiment analysis of the conversations had, and therefore the social status of the people you chat with.
Actually it's quite terrifying. But so is the idea that our faces and words are being handed around the world by a tech company without international regulation... so, business as usual =)
What is the impact of this vulnerability? I don't see what an attacker can do if they successfully exploit it. Since this is in a bundled library, does it help get past the iOS sandbox, for example? If so, can one steal WhatsApp keys? What is the fallout, does anyone know?
> An integer overflow in WhatsApp for... iOS prior to v2.22.16.12, Business for iOS prior to v2.22.16.12 could result in remote code execution in an established video call.
Huh, I guess you can finally run your own software on your iPhone. Whatsapp FTW.
These applications should be treated as Trojan horses. If they aren't open source and you are a journalist, dissident, or anyone else targeted by nation states, you have got to assume your WhatsApp/Facebook is being used to compromise your device.
It enables independent, non-involved, non-interested parties to check it. Also, when the protocol is open, it enables multiple implementations; keeping a known-by-few trojan-style bug in all of them is especially difficult.
That's true. And yet the Linux kernel consistently has bugs like these in it. If you want exploitable vulns in literal media codecs, go have fun looking through the history of ffmpeg.
I love open source. In so many ways it is uniquely responsible for the development of our technology landscape. It is observably not a meaningfully different path to secure code than closed source development.
I think that's true of all software; people are fallible, open source or not. I'd love to see average time to discovery and reporting in closed versus open source, though. I've always heard it's better in open source, which intuitively makes sense, but by the nature of closed source I think gathering the data will be challenging, though valuable for a tight comparison.
Lots of people have attempted this sort of analysis. You can find attempts at it in ICSE or FSE or wherever. But frankly there is no way to make effective science out of this. The data are always messy, and huge compromises are made to get anything even close to an apples-to-apples comparison. I don't believe that anybody who claims it is meaningfully better in open source has any actual data backing that up.
If you want my opinion, there is a huge gap between the tiny portion of open source projects that get any real professional scrutiny and the rest of the open source ecosystem. For something like the Linux kernel, there are a lot of professionals deliberately aiming their novel tools at it and reporting issues. This is clearly better than nothing, though I'm not certain it is so much better as to call it a big win. And it is the result of a large number of different teams all looking at this one codebase.
But pretty much immediately below "the Linux kernel" in visibility, everybody stops caring. Even hugely deployed, security-critical open source projects that manage media decoding and network stacks get absolutely zero professional analysis. All these projects get is the useless "drive-by CVE report" garbage, where somebody throws an off-the-shelf system at the repo and reports everything it spits out, no matter how useless.
Seems straightforward to compare open-source vs closed-source bug fix performance. The tricky part is adjusting for size. "Many eyes make all bugs shallow," but I wonder how many eyes are on the typical random Java library that's been built into Spring Boot forever and that no one even thinks about anymore (there are probably 30+ of these). How often does any company using Spring Boot, even the very security-conscious ones, look at the source of these dependencies, even when updating them? I've heard Google code-reviews literally all of their dependencies, but who else could afford to do that?
(In fact, this impossibility is a big part of why I'm so bullish on redbean - being really small means really fast and really really secure. It is a joy to deal with so few moving parts in a server!)
> Seems straightforward to compare open-source vs closed-source bug fix performance.
You could publish a best paper at ICSE if you pulled this off. There are so many things that make it challenging. For starters, we don't even have ground truth for what bugs exist. Even just looking at known bugs, we've already skewed our process dramatically based on the different development processes of different projects.
You are right to question how many eyes are on typical random libraries. The answer is zero. Even huge libraries have extremely few eyes on them. When it comes to "many eyes," it is basically just the Linux kernel and a very small number of other projects that get this sort of attention. The large majority of open source projects, even those used by millions of other projects, get zero meaningful attention beyond "hey, I threw my tool at everything on GitHub and spammed owners with nearly useless reports."
Good insight about the long tail of open source projects that don't have the same level of activity or interest from the developer community. I hadn't considered how sharp that drop-off is, even for projects that are still fairly widely used, simply because the number of people with the know-how, and the interest, to look for vulnerabilities is much smaller than the available project surface area.
I'm not even sure that "long tail" is the right phrase for it. I'd say "virtually all." The number of open source projects that get meaningful external scrutiny from security researchers is in the tens. Tens.
There is some automation out there. It is largely worthless. Some findings are real, like "hey, you've got a private key committed over here," but you pretty quickly run into high-false-positive-rate garbage when looking at automated systems.
>Good insight about the long tail of open source projects that don't have the same level of activity or interest from the developer community
I don't think "long tail" is a good way to put it. Both OpenSSL and Log4j had millions of deployments and still had major bugs. I'd argue it's Linux, then everything else.
It's definitely a lot better in memory-safe languages (and especially in applications that don't depend on C libraries under the hood). You can still have security bugs due to logic errors, but you won't get remote code execution or the ability to read arbitrary memory. And in general, bugs are much more likely to cause a crash than to give the attacker access.
I suspect that once C has been supplanted all the way down the stack, it might actually be feasible to eliminate these kinds of vulnerabilities entirely for apps where security is of the utmost importance.
It is true that memory-safe languages are a massive massive massive boon! I believe that the entire industry needs to be making plans to find a way to shift all applications that operate on untrusted data away from C and C++. But this is completely orthogonal to the purported security benefits of making your source available.
IMO it’s not entirely orthogonal. One of the main benefits (from a security perspective) of open sourcing your app is to allow it to be audited more thoroughly. But even with that kind of auditing it’s hard to make thing secure in non memory safe languages. If you have both, then we might expect open sourcing to more reliably lead to an actually secure app.
I don't understand. Is the idea that memory errors are too hard to find, but that once we've eliminated them at the language level, auditors can review OSS projects effectively and verify that they are free of vulns? log4j should be a very clear example of how that falls over.
They are the only party with access to the code. They don't even need to pretend bugs are unintentional. The world had a lesson with PRISM; whether we learned anything from it is a different matter.
Seems you might be missing a key point: without transparent, open access to the source code, nothing is easily found.
At a certain point, if a murderer keeps "losing" the murder weapon, you might consider the evidence you find to be that of criminal obstruction.
There is evidence that everything is more easily found when it's not hidden, obfuscated, or obstructed.
Sure. It is easier to throw an off the shelf analysis at source than worrying about binary decompilation with ghidra or whatever (well, for binaries - for bytecode it is almost exactly the same when given bytecode or source). But is this a meaningful difference? Real researchers, both academic and non-academic, do inspect open source code and report vulns they find. But this isn't actually actionable information from the perspective of a user who wants to make a risk assessment about their software choices. "Hey, you can run ${STATIC_TOOL} on this app" does not actually convert to "app is free from vulns." It just doesn't.
I love static analysis for vuln detection. I did my PhD on it. It remains my day job. It helps us find vulns. It doesn't actually convert us from unsafe software to safe software.
Even the App Store version of Signal is allegedly not the same as what's in the open source project. So unless you compile and install the applications yourself, there's no way of knowing anything.
Apple has been building their brand on privacy and trust for at least a couple of years now. Can you be sure they're not sending everything to the NSA? Of course not. But they also make their money by directly charging users for services unlike the ad-based companies. There have also been many attempts by various governments to publicly force Apple to insert backdoors or prevent them from fixing security vulnerabilities which have failed.
> But they also make their money by directly charging users for services unlike the ad-based companies.
this does not make them more trustworthy
> There have also been many attempts by various governments to publicly force Apple to insert backdoors or prevent them from fixing security vulnerabilities which have failed.
Apple's privacy is a marketing farce. They run data centers in China that provide full access to the government. Their anti-ad campaign was simply a push to gain dominance in that space themselves. They make a big fuss about end-to-end encryption but don't even bother to end-to-end encrypt your photos and backups!
I actually worked at Apple a few years ago, in security. I wondered why we didn't E2EE photos. The reason, from what other engineers told me, seemed to be that it was at the behest of law enforcement. It's a lot easier to cooperate with LE and comply with NSLs when you can simply hand over the data they need.
Until Apple end-to-end encrypts these two things, it's all for naught. It doesn't fucking matter if your HomeKit data is E2EE if someone can take a look at your nudes without any cryptographic barrier.
Take that for what you will. Having worked at both companies during my career in a security capacity, I see no reason to trust one over the other wrt cloud services.
N.B. There are people at Apple that are very passionate about security and privacy. I was privileged to work with these people during my career. They really try to - and do - make a difference. My post is not an attack on them, but on the wider vision of the company, which is somewhat hypocritical.
I really need you to understand the difference between their marketing claims and reality. Apple is really not the champion for privacy they claim to be beyond the extent that they can try and hurt Google in their marketing.
If you're running iOS then I always assume the random number generator is backdoored by the NSA anyway. That's got to be the single juiciest target going; frankly if the NSA haven't backdoored that then what are they even spending tax dollars on?
It's not a realistic danger, just fear mongering. I'm not sure why people on HN feel the need to go after Signal so hard. I do think criticism is important (and Signal definitely deserves plenty), but these types of criticisms are off base and not specific to Signal, nor are they that relevant (kind of like how people reply to Signal's tweets about Iran complaining about the lack of usernames: not the time nor the place).
It isn't meaningfully different from saying that Google/Apple could pretend to put the real app in the App Store but replace it with one that has a backdoor. This is entirely possible. But the risk of getting caught is extremely high, and people do decompile apps like Signal, WhatsApp, and Telegram (albeit this can only go so far). These are all high-profile and highly scrutinized apps. It is just fear mongering.
No one knows for sure, though compromised compilers are not far-fetched; there has always been implicit trust in compiler toolchains. Reproducible builds are a few years out from full general adoption.
Also, I remember people in the '90s talking about a virus that infected Pascal source code files. My memory of it is spotty.
> While I don't know if the current incarnations of Nix/Guix will succeed, I think we are slowly making progress towards reproducible builds everywhere.
There was an interesting case where a bunch of Android messenger things got a WebRTC based remote code execution[1]. Signal got dinged to the extent that an attacker could trigger it with no action on the user's part.
The root problem here is that users want lots of features. Each added feature, particularly super-complex ones like video, takes away from security. There is no point in spending a lot of time on your own code if you are going to end up invoking a whole lot of code that you can't control.
Off topic: why doesn't WhatsApp give the option to block all calls and texts by default? That way, I could talk only with the folks I want. The Signal app has that option. Random businesses can send you texts promoting their shitty services (typically, your number is grabbed from data brokers or leaks). Of course, you can block and report such spam, but there is no DND option right now.
Edit: I forgot to mention that almost all spam is from verified WhatsApp business accounts. So I believe they/FB are selling data directly under their updated ToS.
I am one of these "WhatsApp spammers" (well, I don't consider myself a spammer but you might!).
We sell financial services in a developing country. We're not a mobile app; we're just a mobile-first website (a common gripe on HN is "there are too many apps, just make a website"; well, we're one of them).
We need to be able to get in touch with our customers for transactional purposes (changes to their account, delivery notifications, login links, that sort of thing). Our customers don't have email. SMS gets filtered at the phone level (and uses untrustworthy, shared numbers). The only option is WhatsApp.
Most of the world does not have a computer, they have a phone. So at this point it's either WA or a native app + push notifications. Which would you prefer?
Just for reference, Facebook has pretty strict guidelines for sending unsolicited messages.
In order for us to send you an unsolicited message, that message must use a preapproved template. Those templates are not supposed to be used for marketing purposes (although it's easy enough to craft a seemingly transactional template that is actually marketing), and there are also some cases that are a bit of a gray area.
However, in our experience, users are brutal about flagging messages as spam, and Facebook has pretty strict deliverability rules. If your quality rating drops, your messages stop being delivered.
I think unsolicited was the wrong word. "Transactional messages that are not a reply" is better (like a login shortcode, or a 'payment processed' message)
Want to talk about how the WhatsApp client on macOS (and probably also Windows) by default shows your webcam feed on screen if someone video-calls you? That way, if you are sharing your screen and someone happens to call you, everyone will suddenly be able to see you, without warning.
For me that is such an enormous privacy violation that I removed the client (which is also a memory hog) and now use only the browser version.
Definitely not, but I was referring to the macOS version. AFAIK you always need to have the app installed on a phone that is connected to the internet, but things may have changed since I last checked. It doesn't bother me much on the phone, since I have never shared my screen there, but on computers it's a real concern.
Wouldn't that actually help? As another commenter said, some apps do allow blocking random callers. So, presumably, such an app could be used instead of WhatsApp while still being able to contact people on that network. Kind of like in the '00s, when you could use Pidgin or some other third-party app to avoid the annoyances of MSN or Yahoo Messenger.
iOS now provides this as an OS feature ("Focus"). You can block notifications from all but certain apps and/or all but certain contacts. And the contacts feature works with WhatsApp.
On this subject, I like to quote Pavel Durov, the founder of Telegram:
"Since the creation of WhatsApp, there's hardly been a moment in which it was secure: every few months researchers uncover a new security issue in the app. I wrote about this in detail 2 years ago (read here if you missed it). Nothing has changed since then.
It would be hard to believe that the technical team of WhatsApp is so consistently incompetent. Telegram, a far more sophisticated app, has never had security issues of such severity."
I strongly dislike this perspective and find it naive. It is similar to saying Mac is more secure than Windows. WhatsApp is a huge target compared to Telegram.
I guarantee you that if we all switched to Telegram, nothing would change, and I would bet money these exploits boil down to open source libraries commonly used in these apps.
It does not pay to be highbrow about security. Even Chrome, with all its investment in security, gets pwned on a regular basis.
I wonder if someone more informed could help me understand Telegram's business model, as I don't think I could rightly describe the startup and product in a way that wouldn't sound like I was casting aspersions.
Why would anyone use Telegram over something end to end encrypted, like Signal, Matrix, WhatsApp, Facebook Messenger, etc.?
I’ve tried all of the apps you listed and they all have significantly less polished UX, except perhaps for Messenger. In an alternative universe, I could very well be using Messenger.
My personal assessment is that if you have to communicate something that must not ever leak out, you shouldn’t use a chat app at all, period — because in many many cases my interlocutor is less careful than I am (or their degree of carefulness is unknown). You can use an E2E video app but not a chat app. Telegram’s video is E2E.
If my entire Telegram history leaks out, I estimate that I’ll be in a bit of trouble, but not significant trouble.
Of course, I might be wrong. In fact, while writing this comment I realized that the risk is probably somewhat bigger than I think it is, and in an ideal world using E2E would be advisable.
However, this isn’t “why you should use Telegram” but rather “why do you use Telegram”, so this is why I use it — significantly better UX, partly network effect, and partly that leaking my entire history is not even in the top 100 worries I have in life.
It has features that regular users really, really like. Not having to associate the account with a phone number, scheduled messages, groups/channels with thousands of users, the ability to program bots, silent messages, editable messages, ...
Some people care more about these than security or privacy. It's that simple.
As for monetization, I believe they have premium stickers and such.
I think the irony is that so many attack Signal for pursuing more features. While they aren't features I personally care about, I do recognize that I can't have secure communications with people who are unwilling to use secure means of messaging. While I want anonymous identities (not actual usernames akin to what we have here), I do think the social graph is far more important. Not that you can't work on both at the same time (though Telegram and WA have significantly more developers).
Hey, I'd love to hear that one. Moxie has been around for a long time. If somebody has rationalizations for everything he released being broken, together with the talk of him being part of Mossad, that should be a fun read.
> The Russian government hates him too.
Telegram is one of the few popular messengers that is NOT blocked/prohibited in Russia. So the government and Durov have some agreement.
Russia’s main security agency, the FSB (a successor to the KGB) has branded Telegram the messenger of choice for “international terrorist organizations in Russia.”
The government’s first attempts to ban it, a year ago, resulted in entire sections of the web, online stores, services—even the Kremlin museum’s ticket sales—being inadvertently blocked. But the messaging app has adopted a clever system of changing IP addresses that currently outsmarts the government ban.
Meanwhile, users have continued to access Telegram through VPNs, or virtual private networks, which have become increasingly popular.
It is difficult or impossible to block Telegram in Russia.
> Russia’s main security agency, the FSB (a successor to the KGB) has branded Telegram the messenger of choice for “international terrorist organizations in Russia.”
Telegram implements video calling using a bunch of sketchy C code, same as WhatsApp and Signal. There's no reason to think it's less vulnerable to these sorts of bugs.
Sqlite has had multiple CVEs featuring use-after-free, heap overflows, usage of null pointers, use of uninitialized memory, and array bounds overflows. [1]
Coming from an app with a quarter of the users (which is to say, it has been less of a subject of investigation). "Far more sophisticated," also? What does that mean?
If WhatsApp has been adding these issues voluntarily, or has been targeted somehow, I would love to dig into research related to that. I'll check out the details of this attack in a few hours.
This perspective seems extreme given the current evidence, though. Switch to something like Matrix for sure, though u.u
Edit: I'm not a proponent of WhatsApp. I just understand that Telegram also isn't the best, and it has a good incentive to shit on WhatsApp.
> It would be hard to believe that the technical team of WhatsApp is so consistently incompetent. Telegram, a far more sophisticated app, has never had security issues of such severity.
This says a lot more about the technical competence of Pavel Durov than it does of the WhatsApp team.
There are several good reasons why WhatsApp bugs sell for $1.5 million and Telegram bugs sell for only $500k. It mostly comes down to supply and demand.
Waiting for the time when I can use just my Matrix/Element and still talk to WhatsApp or Instagram or Snapchat users without creating and maintaining accounts there.
It's going to take nothing short of massive legal action to get any sort of competitive compatibility like that. As much as I wish for that to happen, my hopes aren't very high. So until then, I'll keep chugging along on whatever open solutions I can, hoping that my small contribution to network effects will help steer things down the line.