Also, WhatsApp is such an obvious target for a state actor. I saw several articles over the last year that mentioned Jared Kushner using WhatsApp, so I assume a lot of government folks use it for off-the-books "encrypted" communication.
About the partnership:
But of course, in this case the issue seems to be either compromise of the device(s) via zero-days, with WhatsApp usage simply marking the targets - and/or leveraging a zero-day in WhatsApp for full device compromise.
It's unlikely Signal would be immune - they didn't crack the encryption, they cracked the app/OS.
In olden times the vector might have been a font, or a gif.
The only advantage Signal has is a conservative interface and a small userbase. I'm not sure if they do some kind of hard-line whitelisting of attachments, though - if you can pack an exploit into a file, I'm pretty sure you could send it via Signal.
Signal is open source, WhatsApp is not. So how did you determine Signal has only one advantage over WhatsApp, without access to the WhatsApp source code?
Nothing seems to indicate a back door here.
1. Do you trust Facebook (or open whisper systems) with your metadata/expect them to delete it?
2. How likely are there to be bugs (in the app, not in the protocol itself) which lead to exploits? On the one hand, WhatsApp probably has more people working on the app, and likely more security people too. On the other hand, they may be pushed to add more features, and lots of code churn may introduce security holes.
3. How much work will be put into exploiting each app? On the one hand more people use WhatsApp, but on the other, I guess security-conscious people may be more likely to use Signal.
A known WhatsApp exploit happened due to point 2, with a bug in how audio calls were initiated. I don’t really have a good guess as to how the apps compare on points 2 and 3, but I guess WhatsApp loses on point 1. A more practical point is that it’s likely easier to convince someone to use WhatsApp than Signal, especially for group chat.
(Widely reported, see Guardian article)
From WhatsApp’s point of view this was a reasonable UX trade-off. It is a major pain point of Signal when it does this (particularly in group chats, where it is more likely to happen).
But I agree that from a strictly security-focused point of view this is a disadvantage for WhatsApp.
Even Moxie, who created this stuff, more or less admitted this by saying the rekeying notification is the only defense - but that one is off by default in WhatsApp last I checked (which Moxie confirmed), which makes WhatsApp insecure by default at the very least. I wouldn't be surprised if WhatsApp servers know whether this notification setting is on or off, which would enable them to e.g. target only people with the insecure default setting, to avoid detection.
I already said this in , but let's repeat it: this is essentially the same as a web browser just accepting any TLS certificate without showing a warning, regardless of its validity or issuer trust.
Sure, this is a hard problem to solve UX-wise and user-education-wise, but that doesn't excuse advertising your known-and-deliberately-insecure-by-default, default-MITMable product as "secure communication using end-to-end encryption".
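The browser analogy can be made concrete with Python's stdlib `ssl` module. A minimal sketch (not WhatsApp's actual code, obviously): the second context is the moral equivalent of silently accepting a rekey.

```python
import ssl

# Normal browser-like behavior: certificates are validated against
# trusted issuers, and hostname mismatches raise an error.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname is True

# The analogue of the silent-rekey default: accept any certificate
# from anyone, with no warning ever shown to the user.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False          # must be disabled first
insecure_ctx.verify_mode = ssl.CERT_NONE     # any cert, any issuer, no prompt

print(insecure_ctx.verify_mode == ssl.CERT_NONE)  # True: MITMable by default
```

Browsers refuse to ship the second configuration by default precisely because a silent MITM is indistinguishable from a legitimate peer.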
Personally, I cannot imagine a human rights activist having the rekeying notification set to OFF.
>Personally, I cannot imagine a human rights activist having the rekeying notification set to OFF.
I can. A lot of those people are not tech savvy. And the targets of e.g. the most recent NSO story weren't just activists, but a lot of other people too, politicians, state officials, lawyers, journalists, etc.
And on top of that, this system becomes MITMable as soon as one of the communicating parties has notifications off (or ignores them, which then comes back to the UX and education issue).
From a career in software development, I tend to feel that the more devs, the buggier. Maybe, MAYBE (number of QA)/(number of devs) = reliability coefficient.
Oversimplification: WhatsApp is based on Signal, but repurposed for Facebook.
Your best bet is to use something obscure so you cannot easily be targeted, or to not be connected to the same network at all.
In the Israeli army we had phones that ran completely separate software stacks and talked to a different network for this. Nothing was on the internet.
Of course it is better to not be on the public internet with any sensitive device if at all possible. Anything on the internet is considered public.
You have fewer eyeballs looking for exploits on the good guys' side as well, so problems stay open.
OTOH, we usually only hear about the failed attempts, so there's a selection bias.
I agree about the low profile being necessary (even if not sufficient).
When you want the biggest circle open source is safer. If your circle is small closed will be safer.
But in any case, the original comment was using 'obscure' in the sense of uncommon not in the sense of 'secret'. As far as I can tell.
Nobody listens though, until something like this happens.
The actual flaw was this one: https://nvd.nist.gov/vuln/detail/CVE-2019-3568
There is already an official backdoor - or how else should I understand that?
Don't like it? Go with a security-minded service like Signal. Or, better yet, something totally serverless and open source.
I.e. keeping a continuous web client active which the user isn't notified about.
Having a phone number is almost the same as having a person's location and identity. That's why really secure messengers won't require a phone number.
There's no security-minded service which uses Google services.
What does that even mean? There are so many different kinds of metadata I'm not sure why it's useful to compare Telecom metadata collection to Whatsapp.
They literally moved off it a few days ago, when the shit hit the fan, to more secure software, because Facebook is targeting NSO employees.
Many of the Indian activists, journalists, and lawyers who were targeted work for low-caste victims of false cases filed against them. John from Citizen Lab personally called them and told them, 'Your government attacked you'.
If anyone from Citizen Lab/WhatsApp is reading this, please corroborate with evidence that the governments themselves spied on their citizens when these activists sue them in court.
Here's a translated page.
*I know, I know, this probably wasn't done with a backdoor -- it's just funnier to lie in this context.
“A rogue big tech employee puts on a grey hat to orchestrate a hack + personal blackmail on government officials, to instill a deep fear out of them and make them realize the folly of refusing end-to-end encryption.”
*With the exception of people forced to work to death, or under unsafe conditions, or that were doing a perfectly reasonable amount of hard work that none the less triggered a heart attack. Or...
Personal means, and should continue to mean, something. I'm not willing to let governments define what is personal to me.
Up front the prospect of even greater damage might deter people.
Similar to how punishing wrong-doers in general doesn't help anyone after the fact, but one justification people bring up is deterrence before the fact.
(I profess no firm opinion in the matter. This is just giving the argument a steelmanning.)
It's one thing to say "they shouldn't use a facebook channel to talk securely", it's another to say "they shouldn't use a facebook channel on the same device as they use another channel to talk securely". My understanding of this was that it is the latter.
Unfortunately people often don't have the luxury of doing the latter.
> Unfortunately people often don't have the luxury of doing the latter.
Back when I first heard that, the bad guys were bored and/or hyperactive teenagers who understood how computers work. Today we are talking nation-state actors and nation-state-sponsored actors with practically unlimited resources.
Keeping your code secret will not stop them, cf. WhatsApp. "Responsible disclosure" will not stop them either, and sets up all the wrong kinds of incentives for vendors to sit on problems until just before the "responsible" disclosure window closes. And FFS, there are still grown adults that believe building backdoors into E2E won't be exploited by the bad guys first, insofar as they are not the bad guys themselves.
I only have questions, not answers, and I don't know what we do from here.
It will take time, but generations are slowly waking up to the bullshit. We need to be able to run open source code on servers in a transparent and auditable way that everyone can understand.
I see services like WhatsApp taking the stance that it is essential. It's the only place I (barely) feel like I can speak my mind when chatting with non-tech-savvy friends (whom it would be a hard sell to get to install anything else more rock solid). I really hope we can adopt widespread E2E-encrypted chat platforms moving forward and don't regress on this front, else it will be a sad future.
Or Signal, but without the phone number signup
If that's the case, the articles are focusing too much on WhatsApp's failure and not enough on the failure of the Android OS. To me there is some kind of shared responsibility between the app and the OS here.
Who knows how many CVEs are hiding in the Signal and Riot.im apps? And Riot.im asks for many permissions...
This seems to imply that the hackers were able to escape from the WhatsApp mobile app to perform other actions on the phones.
How would this be possible?
Or is this just likely careless journalism, and the exploit was that the server breach allowed the attackers to exfiltrate WhatsApp data only?
What WhatsApp & CitizenLab said to the victims in India about the attack.
The actual complaint reads: "Between in and around April 2019 and May 2019, Defendants used WhatsApp servers, located in the United States and elsewhere, to send malware to approximately 1,400 mobile phones and devices (“Target Devices”). Defendants’ malware was designed to infect the Target Devices for the purpose of conducting surveillance of specific WhatsApp users (“Target Users”). Unable to break WhatsApp’s end-to-end encryption, Defendants developed their malware in order to access messages and other communications after they were decrypted on Target Devices. Defendants’ actions were not authorized by Plaintiffs and were in violation of WhatsApp’s Terms of Service. In May 2019, Plaintiffs detected and stopped Defendants’ unauthorized access and abuse of the WhatsApp Service and computers."
In other words: WhatsApp does not allege a server breach. It alleges that phones were hacked via carefully crafted messages to trigger an exploit on the phones, but to do that, hackers sent the messages via WhatsApp servers. They mention that, because otherwise they would have less of an argument to sue over. As far as I can tell, they're alleging that NSO Group was part of a scheme to interfere with the WhatsApp relay servers and to break the TOS, stated several different ways under different theories and laws.
This one: https://nvd.nist.gov/vuln/detail/CVE-2019-3568
Anecdotally, from a friend at WhatsApp, their engineering has been distracted by integration with Facebook. Holes that would have been patched in an independent WhatsApp may have been left to fester in the now-Facebook division.
Another comment mentions Pegasus... that was an iOS exploit patched in 9.3.5 (3 years ago). Does that line up with the timeline of this article?
Given that Android exploits are far more common than iOS, I would expect they had one of those too.
But then, where are Apple and Google in this case? It wasn't solely Facebook who was exploited; their app was just the initial vector to escalate to an attack on the OS. NSO probably could have achieved the same with dozens of other apps, but WhatsApp was chosen because of ease of deliverability (messaging) and popularity.
The Pixel was the only device that was not pwned in the 2017 Mobile Pwn2Own competition - the iPhone, running iOS 11.1, was exploited 4 times via both WiFi and Safari.
I'd be incredibly surprised if NSO were able to compromise an up-to-date Pixel phone.
(That’s not entirely snark. I do wonder how high risk individuals are choosing which devices they use for communication.)
Apple has single-handedly done the internet a gross disservice by not making it easy for users to set up webhooks for push notifications. If it weren't for that, chat would have a chance at being sane.
Of course, I don't think this particular report is correct. I am sure the US spies on its allies like all big countries do, but I doubt it would need NSO - and given that the US regulates the body that regulates who NSO is allowed to sell to, I doubt it.
But don't just take my word for it -- here's a good place to start your own research:
Another possibility is them faking a public announcement to all WhatsApp users about some update or something. That announcement could then contain an image that exploits a vulnerability in the operating system's image-decoder libraries. Or something like that. Without further information you can only speculate, but what's certain is that hacking the WhatsApp servers gives them ways to hack the phones that they previously didn't have.
What makes it notable is which app it was found in, the reach of that app's userbase, and that a company was selling these exploit services.
In the early days of android, it was also possible to have a rogue google update service on a wifi AP, so when android connected to it you could push an update with malware.
In this case, it seems like WhatsApp itself was compromised; malware was distributed through the server directly to the app, or something along those lines. (Y'know how you can dynamically change apps nowadays using CodePush, etc.? If WhatsApp had something like that in their pipeline, and that thing was compromised, and that compromise could be targeted at specific devices, then no amount of encryption can save you.)
There are only about 8 million people in the whole country. There are plenty of cities with more people than that.
How come they punch so far above their weight?
Similar cultural enclaves exist elsewhere in the world and are presumably similarly effective, but they just aren't as large a portion of the population.
Even small cultural differences can have outsized effects because of pollination-- e.g. it's easier to learn from an expert in a domain if there are more experts around you.
I think your question isn't really that much different to asking why the bay area has "punched above its weight" in terms of producing successful tech companies compared to other regions around the US... or why other regions have had a lot more success in other domains.
The difference with Israel, specifically in regards to hacking, is purely the amount of government support and protection independent actors receive for things that might otherwise be considered nefarious in other countries.
So these cultural effects will snowball.
I don’t think it’s that uncommon.
You can probably say something similar about Finland or Ireland.
Being civilized seems to be a low priority in the civilized world.
When I worked in the radar business, we couldn't just sell them to anyone.
Our punch is above our weight because of hard work and skepticism.
That's not the terrible stuff like NSO. NSO is just a clever way for the government (and the US government - don't forget) to export military technology, regulate it, and then claim they didn't do it - a private company did.
Virtually everyone in NSO was in 8200, this is a way for the US and its allies to get around military trade issues.
If you ever get the chance to talk to someone from Israel, especially someone in tech who was raised in Israel, I recommend it. Ask them about where they went to school. You'll learn a lot.
Is that so?!
I’m an Israeli and I assumed that it’s the really bad education that is responsible for stronger skepticism and Chutzpah.
In any case, it's far away from being a "sweatshop", technology companies are a nice gig.
(Normally I get flagged on HN for saying things like this, would love an explanation if it happens this time.)
I don't have the numbers, but I would be very surprised if the majority of Israeli ex-military cyber-ops people sold spyware to authoritarian regimes. A few did do that, but I don't think it's justified to blame the military service and whatnot for this.
Another thing to remember is that an Italian company also sold spyware to authoritarian regimes. The things they did were really similar to the Israeli company's. Same excuses, same PR, same blah blah... but without the compulsory military service and the other reasons you listed.
For instance, we tech workers share a very distinct culture from the general culture. Maybe the general culture (or a big subpart of it) in Israel is closer to the tech culture, thus spurring more tech development.
It doesn't seem plausible that war or murder is a positive force on someone's character or natural ability. There are many great tragedies where the flower of a nation's manhood has been lost in war, leaving the country in a worse position to grow its populace. The Battle of the Somme, for one.
Most religions and ethnic groups have at least one persecution story, if not centuries of them; I don't see the evidence that it leads to a uniformly positive increase in outcomes.
Wow! Does it work too for the blacks? Maybe, blacks run fast to escape from whites on electric cars!! And women! Women are prettier than men to manipulate men!!
Please, don't do that. Please DON’T.
> "Nobel Prizes[note 1] have been awarded to over 900 individuals, of whom at least 20% were Jews, although the Jewish population comprises less than 0.2% of the world's population"
This is also true in my personal experience as well. A lot of my professors had Jewish ancestry and are some of the smartest people that I know. A lot of famous computer scientists have Jewish ancestry too (Sussman, Stallman, etc).
Sort of similar to how “Asian parents push their children to become doctors.” It’s not necessarily that Asians are smarter, but they do work hard to achieve what they want… statistically speaking.
Of course, that is an extreme. But assigning intelligence to a "gene" and excluding any kind of environmental or societal/cultural reason for this kind of outlier is not very "smart"
I simply and in good faith answered jonplackett's question based on my personal opinion and observations. My intention was not to provoke, although I do realise that this is a sensitive topic for some.
> But assigning intelligence to a "gene" and excluding any kind of environmental or societal/cultural reason for this kind of outlier is not very "smart"
Certainly, I do not believe that genetics is the only factor that affects intelligence. That being said, in the same spirit, I think that excluding genetics as a potential factor just because it makes some feel uncomfortable is not very smart.
The people with the greatest influence over our language have succeeded in merging the two concepts, walling them both off from polite discourse for reasons known only to themselves.
I could be wrong ofc, but I was under the impression that it is widely accepted in the scientific community that intelligence is at least partially affected by genes.
> saying that one race is smarter directly implies that another race is dumber
Note: I said on average.
A standard deviation increase in the average leads to a huge difference in demographics at the extremes. The population with the lower mean will have the majority of the lower tail of the distribution, and the other population will dominate the upper tail. In the highest 1% they will appear enormously over-represented.
The middle is where it matters least, because near-average people from both populations might have very similar IQs.
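The tail effect is easy to check numerically with Python's stdlib `statistics.NormalDist`. The means, SD, and cutoff here are illustrative (an IQ-style scale with SD 15), not figures from any paper:

```python
from statistics import NormalDist

# Two populations with the same SD, means one SD apart.
a = NormalDist(mu=100, sigma=15)   # baseline population
b = NormalDist(mu=115, sigma=15)   # mean shifted up by one SD

cutoff = 145  # roughly the top ~0.1% of the baseline population

frac_a = 1 - a.cdf(cutoff)  # share of population A above the cutoff
frac_b = 1 - b.cdf(cutoff)  # share of population B above the cutoff

print(f"A above {cutoff}: {frac_a:.4%}")
print(f"B above {cutoff}: {frac_b:.4%}")
print(f"over-representation ratio: {frac_b / frac_a:.1f}x")
```

With these illustrative numbers the shifted population is over-represented above the cutoff by more than an order of magnitude, even though the two distributions overlap heavily near the middle.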
For a paper on the issue of specifically Ashkenazi (rather than 'Jewish') IQ, I recommend this one from the University of Utah:
Note the increased IQ is specifically for Ashkenazi Jews, as opposed to, say, Sephardic Jews. The paper identifies historical social issues that would make selection for intelligence a strong factor in Ashkenazi history, explores possible genetic causes identified through unique mutations not present in adjacent populations, and goes into depth on IQ and IQ heritability.
A high-level summary is available from The Economist, behind a paywall but viewable here:
I hope you find it interesting. I think it's scientific enough that few people should find it controversial, except those who believe that IQ is not a useful measurement of intelligence - which I don't think is an opinion you share, given your previous posts.
The controversial part would be Jordan B Peterson talking about it here:
No matter your feelings on him, worth a watch.
> Oversec constantly monitors the text on your screen. When it finds an encrypted text, it tries to decrypt it and then shows the decrypted text as an overlay in place of the encrypted text.
> In order to encrypt a text, Oversec shows a button next to an active input field. After having entered the secret text, tapping that button makes Oversec read the text, encrypt it and put back the encrypted text into the field. It is now ready to be sent in the subjacent app as usual - the app doesn't even know that it is sending encrypted data!
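The flow Oversec describes - encrypt in place, then let the host app send opaque text it can't read - can be sketched in a few lines of Python. This is a deliberately insecure toy cipher (a SHA-256 XOR keystream) purely to show the data flow; Oversec itself uses real crypto (AES/OpenPGP), and the function names here are made up:

```python
import base64
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 of key + counter, repeated. NOT secure -
    # this only illustrates the encrypt-then-paste flow.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_for_input_field(plaintext: str, key: bytes) -> str:
    # What Oversec's button would do: replace the typed text with
    # ciphertext that the host app sends as ordinary text.
    data = plaintext.encode()
    ks = _keystream(key, len(data))
    cipher = bytes(a ^ b for a, b in zip(data, ks))
    return base64.b64encode(cipher).decode()  # plain text, app-safe

def decrypt_overlay(token: str, key: bytes) -> str:
    # What the overlay on the receiving screen would show.
    cipher = base64.b64decode(token)
    ks = _keystream(key, len(cipher))
    return bytes(a ^ b for a, b in zip(cipher, ks)).decode()

key = b"shared secret"
token = encrypt_for_input_field("meet at noon", key)
print(token)                        # what the host app actually transmits
print(decrypt_overlay(token, key))  # what the overlay displays
```

The host app genuinely never sees plaintext, which is the whole point: the messenger becomes a dumb transport.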
Edit: Created a separate submission at https://news.ycombinator.com/item?id=21414464
Additionally, Signal has had a long history of being feature-hostile to strongly secure use, through things like making it very difficult to cryptographically verify the identity of the party you're talking to... or automatically resending the last message you sent when the far end merely claims its key has changed.
I recommend people treat Signal as unencrypted communications - _actually_ unencrypted private communications are too absurdly insecure to use. But in practice Signal does not provide the kind of strong security that we would associate with 'encrypted communication', and maybe UI considerations make that an unrealistic goal. Instead, Signal provides the kind of security we should expect from _any_ communication, but which isn't actually provided due to pervasive surveillance.
I have and use Signal with some friends, but there are also loads of people I communicate with who couldn't even tell you what open source software is, let alone articulate a preference for it over good UX.
Are all of your friends software engineers and/or technophiles?
Edit in case of potential ambiguity: s/a couple of/2
When all your friends are already using Facebook and you start telling them to use Signal instead... well, I can tell you from experience that it's almost impossible to break the status quo.
What has happened to me is that usually there's 1-2 persons from each friend group who care enough that they will relay information to you through Signal.
I still don't have a friend group that is 100% Signal. For that to happen, more than 50% of the group would need to care enough about privacy to completely abandon other communication channels and accept the cost of switching platforms. The rest would probably follow. In reality, I don't have a single non-tech friend who would give a fuck about encryption. You tell them about Signal and they go "cool", that's it.
Plenty of us still prefer using command-line to this very day. Most of my work is still done on DOS 6.22.
Most new social platforms that make it big don't really take over older ones, they just grab a younger generation - usually just by being the network their parents aren't in.
It's network effects
In my own anecdotal experience, Signal ranks way below Viber (popular with migrants and expats), Wickr (popular with people doing illegal things and corporate-executive scheming), and Telegram (popular with crypto people, scammers, and terrorists).
The only real broad use of Signal I've seen is amongst journalists - and even there I'm not certain how much they actually use it, or if it's just the "I'm crypto-aware" version of a blue checkmark for their Twitter profiles.
Signal's lack of a web interface is another reason.
Moreover, Telegram took the users that left WhatsApp for more secure alternatives (even though Telegram's home-grown encryption doesn't look promising).
Same reasons why a lot of people don't leave Facebook.
A lot of Signal is basically "trust Moxie".
Let me come out and defend Signal
(I usually defend Telegram, but I don't think we should be unfair to anyone):
As far as I am aware, no one who knows what they are talking about has come out with anything that says Signal's end-to-end encryption is broken.
If I have understood it correctly, then as long as that is true, the NSA, the FSB, and the Chinese could be running the message handling together and there would still be no reason to worry that your messages will be intercepted in transit.
- as far as I am aware Signal is the safest messenger available for everyone
- even if all the above is true you are still trusting them with your metadata. I think they are good people. If you are scared of them, be aware that they know who you talk to and when. This is however true for any mainstream technology as far as I am aware.
- being good at crypto doesn't make them immune to bugs. There was a nasty vulnerability a few months ago that was remotely exploitable. Again, this is the same, or even worse for every other messenger.
You should check out XMPP with OMEMO; it has libre servers and is federated.
It's too bad bitmessage can't scale :/
In the modern world we basically outsource everything, including trust and verification. An open, social process of verification can be better, though.
So you can't just generate an MD5 of your APK and match it against the store description, like in the good old days when you could make sure your Linux ISO was legit - but there's probably some way to make it work?
EDIT: It might be possible to circumvent Google's bundling/optimizing by just uploading a regular old APK, but IIRC that was becoming more difficult these days. Unfortunately I'm not an Android dev expert.
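For the ISO-style verification mentioned above, hashing the file itself is straightforward; the catch, as noted, is that Google's bundling/re-signing means the digest you compute locally may not match the binary the Play Store actually delivered. A sketch using Python's stdlib (the APK filename is hypothetical):

```python
import hashlib

def file_digest(path: str, algo: str = "sha256", chunk: int = 1 << 20) -> str:
    # Stream the file in chunks so a large APK doesn't need to fit in memory.
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Usage: compare against a digest the developer publishes out-of-band.
# print(file_digest("Signal-website-universal-release.apk"))
```

SHA-256 rather than MD5, since MD5 collisions are practical these days; but the hard part remains obtaining a trustworthy reference digest, not computing the local one.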
For me, I look at encryption as a mitigation for surveillance. Anything that increases the marginal cost to monitor an individual makes broad surveillance less economic.
Signal will always have the edge against surveillance, due to the relative difficulty of hiding a back door. WhatsApp will always be suspect in that they could easily be forwarding everyone’s messages to third parties.
Some despotic regimes do have large kidnap-and-murder programs (ex Rwanda) but if you just want to keep tabs on exiled dissidents and learn exactly who they're talking with back home, NSO Group has a product for you.
Plus, why would you hire a team of people to kidnap a citizen and beat them when you can assign a ticket to a government blackhat at the NSA who will run the commands against your devices and take what they need without you ever knowing.
Even then, there is substantial risk of whistleblowing for illegal data collection against citizens (Snowden et al) so they would instead in a clandestine manner ask a fellow member of the Five Eyes to perform the surveillance "legally".
Our society has known about Five Eye roundabout spy agreements for a long time and has largely shrugged, so the risk of public political blowback doing this would be minimal.
I don't really worry about the spy agencies themselves -- I am not of any interest to them.
However, I worry a lot about the likes of NSO and the tools they produce. They are likely to end up being used, in one form or another, by criminals and corporations.
Indeed. I think I covered that in "criminal" category, but perhaps I should have been more explicit.
Sanctions on selling exploits seem easier to achieve, though, since there is less of a conflict with economic interests.
However, if we're playing poker and I learn your tell, it's in my best interest that you are naive to that fact. While not the best analogy, I would think that the same concept would apply to state actors.