That surprises me. I don't trust Chrome for confidentiality; I assume it collects data for Google and I don't know that it protects my data from others.
If Chrome isn't trustworthy for confidentiality, that would seem to fatally cripple the security of Signal Desktop. However, I believe the people at Whisper Systems would see such an obvious flaw, so I suspect I'm misunderstanding something: what is it?
Edit: I tried to word the above as neutrally as possible; I believe this approach has both pros and cons.
Why is Telegram always under fire for its flawed security challenge despite its good-enough track record, then? It also has excellent usability, native apps, third-party clients, etc. If security is not the first priority, then Signal isn't very attractive compared to the competition, I think.
Here's an example, from adc and Juliano Rizzo, who co-discovered the TLS BEAST and CRIME vulnerabilities:
I could have compared it to Threema as well, actually.
If you care about security, use Signal. If you care about UX, and Threema has a better UX, use Threema. Threema is a closed-source system that apparently relies on NaCl. That is a much better answer than the one Telegram can give, but it still leaves a whole lot of questions unanswered. There's a lot that can go wrong in the layers above NaCl.
Signal is pretty up front about being "good enough" security.
You may have missed parts of their track record; part of the problem is that Telegram tends to hide these types of things in its past:
"Telegram protocol defeated. Authors are going to modify crypto-algorithm" https://news.ycombinator.com/item?id=6948742
1. In Telegram's threat model the servers are not trusted.
2. You can't verify server side code anyway.
> The official source code of the app contains binary blobs, so this tracks a fork which builds those from source. Hence, versions might become available with a certain lag.
1. Simple encryption for everyone to prevent mass data collection
2. Prevent targeted collection for the crypto enthusiasts
The main focus is on step 1 now, and they are doing a great job at it. Adoption is slow, like anything new, but I managed to convince my parents to switch, so that means it's working!
Google might be the second biggest mass data collector, after the U.S. government. Using Chrome, one of Google's collection tools (if I understand correctly), wouldn't seem to further the goal of preventing mass data collection.
Or is this idle speculation?
In other words, you have to trust Google to not insert such a trojan by itself and to not bow down to the U.S. government should it try to secretly force Google to do so.
You also have to trust that their security is good enough to prevent third parties from covertly performing such a feat.
That's not to say that Google should not be trusted, or should be trusted less than other parties, but that's the threat.
This is not an unrealistic example.
"Accidentally" logging keys is possible, as Google can modify the content of the JS on request – for example, in case of being served an NSL, Google can be forced to modify the JS to log that.
Since updates are automatic, Google could simply add analytics to the browser, and even publish it as something "good".
You can take the app and make your own standalone runtime version if you want, then audit and test your standalone version to make sure it's only communicating with Open Whisper Systems.
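A rough sketch of that audit step, assuming you've already dumped the standalone app's remote endpoints (e.g. from `lsof -i -n` or a logging proxy) into a file; the file name and the allowed domains here are illustrative assumptions, not a verified list of the service's actual endpoints:

```shell
# Hypothetical dump of hostnames the app connected to.
cat > endpoints.txt <<'EOF'
textsecure-service.whispersystems.org
storage.signal.org
EOF

# Allow-list of expected service domains (an assumption for this sketch).
allowed='(\.|^)(whispersystems\.org|signal\.org)$'

# Anything NOT matching the allow-list is flagged as unexpected.
unexpected=$(grep -Ev "$allowed" endpoints.txt || true)
if [ -z "$unexpected" ]; then
  echo "only expected endpoints"
else
  echo "unexpected: $unexpected"
fi
```

Run against a real capture, any hostname outside the allow-list would be printed for manual inspection.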