I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebook's motivations and involvement with government? This feature is a massive boost for intelligence services dealing with unsophisticated actors. It reduces the haystack significantly, with users self-flagging messages that may be incriminating. Many millions of FB messages must be sent every day; brute-forcing encryption on all of them is probably not feasible. A small percentage marked as 'secret conversation'? Much easier.
Why doesn't FB just apply encryption to all messages? Surely they have the resources available. Is it because this feature makes somebody else's job a lot easier?
If my suspicions are correct, what sort of threats would this pick up? Are serious threats likely to use FB messages flagged as 'secret conversation' to coordinate actions?
- FBM is multi-device, and we'd like to see E2E usability improve to support this. For now, pick one device and keys never leave it
- Secret conversations don't currently support popular features like searching message history, switching devices, voice/video, etc
- Hundreds of millions use Messenger from a web browser. No secure way to verify code or store keys without routing through mobile.
"We don't want to disrupt people's current experience."
Hundreds of millions use Messenger from a web browser. No secure way to verify code or store keys without routing through mobile.
This trend makes me very sad... IM networks are getting more centralized than ever. I don't feel thankful for this kind of development. End-to-end encryption should be a feature of the client, not of the service provider. The way this works with WhatsApp/FB/Google just requires us to believe that their proprietary client is actually doing what it promises. And for someone like me who doesn't have a smartphone, they don't even promise anything.
I just wish I could use XMPP or Matrix with my non-nerdy friends. There was this time Google seemed to not be evil with GTalk/XMPP, but then... The business cynicism that dominates this industry, allowing people to claim that they "connect people" at the same time that they put everyone in digital prisons, makes me really want to leave computing and go live in a cave.
The good news is that the Olm end-to-end cryptographic ratchet that Matrix is in the process of deploying (https://matrix.org/git/olm) is built using the same algorithms as Signal Protocol's ratchet (although it's an independent implementation) - so we're hopeful that at least technically the window is open in future for using Matrix to defragment all the services who have adopted Signal Protocol (WhatsApp, Google Allo, FB Messenger, Signal itself etc) without compromising the E2E privacy. Right now this is total sci-fi, and won't likely happen (whilst preserving E2E crypto) without cooperation from FB, Google etc.
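For readers curious what a cryptographic ratchet actually does mechanically, here is a toy Python sketch of the symmetric-key chain step that Signal-style ratchets (including Olm) are built around. The HMAC constants and key sizes here are illustrative, not the actual Olm or Signal parameters:

```python
import hmac
import hashlib

def advance_chain(chain_key: bytes):
    """One symmetric-ratchet step: derive a fresh message key and the
    next chain key from the current chain key, then discard the old one.
    The 0x01/0x02 labels are illustrative, not any protocol's constants."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain, message_key

# Each message gets its own key; the chain only moves forward.
ck = b"\x00" * 32
ck, mk1 = advance_chain(ck)
ck, mk2 = advance_chain(ck)
assert mk1 != mk2
```

Because each message key is derived from (and discarded along with) the previous chain state, compromising today's key doesn't reveal yesterday's messages, which is the forward-secrecy property these protocols advertise.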
However, we hope to get Matrix to the point where they see that the longer term benefits of participating in a healthy open ecosystem outweigh the short term benefits of trying to lock users into a silo - just as email eventually interoperated over the early internet. That's a while off, but this is still the goal.
The best way to make sure this happens is to play with Matrix in its current form, and help us write bridges (e.g. https://github.com/matrix-org/matrix-appservice-bridge/blob/...) to interface as many silos as possible into Matrix. The more bridges, the more useful Matrix is, and the higher the chance of building an ecosystem which eventually Google, FB and friends will find attractive.
On the client side, it depends on the client you're using. Many native Matrix clients and many bridged clients give the option to store local history (whether it was originally encrypted or not). Smarter clients will store it encrypted at rest by whatever mechanisms the OS and hardware provide.
This is why I will not use or recommend Signal. Moxie's anti-federation stance is unacceptable to me. It's replacing one problem with another.
Beyond federation, I would also like the solutions to have good features and usability. Right now I'm bouncing between a few walled systems:
1. Telegram - really fast development pace, poor crypto, E2E encryption is only for chosen chats and single device.
2. Signal - slow to deliver messages, not multi-device and has usability bugs and issues. I update it somewhat regularly and try using it, but still go back to Telegram because basic expectations aren't met.
3. Wire - I discovered this recently and like the feature set (it's a lot richer than Signal's). It claims to use the Signal protocol and supports voice and video too. All chats are E2E encrypted (unlike Telegram, where non-secret chats are by default not encrypted on the device or on Telegram's servers), and it has multi-device support with message sync (though only from the time a device is joined to the account). Clients are available for various mobile and desktop platforms. But this one also has poor getting-started usability and is missing simple things, like message-delivered or message-read indicators (the latter could at least be a user-selectable option for privacy). I don't know how slow it is to deliver messages, but it's definitely not as fast as Telegram. Not knowing whether a message arrived is unacceptable in this era.
I'm still waiting for some more strong solutions to appear in this space. Seeing that more platforms are adopting the Signal protocol, it would be great to have some standardization in user identification and federation. I actually do not want any of these solutions to be completely free and wish they would provide some way to help them monetarily (at least for the people who do want to help them). I feel repelled by the "free forever" and "we'll sell premium things later, like stickers" parts. That also brings suspicions about the motives of the company/developers and the future viability of the application or platform. At least allow people to donate to you so that you feel some kind of return obligation for all users!
I have a feeling that federation is one of those Good Things that reduce the potential user base until it isn't good any more.
What you won't be able to do is federate with the official servers.
Oh, and there's also a WebSocket transport (used by the Desktop client) that doesn't involve Google. That just doesn't provide a pleasant experience on mobile.
I'm sorry, but is this a joke? "To not use a centralized server that you can neither audit nor trust, you have to recompile the client, but that's easy?"
This smacks of "oh, PGP for email is fiiiiiine." To say nothing of the silliness of the inability to federate.
Don't casually disregard him just because you or others don't understand the basics of what it takes to alter and run a service in your own private space.
I don't need Signal to communicate with knowledgeable people. We need something to communicate with everyone else.
The moment you open the door for federation, the protocol is written in stone forever. All it takes is one server in the federation network with a substantial user base that chooses not to update. (See SMTP).
OWS decided that relinquishing the ability to force updates (i.e. away from a broken cryptosystem) would sacrifice too much in the way of security to be consistent with the project's goals.
Did you just coin that? It appropriately captures what is going on, but without having the positive connotation that comes from a 'walled garden'. I love the phrase.
As an example outside of messaging, I have a fitbit and 'digital prison' so aptly describes what happens with my personal health data. I can't get my heart rate data out of their prison, because the fitbit warden doesn't see it fit to grant me the privilege to access my raw data.
Which is exactly why people go to them.
> while in reality you go there because it's the most crowded place.
Directly true of social networks (where the "crop" is "people you can interact with through the network"), perhaps less directly true of some other walled gardens (though network effects are a thing.)
But also directly opposite of what you'd expect from a "cult compound", which people go more to escape what is most popular, than to experience what is most popular.
Agreed, I thought of this point while sending my comment but wasn't sure how to put that in words. So maybe it's the "most crowded garden party".
Part of cult indoctrination is giving up personal and private information, documents, secrets and property to participate and become part of the whole. Meanwhile, leaders profit from the property and information given up, and use secrets to blackmail or break down an individual's identity so they become dependent on the group.
Here's a list of actual cult characteristics. The comparison is an idiotic one to make.
Or at least your data can't.
A lot of people like to sideline complain about that, but tech is no longer its own customer -- there are billions of users who have different preferences than us and they are a lot more lucrative.
I've seen people that ritualistically open certain apps to talk to certain friends and networks of friends, but have no idea what apps they are using beyond the background navigation needs of "the one with the fuzzy green icon on my last page" and "the blue one with the annoying notifications".
Locked-in platforms are a consequence of network effects, not a "preference" for some mystical "guaranteed experience": the experience and the platform don't matter if the social interactions aren't there.
The diaspora of communications platforms hasn't hit home to the average consumer yet, and it's currently a background inconvenience that people are using five to ten different apps to communicate these days in some cases, but that doesn't mean average consumers are entirely ignorant of the situation either. (To some extent that's why OS-level notification systems have become so important to the average consumer; at least when you have a half-dozen messaging apps, all the notifications arrive in the same place.)
Voice/video etc are obviously straightforward; hopefully they'll continue to iterate towards support for e2e by default.
This is what I thought. If one were starting a new messaging platform, how would you implement the Signal protocol from scratch, I wonder? I'm assuming that for people who don't have strong security backgrounds, this means dissecting the Signal source code from GitHub.
That's already a bad start.
You mean inventing Signal from scratch (which is rough) or incorporating the libsignal protocol into a new messaging app?
All of the libsignal repos have a good README that explains init, so you can start there. Browsing the Signal source is helpful not so much for understanding the protocol as for seeing whether any special precautions were taken against side channels and other implementation pitfalls.
I mean, you could contract Moxie (hey Moxie, what's your price?)
EDIT: there's also this independent implementation of libsignal in Go that makes some targeted modifications to fit their needs. I can't vouch for its quality, but it's an interesting effort nonetheless.
I swear HN has become riddled with people who want to be contrarian for the sake of being so.
I believe it's a reasonable expectation that people who implement secure messaging be domain experts in crypto AND messaging.
There's a saying about sex: make one mistake and you have to support it for the rest of your life. Security and cryptography are orders of magnitude worse. It's bad enough that errors compound, but it doesn't end there. You can have subtle and counter-intuitive failure modes where a single step outside the happy path is enough to completely annihilate the security of your system.
I feel there is no analogy that could capture the absurd complexity and catastrophic failure potential.
To give some background: I've been working with applied crypto since the '90s, and professionally (on and off) since the early 2000s. That experience is still next to worthless: I know for a fact that I am not good enough to actually implement anything that could withstand the attacks of a motivated and well-funded adversary. (Or even those of a bored PhD student.)
The best I can do is find tools and components that have been battle-hardened by a handful of exceptional professionals. At least that way my hubris shouldn't amount to too much damage.
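To make the "single step outside the happy path" point concrete, here is one classic implementation pitfall: verifying a MAC tag with ordinary equality, which can leak through timing how many leading bytes of a forged tag are correct. A minimal sketch (the function names are mine, not from any library):

```python
import hmac
import hashlib

def verify_tag_naive(key: bytes, msg: bytes, tag: bytes) -> bool:
    # BROKEN in principle: byte-string `==` may return as soon as a
    # byte differs, so response time can leak how much of an
    # attacker-supplied tag matched.
    return hmac.new(key, msg, hashlib.sha256).digest() == tag

def verify_tag_safe(key: bytes, msg: bytes, tag: bytes) -> bool:
    # compare_digest runs in time independent of where the bytes
    # differ, closing the timing side channel.
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Both functions return identical booleans on every input, which is exactly why this class of bug survives functional testing: only the timing differs.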
You don't say? (Not you, Facebook.) How about the dozens of times Facebook disrupted the user experience of the service for its own benefit? How about the dozen-plus times it changed people's settings from private to public, after people had previously manually set them to private, or after having a setting default to private and letting people believe the action was private? Wasn't THAT disruptive to users' experience?
Of course it was. But it benefited Facebook, and that's the difference here. They just don't want to "disrupt" the experience in a way that also hurts the company's bottom line, even if it's better for users.
Weeeeeeell, not quite. Every device and browser the user uses can get its own private key, and then you use Facebook's central servers and SSL to exchange keys.
If I don't trust Facebook for key exchange, then I can't trust them with their app. Any encryption they implement, they can trivially circumvent by putting a backdoor in their app.
But if I trust their (closed source, frequently updated) app at all, then I can trust them to relay people my public keys. Especially since a MITM can be discovered by comparing hashes over a hard-to-manipulate channel (telephone, video, or IRL).
And if you're really paranoid, you could think about using an open-source app and letting a trusted third party handle the keys.
Though public-key exchange can be improved on mobile with better direct communication capabilities like barcode scanning, RFID, Bluetooth, etc.
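The hash-comparison idea mentioned above can be sketched in a few lines. This is an illustrative construction, not Signal's actual safety-number algorithm: both sides hash the two public keys in a canonical order and read the resulting short fingerprint to each other over the phone or in person:

```python
import hashlib

def safety_fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    """Derive a short, human-comparable fingerprint from both parties'
    public keys. Sorting the keys first makes the result identical no
    matter which side computes it. Illustrative, not Signal's scheme."""
    material = b"".join(sorted([pub_a, pub_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Break into short blocks so it can be read aloud over the phone.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))
```

If a MITM substituted either public key, the two sides would compute different fingerprints, so reading them aloud over a channel the attacker can't rewrite exposes the attack.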
Am I missing something or is this more of a "JS is not reliably fast on all devices so we rather don't" kind of thing?
Certainly, there are mitigations such as very short lived private keys and relying on existing sandboxing and XSS protections browsers already have to do for JS local storages, but it's easy to understand how from a paranoia standpoint there's no guaranteed safe key store just yet in a browser, especially not one backed up by OS-level security guarantees as one would be able to use on mobile devices.
The way it's implemented now still makes it a pain/inconvenient thing to do when you want private conversations. So let's face it. Facebook just wants to get away with the minimum necessary to convey that it cares about privacy, while knowing that only 0.1% of the conversations will ever be encrypted this way.
Sounded plausible until I saw that line. When did FB suddenly start caring about that?
• E2EE by default, in groups too
• Has solved the multi-device sync problem
• Has webapp
• Fingerprinting of all devices
• Does not sacrifice features for security - voice, video, media
• Crypto and comms protocols open source https://github.com/wireapp
• Privacy and security whitepaper https://wire.com/privacy
Facebook have added contextual ride-share ordering, person-to-person payments, bots, etc. Unlike WhatsApp or Signal, Messenger also still works over the Web without an app or a smartphone-based login: how do you implement credible E2E over the Web, without using the phone as a crutch? How do you allow multi-device support, with a message history, and do E2E on all conversations?
I think they've (rightly, IMO) assessed that most people value those features and convenience over 100% E2E conversations, but want to offer the option to those who don't. Besides, if Facebook were really completely in bed with the surveillance state, why would they have just rolled out full E2E on WhatsApp?
Wire does multi-device E2E encryption, sync of message history, and allows users to use phone numbers and email addresses as identifiers. It also uses the Signal protocol.
Note: I do not work for Wire nor am I associated with it in any way, except as a user. I discovered it only recently and am trying it out, in addition to using Telegram as my most frequent client and Signal.
I worked for Facebook; I am friends with the people who developed this: I would like to reassure you strongly (well, as much as an Internet stranger can) about their motives. They are the good guys, and this was developed with people being spied on by abusive governments in mind, because those people use Messenger and would like to use it for those conversations too. Don't trust me, but reach out to them if you are curious: few people appreciate their work, so they are generally happy to talk (about published work; unannounced products are very much off limits).
Facebook does collaborate with governments when the messages are not encrypted, the crimes are clear, and a court has issued warrants. I seriously doubt any of those are political dissidents. Facebook engineers receive very generous poaching offers all the time, and they would suffer minimal economic damage from leaving the company over something like this. The shit-show from engineers leaving over that would be massive (and many engineers are true believers).
I wasn’t in the company six months ago when that project was decided, but from my experience of the internal culture, I know that there was a debate on whether this feature could cover crimes that Facebook would object to — and the need for protecting the good guys obviously had the upper hand.
> Why doesn't FB just apply encryption on all messages?
> Surely they have the resources avail.
Scaling. I know nothing about this, but I have no doubt that this is the main reason. Any tiny amount of extra memory, computation, etc., times a billion becomes massive. Facebook struggles to build enough capacity for all its services: the data center expansions you hear about are done at a breakneck pace to meet service expansion deadlines. The engineers working on this are heroes internally, and the units they use are unheard of outside of astronomy.
More specifically, they probably want to test some scaling aspects, but it might be unethical to use the usual approach of A/B testing.
However, scaling and/or capacity is not the reason E2E encryption isn't applied to all messages. The crypto operations are relatively trivial in terms of CPU.
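To back up the claim that per-message crypto is cheap, a rough back-of-the-envelope check: symmetric primitives such as HMAC-SHA256, which are the kind of work dominating per-message cost once session keys are established, run at hundreds of MB/s on a single core. A quick sketch:

```python
import hashlib
import hmac
import time

def mac_throughput(total_mb: int = 16) -> float:
    """Rough MB/s for HMAC-SHA256 on this machine. Real messengers use
    AEAD ciphers rather than bare HMAC, but the cost is of the same
    order; the point is that per-message symmetric crypto is cheap."""
    key = b"k" * 32
    chunk = b"x" * (1024 * 1024)  # 1 MB of payload
    start = time.perf_counter()
    for _ in range(total_mb):
        hmac.new(key, chunk, hashlib.sha256).digest()
    elapsed = time.perf_counter() - start
    return total_mb / elapsed
```

Asymmetric handshakes are pricier, but they happen once per session rather than once per message, so at Facebook's scale the hard parts tend to be key management, multi-device state, and storage, not raw CPU.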
This comment summarizes what FB's CSO said about why they are not launching E2E broadly yet. It boils down to usability concerns. Sounds like they are working on it:
I have no reason to trust you, or them. Just because you think people have good motives, doesn't mean it's true. I'm sure there are great people working there, but I'm also sure there are shady people working there. Just like at any big org. "Even though you don't know me, trust me, these guys are cool" arguments don't really help anything.
Secret Conversations is a step in the right direction.
The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.
I highly doubt there is any government intervention in FB's business strategy, but there seems to be plenty of cooperation after the business decisions are made. (The same is largely true with Microsoft, Google, and yes, even Apple.) It's not really a conspiracy, or a matter of paranoia--it's been very widely reported for a while now, and people just generally don't seem to give a shit (myself increasingly included).
EDIT: I do care. But I think (a) people need to take privacy into their own hands, since companies will never be incentivized to do it and (b) we've entered a new cultural era, where the levels of privacy enjoyed in the past are no longer socially normal.
I wonder how they'd do if they were more open about it. "You're getting Gmail for free because we read your email and advertise to you. However, if you want to pay for a premium account (or Google Apps for Work) then we won't advertise to you, won't read your email and we'll even make end-to-end encryption easy and convenient".
I mean, it seems like a good compromise. I'd be happier with the rampant advertising and profiling going on if it was only for signed-in users and everyone was given the choice - use it for free or pay and get guaranteed privacy and no ads.
Here's the answer (using Google as a simplified example):
* Put two Googles side by side, competing.
* One is the current one, earning money from ads with you as the product, and keeping that under the radar (albeit in the fine print, etc.).
* The second is the one proposed: premium accounts plus a free tier with ads.
* Wait five years and observe which of the competing companies the market favors. The first one. In this scenario the second loses because the time and money spent on premium accounts will not be compensated by revenue from them, while the first Google, spending that capital difference purely on its ads department, makes its ads business (here, 100% of the company) much better than the second company's, thereby winning the market.
In general (the Google example is simplified, given Alphabet's scale; and please don't use Google Apps FOR WORK (emphasis mine ;) as a counterexample, since it's a B2B product), the reason is this: we humans don't like companies that use fine print, yet those companies win, case after case, against the "moral, pure pricing and fine ethics" companies BECAUSE OF THE BIASES AND ERRORS (+) OF HUMAN BRAINS during purchasing decisions, exploited day by day.
(+) read: stupidity, also known as why-I-buy-overpriced-sweets-at-checkout ;)
Then he'll be referred to a human, who may or may not understand sarcasm, jokes, or hyperbole?
This is exactly why these companies don't want to be more transparent unless they absolutely have to (like what the EU is doing to Google). It really should be regulated by governments, though, because I think it's a very "fair" thing to do: sharing everything you're doing with someone's info. It's not about restricting data collection through regulation, just being transparent about it. That might lead to less useful information for the data collectors, but people should be informed, and that should trump everything else.
I mean, a large number of people here do know that machines scan their email. In what manner do the majority act? That shows us the revealed preference.
If that's not the definition of an ad company then I don't know what is.
 - http://adage.com/article/digital/google-q4-2015-earnings/302...
You could go further and do something like, email sent by people to people is off-limits and will be end-to-end encrypted and not looked at - but (almost) all automatically generated email is fair game and will be processed. Google already does that with Google Now (and shows you your flights, package deliveries etc.). I really prefer them making more explicit what they look at and what not.
Shh, don't tell everyone about my next project. I think it can be done with images as well: sending five of them for various demographic groups would be doable and should create enough noise (a tampon ad, a video game ad, an Amazon ad, a Mountain Dew ad, and a retirement plan ad, for example). Those who care less about anonymity and more about bandwidth can opt out and only download the ads they need.
Facebook can do fine without that information: they have more information than they can develop ad services for at the moment; they could use a lot more engineers though. The goodwill from the dev community in allowing appropriate targeting is far more important to them.
Two good examples being location and language: you can advertise for people in a certain place or who speak a certain language, but there are many unserved combinations, like tourists, language minorities and commuters. In my case, I’d love to see ads on learning Swedish: Facebook can see some of my friends speak it, and this is new.
Source: worked for FB, but in Engagement, not the Ads team; I offered a lot of targeting suggestions, but most were "presumably not the biggest opportunities (they) could work on". At Facebook's scale, we are talking really big numbers.
Yeah, that whole Bill of Rights thing just needs to go away, right?
Dropping secure E2E encryption into an app that is deployed on hundreds of millions of phones across the globe is a game changer. Period. Once you climb down from your lofty ivory tower and consider the impact this will have globally, perhaps you will be a bit less dismissive of the goals and efforts of the team that convinced a company that lives on data to deliberately blind itself to some of it for the sake of their users' privacy and security.
They probably wouldn't have to brute-force them. Backdooring's still quite possible - the NSA are perfectly capable of compelling Facebook to install a keylogger in their app.
(where properly means that the protocol and implementation are sound and so on)
If you don't trust Facebook, you don't need to evaluate the quality of their implementation of secret messaging, you just shouldn't use it.
It might be true that FB adding E2E breaks existing bots, but it's false that it breaks that ability to use bots on FB.
All the bot needs is the ability to use the Signal E2E protocol to establish keys and exchange messages.
Nothing dishonest about this and nothing different than existing bots other than one uses E2E and one does not.
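For intuition, the key agreement a bot endpoint would run has the same shape as any Diffie-Hellman exchange: publish a public value, combine the peer's public value with your private one. This toy uses a tiny finite-field group and is completely insecure; real bots would use libsignal's X25519-based handshake, but the flow is the same:

```python
import secrets

# Toy Diffie-Hellman parameters: the largest 64-bit prime and a small
# base. INSECURE, purely to show the shape of the exchange.
P = 0xFFFFFFFFFFFFFFC5
G = 5

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

bot_priv, bot_pub = keypair()    # the bot publishes bot_pub as a "prekey"
user_priv, user_pub = keypair()  # a user fetches it and responds

# Each side combines its own private value with the other's public one.
shared_bot = pow(user_pub, bot_priv, P)
shared_user = pow(bot_pub, user_priv, P)
assert shared_bot == shared_user  # both ends now hold the same secret
```

Once both ends hold the shared secret, the bot encrypts and decrypts exactly like any other client; the only real difference from today's bots is that Facebook's servers would relay ciphertext instead of plaintext.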
It's kind of a Hobson's Choice. You can message in the clear, or you can encrypt your messages and the government can keep your messages forever.
If we can't get those things then it's pointless to fight back with encryption. We must have our constitutional protections.
A lot of this technology is enslaving us. Piling it higher and deeper isn't the solution.
Although you could use local storage, you would have to do that with every browser you logged into and also somehow make it user-account-specific. I definitely see the challenges there for them.
What would you think about using a user's password to encrypt the browser keys and store them on Facebook's servers? Is that too large a compromise for usability?
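The password-wrapping idea in that question can be sketched as follows. This is illustrative only: a real design would use a proper AEAD such as AES-GCM rather than this hand-rolled XOR-plus-HMAC construction, but it shows the trade-off. The server stores only a salt and an opaque blob, and the key is only as strong as the password plus the KDF's work factor:

```python
import hashlib
import hmac
import os

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # A deliberately slow KDF makes offline guessing against a stolen
    # blob costly.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def wrap_key(private_key: bytes, password: str) -> dict:
    """Encrypt a (<=32-byte) private key under a password-derived key.
    Toy XOR-pad cipher plus an HMAC tag; use a real AEAD in practice."""
    salt = os.urandom(16)
    k = derive_wrapping_key(password, salt)
    pad = hashlib.sha256(k + b"enc").digest()
    assert len(private_key) <= len(pad), "toy pad only covers 32 bytes"
    ct = bytes(a ^ b for a, b in zip(private_key, pad))
    tag = hmac.new(k, salt + ct, hashlib.sha256).digest()
    return {"salt": salt, "ct": ct, "tag": tag}

def unwrap_key(blob: dict, password: str) -> bytes:
    k = derive_wrapping_key(password, blob["salt"])
    expected = hmac.new(k, blob["salt"] + blob["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("wrong password or tampered blob")
    pad = hashlib.sha256(k + b"enc").digest()
    return bytes(a ^ b for a, b in zip(blob["ct"], pad))
```

The obvious weakness is that anyone who can guess the password (or trick the user into typing it into a compromised page) recovers the key, which is presumably part of why Facebook prefers routing browser sessions through the phone.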
What did the OP mean by "No secure way to verify code or store keys (in web browser) without routing through mobile" ?
This is why Signal and WhatsApp require the client to run on the phone - the phones are doing the decryption for the web apps.
This is flaky, consumes a lot of battery and generally is somewhat error-prone - probably not something FB wants to deal with.
You could even register only the browser, without having a smartphone at all.
Installing a malicious extension, tricking users into typing commands in the developer tools, XSSing FB, all of these are much easier to do than attacking a native app on a phone.
Certainly it can be important to know when you have a "no compromises" security option versus "mostly better than plaintext but maybe not secure".
It could be a UX judgment not to mislead users into thinking they have a "secure connection" when in fact they might not. Look at all the attempts browsers have made over the years to keep the UX semi-reliable and make it easy for users to understand whether or not their SSL connection is secure.
- Signal does have some multi-device support (the Android and Desktop clients, iOS not yet). I still sometimes have minor issues but overall it works very well.
- Signal does include end-to-end encrypted voice calls (what used to be called RedPhone) that also work quite well. It's my go-to "call from Wifi abroad" solution to avoid roaming charges, and also works very well with a good 4G/3G signal
- The browser issue seems unsolved as of now. WhatsApp's web client (routing through the phone) seems to work quite well, but obviously only when the phone is on; and since WhatsApp requires a phone while FB Messenger doesn't, this isn't an option for them.
It's going to take a pretty high standard of proof to give this anything that resembles credibility.
What sane person would ever trust Facebook to keep something private? If anything, the addition of "secret conversations" will be more intrusive, because this service delivers more nodes to the social graph: how many secret conversations, with whom, where, at what times, etc. Facebook and the global spy regimes it provides content for care more about building a network of associations than actual content. Who cares about the needle when you can control the haystack?
Edit: and it is http://www.whispersystems.org/blog/facebook-messenger
For me personally, they've lost a huge chunk of credibility by lending credibility to companies that have apparently rewritten the definition of "end-to-end encryption" from referring to the users as the ends to referring to some magical proprietary black box as the end.
And the other chunk of credibility sort of died off, as they apparently seem to think depending on proprietary Google software for their products is how you do privacy.
That is, their Android-client for Signal depends on Google Play Services for receiving notifications, even though LibreSignal exists, which is a fork specifically to remove that dependency, and from which they could have easily pulled that changed code in as a fallback.
And when they figured they should make a desktop client, apparently the best technology that they could think of, was a Google Chrome extension.
also, here is the white paper (from the above post): https://fbnewsroomus.files.wordpress.com/2016/07/secret_conv...
Page 10 of the white paper mentions that there is a remote key stored on Facebook's servers which can be used to decrypt the local key. If Facebook still has to be trusted, I don't see what the deal is here.
I think that as soon as you put the words "end-to-end encryption" in your marketing material, you have to be ready to open-source your client. This is a cost that companies aiming to be credible can't escape.
End-to-end encryption without open source has no value. It is a waste of energy for the company doing it, too; or perhaps a marketing cost.
It means nobody is going to be able to read your data other than:
A) A nefarious Facebook staffer who has hacked their internal systems
B) A government entity with a court order
It's a step up from no encryption
(Assuming it is properly implemented and doesn't have backdoors, which can only be practically verified if the client is free software).
If you want to resist mass surveillance this is not a good solution.
Sure, there is room for improvement, but having 0.000001% of the population using an ultra-secure messenger doesn't have the same impact.
And hopefully those improvements will happen in time.
You're suggesting that the metadata of "alice messaged bob at 1:20am" is worth more than "hey bob, I'm looking to have $100,000 laundered, same methods as last time. I'll drop to the usual location. thanks, Alice."
Whereas with just metadata, you can still at least figure out who the guy is talking to, even if you cannot tell from that alone that he is laundering money.
Mass surveillance wants both metadata and content. Metadata happens to be a bit easier to analyse, but often they are only analysing metadata to lead them to the content in the first place.
This app has access to:
find accounts on the device
read your own contact card
add or remove accounts
find accounts on the device
read your contacts
modify your contacts
precise location (GPS and network-based)
approximate location (network-based)
edit your text messages (SMS or MMS)
receive text messages (SMS)
send SMS messages
read your text messages (SMS or MMS)
receive text messages (MMS)
read phone status and identity
read call log
directly call phone numbers
reroute outgoing calls
modify or delete the contents of your USB storage
read the contents of your USB storage
take pictures and videos
view Wi-Fi connections
read phone status and identity
receive data from Internet
download files without notification
run at startup
draw over other apps
pair with Bluetooth devices
send sticky broadcast
create accounts and set passwords
change network connectivity
prevent device from sleeping
read battery statistics
read sync settings
toggle sync on and off
read Google service configuration
view network connections
change your audio settings
full network access
I don't want to be encouraging Facebook's user-hostile moves, so I'll stop using messages completely if they make my life hard enough, but Swipe is a nice stopgap.
Your goal of making encryption easy to use by the masses is coming true. It looks as if PGP's days are numbered.
This seems gratuitously hostile (and, even worse, is irrelevant). PGP is a very useful piece of software, and it does something completely different from Signal, and I'm glad both exist.
Comments like this are inevitable and represent Facebook's attempts to induce us to route all of our communications through its platform. For the quoted commenter, it's probably too late.
And runs as a Tor hidden service at https://facebookcorewwwi.onion/:
Reading the technical docs now (https://fbnewsroomus.files.wordpress.com/2016/07/secret_conv...).
Edit: Yep, this seems device-to-device; there doesn't seem to be a web component here. Still useful given how many people use messenger primarily via phone, and I suspect implementation wasn't hard given WhatsApp did it first. It would be neat to see if Messenger and WhatsApp are ever bridged through this.
Anyway, there's an upcoming defcon talk which'll lightly touch on how web standards were mangled and viciously abused to make that happen, but since the talk is deliberately not vendor-specific, the focus on it will be brief. Disclaimer on my end is that I was involved in the initial review of their code-signing implementation. https://www.defcon.org/html/defcon-24/dc-24-speakers.html#Za...
It's been well vetted enough. It might be a hack, but it's a sufficiently effective hack. We can agree to disagree :)
You coming to either Black Hat or DEF CON? Let me at least connect the two of you after the AppSec Glory talk.
As to the former, are you referring to specific known side channel attacks against asm.js/wasm, or just the usual XSS risk?
Of course, if you've filed for patents on this, as was suggested upthread, that somewhat cuts against that argument, doesn't it?
WebSign does have a patent pending on it (I think it's fair enough to say that the whole system of accomplishing in-browser code signing this way was non-obvious), but HPKP Suicide itself doesn't. Bryant (eganist) and I are actually disclosing and open sourcing implementations of a few non-code-signing applications of HPKP Suicide at Black Hat and DEF CON next month.
Yay, another technology to boycott entirely.
This always gets brought up when discussing in-browser crypto and never gets a satisfactory answer.
Implementing this with JS is far, far, far more difficult, and the only solution known (touched on in my other comments) still pisses people off because it's running in a web context that, if improperly mitigated, can still facilitate disruption via code injection i.e. XSS.
That said, we're all conveniently ignoring the fact that all of this assumes that the devices themselves haven't been owned. If you think you're a target of entities capable of getting into a fully patched phone, you've got bigger problems.
Facebook could afford a security team just as capable as Apple's security team, and make sure the JS server remains secure. And if they can't, their own signing procedure can get compromised before the app is uploaded to Apple for review.
I'm still not seeing the difference. Anyone?
No one's gonna get it perfectly right. Signal's probably the closest in terms of the cleanness of the protocol. It'd be neat to see Cyph implement the Signal Protocol in-browser as well, but that's neither here nor there. Ultimately, all of this will shift an attacker's focus away from owning the messaging application (or any component of it, be it the servers, the connection, etc.) to owning the devices and the users directly.
Your next technical battles will be in device security and user-friendly secure authentication. Your existing wars against your users (think phishing) will continue to heat up.
Can you cite some other ways browsers leak information besides browser plugins? It's you who are making the extraordinary claim, that Cyph has come up with a way to make browser crypto reliably secure. I think the onus should be on you to demonstrate how carefully you've thought this out.
I'll make you a deal: if you name all the ones I know about, I'll tell you so. :)
I still find it hard to believe so many people trust what they believe to be private communication with close-lipped advertisement companies.
What are the licensing conditions / restrictions for using the protocol?
Besides, the fact that it is licensed under the GPL might mean that somebody porting the protocol to a different platform / language might be creating a "derived work" which also must be licensed under the GPL.
Now both WhatsApp and Facebook have this, but surely they have the encryption keys too - or how else would they seamlessly fetch your messages and decrypt them when you get a new phone?
If they do, then what's the point?
I believe you can also disable that iCloud backup and thus the ability to retrieve those messages with a new phone.
End-to-end means encrypted in transit and not stored (in any decryptable form) on their servers. On that score, I believe WhatsApp fulfills its end of the bargain.
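The tension between E2E and "seamless restore on a new phone" comes down to key custody: the private key is generated on the device and never uploaded, so the server has nothing to hand a new phone. Here's a toy sketch of that model; the random tokens stand in for real keypairs and this is an illustration, not real cryptography.

```python
import secrets

# Toy sketch of E2E key custody: the private key never leaves the
# device, and the server's directory only ever sees public keys.
# Random tokens stand in for real keypairs - not real cryptography.

class Device:
    def __init__(self, name):
        self.name = name
        self.private_key = secrets.token_hex(32)       # never leaves the device
        self.public_key = "pub:" + secrets.token_hex(8)  # this is all the server gets

server_directory = {}

old_phone = Device("old_phone")
server_directory["alice"] = old_phone.public_key  # server only stores this

# A new phone generates fresh keys; the server cannot hand it the old
# private key, so old ciphertexts stay unreadable on the new device:
new_phone = Device("new_phone")
print(new_phone.private_key != old_phone.private_key)
```

If a service *can* seamlessly restore your history to a new phone without any transfer from the old one, either the messages or the keys were recoverable server-side (e.g. via a cloud backup), which is exactly the caveat about disabling iCloud backup above.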
Message appeared in the Facebook page as "encrypted message".
I guess you can hardly get better than this.
"One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we've developed requires centralization; it's entirely possible to build a federated Signal Protocol based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all."
I'm scared of what will be possible to extract from my chat logs in a few years, but the benefit of being able to IM people that only have FB feels greater right now.
Biggest problem I see so far is the multiple-devices issue, but for most people it will be just desktop and mobile, so why can't you send each message twice, encrypted separately for each device (automatically, not manually)? Does OTR3 have this feature?
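The fan-out scheme described above can be sketched as follows. The XOR "cipher" is a deliberately trivial stand-in for a real per-device session (e.g. a separate double-ratchet session per device); the names and structure are illustrative assumptions, and this must not be used for real crypto.

```python
import secrets

# Toy sketch of multi-device fan-out: one logical message, encrypted
# once per registered device with that device's own key. The XOR
# "cipher" is a stand-in for a real per-device session - NOT real crypto.

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Trivial symmetric stand-in cipher; XOR is its own inverse."""
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def fan_out(message: bytes, device_keys: dict) -> dict:
    """Encrypt the same message separately for each recipient device."""
    return {dev: xor_encrypt(key, message) for dev, key in device_keys.items()}

# Hypothetical recipient with two devices, each holding its own key:
bobs_devices = {
    "desktop": secrets.token_bytes(32),
    "mobile": secrets.token_bytes(32),
}

ciphertexts = fan_out(b"hi bob", bobs_devices)

# Each device decrypts only its own copy with its own key:
for dev, key in bobs_devices.items():
    assert xor_encrypt(key, ciphertexts[dev]) == b"hi bob"
```

The cost is that every message is encrypted and transmitted N times for N devices, which is one reason services ship phone-only E2E first and add multi-device support later.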