This audit document alone does not give enough context. This audit was commissioned by us and concerns a pre-release version of Cryptocat for iPhone. Many of the bugs it found exist because it reviewed a prototype with debugging features (such as NSLog) turned on. While this audit definitely does find some vulnerabilities and room for improvement, none of the critical bugs in this audit ever made it into Cryptocat for iPhone's release.
It's very unfortunate that this audit is being taken out of context like this and used to attack our effort. I'd appreciate it if you could please upvote this comment and help me contextualize this audit. Again, please, read the blog post for context (and also for the results of another audit we commissioned in parallel). We've done our best to address these issues and are working towards an open discussion on how to improve accessible encryption. https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...
The blog post's last section ("On the Significance of Audits") discusses why it is that Cryptocat has seen more audits published about it than other encryption projects. Please, dare to discern. Read what we're doing to improve the security of accessible encryption and our reasoning for publishing these audits. I'll be grateful if you take the time to read about what we're doing, and I am more than happy to discuss with you and answer your questions.
From the ISEC REPORT:
"CryptoCat's OTR implementation on all platforms allows a chat peer to change their OTR key during a chat session without user notification. An attacker performing a man-in-the-middle attack against the client's XMPP or HTTPS stream can inject their own OTR key in the discussion after a user has authenticated their peer's OTR fingerprint. This permits the attacker to decrypt all messages that follow, and no user would have reason to suspect the compromise. Group multi-party discussions do not seem to suffer from the same vulnerability." (emphasis mine)
Your response in the blog:
"... This is a problem that can be exploited in some real-world scenarios and needs to be addressed with appropriate authentication warnings. Thanks to these audits, this issue was caught in Cryptocat for iPhone before it was released. Therefore, unlike the desktop version, Cryptocat for iPhone was never affected by this issue."
> Everything is encrypted before it leaves your computer. Even the Cryptocat network itself can't read your messages.
You're seriously misleading users regarding the security of crypto -- the web is an environment that has ZERO controls on code updates. Given this remarkably self-serving and ill-conceived stance, it's difficult to imagine how your crypto could ever be considered trustworthy.
Web-based distribution simply is not, in its current form, a viable model for distributing code that must survive the compromise of the original distributing party. None of the technical (off-server code signing) or social (review of update notices) tools available in non-web distribution methods are available.
The damage that you and other projects cause to public awareness and comprehension of crypto issues is potentially staggering.
When you factor in the real risks people take when relying on crypto to communicate in hostile situations, you're doing more than just making nerds grumpy -- you have the potential to significantly harm people's lives.
Crypto is not an amateur's game. Some things are too important to be left to experimentation by unqualified engineers.
edit for sub-comment: A traditional web app updates every time you hit the URL. In a browser extension, your code is not necessarily tied to any remote origin, including the Chrome/Firefox stores. It is a user's choice to automatically receive updates from the vendor, and this is the same choice you make if you use apt-get vs. manual download/checksum/sig check/audit.
Actually, browser extensions have the exact same properties except for being code signed. That's not enough: http://arstechnica.com/security/2014/01/malware-vendors-buy-...
> I don't think CryptoCat has been distributed as a traditional web app for at least a year (probably more).
That they ever shipped in-browser crypto demonstrates that they shouldn't be shipping crypto.
What you are saying is that you don't trust any kind of application to do crypto unless you have previously audited it. That's a reasonable stance to take but it's irrelevant whether you distrust a third-party browser extension or a third-party native app.
The main argument against crypto in JS extensions is that getting crypto correct with regard to timing-based side-channel attacks is very hard, if not impossible.
But if you are dealing with a specific browser in an extension context, this might be somewhat mitigated, which brings us back to the trust issue, which, again, IMHO is not dependent on the platform you use.
This is why it's a very bad idea to implement silent automatic updates, and why they're the wrong thing to copy from the web.
OTF, again (smartly) using US taxpayer dollars, funds audits of a variety of privacy technologies.
For instance, they also funded a good-sized chunk of the Truecrypt audit.
OTF provides a form that projects like ours can fill out to commission this type of audit. So basically, we asked OTF to commission it for us and they accepted. TextSecure, a great encryption app that I've seen you recommend, also approached OTF and obtained an audit from iSEC via this same process. However, TextSecure decided not to publish their audit results.
You can read about OTF's reaction to these audits here: https://www.opentechfund.org/article/bringing-openness-secur...
For whatever it's worth: I have no commercial relationship with the TextSecure team, have never worked with them, have never been paid to audit their code, and am only faintly acquainted with Moxie (I've talked to him in person to know that he's extremely pleasant and surprisingly soft spoken, but not more than that).
Trevor Perrin, who worked with the TextSecure project to help design their cryptography, is someone I know a little bit better; it would be safe to say that Trevor Perrin is the only reason I know anything about cryptography, and, given a few bar napkins, I can outline a pretty convincing story that he is the root of basically every TLS vulnerability discovered after Marsh Ray found the resumption bug.
TextSecure is a great project, and if anyone was debating between it and some other cryptographic messaging application, I hope I've made that decision a little easier.
Also, during the period of time when TextSecure received audits from firms as part of an OTF grant, publishing the results was not an option that was contractually available to us. It's not something we "decided."
That being said, what is stopping you from publishing the audits today?
What I'm curious about is, why don't other projects such as TextSecure publish their audits as well? I'd certainly appreciate Moxie answering this question.
The OTF blog post certainly makes good points for this to happen. I also personally believe that this reticence to publish audits is damaging to the honest evaluation of encryption software and to the establishment of a realistic perception of it. It also misleads users.
It's embarrassing, duh.
In fact, OTF has actually complained about their projects choosing not to publish audits: https://www.opentechfund.org/article/bringing-openness-secur...
Whatever Moxie's reasons for not having published their audit, I'm sure they're valid. Either way, no amount of innuendo about TextSecure is going to change the ground truth about your own project.
There might be no person on the Internet more poorly positioned to cast aspersions on other people's projects than you. Please stop.
I'm trying to have a serious discussion regarding transparency of audits. I feel your reaction is overly aggressive and not genuinely constructive.
I was asking for clarification, but my writing style is dry and blunt, so I'm not surprised if I managed to convey something different. I apologize in advance if so.
More than anything else, I wrote the comment as a (hopefully mild) F-U to the sentiment that the USG is hellbent on destroying privacy on the Internet. Big parts of it are, sure, but there are good people working inside of it too. :)
The Open Technology Fund (OTF) engaged iSEC Partners to perform a source-code assisted security review of the CryptoCat iOS application.
a) don't do an audit - and the public would ask what they are hiding
b) agree to an audit being done, meaning you 'commissioned' it.
So it is just as important to know who approached whom in this situation.
No, what's unfortunate is your continued and repeated claims that cryptocat should be used today by people who expect privacy, when it is not yet safe or secure for private messaging.
You're going to get people killed. You don't seem to care.
I don't know if this is news to some people, or what, but it is usual, when having a product professionally tested, to have the testers test it in earnest, as if it were ready for release. They won't avoid telling you about some obvious hole just because it was obviously accidentally left in and would be a 10 second fix (or whatever - that is just a random example).
If you've never been through this before then, presented with a big pile of bugs, it can seem like the project has been proven a total fuckup, and its authors obvious failures, and so on. But in fact, once you've got this concrete list of all the ways in which your program has so far proven itself hopelessly unsuitable for release, it's actually quite surprising how quickly they all get fixed...
(Edit - This is more of a general comment than anything specifically about cryptocat)
The reason we commissioned this audit is to make sure our prototype was audited before release on the App Store. We're very happy to have benefited from this audit, but linking to this PDF alone de-contextualizes the effort and makes it seem like it's an audit of the production version of Cryptocat for iPhone, whereas the version we provided was an early prototype. The audit did find some issues with the (already-released) desktop version and server configuration, and those were also fixed and documented in our blog post.
I sincerely appreciate you taking the time to read our blog post on the matter and thank you for your understanding.
Addendum (3/15/14): The iOS application was development code that at the time of testing was available only in a preproduction form on GitHub and not distributed via the App Store. The CryptoCat team had time to review the vulnerabilities prior to publication in the App Store and claims to have addressed them; however, iSEC has not validated any fixes and cannot make any claims about the current status of any vulnerabilities.
CryptoCat's OTR implementation on all platforms allows a chat peer to change their OTR key during a chat session without user notification. An attacker performing a man-in-the-middle attack against the client's XMPP or HTTPS stream can inject their own OTR key in the discussion after a user has authenticated their peer's OTR fingerprint. This permits the attacker to decrypt all messages that follow, and no user would have reason to suspect the compromise.
In your blog post there seems to be little context that can excuse such a mistake, and nothing that explains how you fixed it. Am I correct in reading your blog post that right now there isn't a fix? I.e., it's an open attack, assuming someone compromises a CA or a Cryptocat server?
Isn't this a rather big issue, since the only point of Cryptocat is to protect against that kind of attack? If you just wanted security against eavesdroppers (i.e., you trusted the chat server), XMPP over TLS would work fine.
This is a recurring theme with cryptocat. Stay away for 5-10 years until they get their act together.
The fix for the MITM bug is to offer proper notice via the user interface when a user re-keys with a different public key. There's a demonstration of the user interface element in the blog post.
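The check behind that fix amounts to comparing the fingerprint of any newly announced OTR key against the one the user has already verified for that peer, and surfacing a warning on mismatch instead of silently accepting the new key. A rough sketch of the idea in Python (the names and structure are mine, not Cryptocat's actual code; OTR fingerprints are conventionally the SHA-1 of the peer's public key):

```python
import hashlib

# Fingerprint of the OTR public key the user has verified, per peer.
verified_fingerprints: dict[str, str] = {}

def fingerprint(public_key: bytes) -> str:
    # OTR fingerprints are conventionally SHA-1 over the public key bytes.
    return hashlib.sha1(public_key).hexdigest()

def on_new_otr_key(peer: str, public_key: bytes) -> bool:
    """Return True if the key may be accepted silently; False means the
    UI must warn the user that the peer's key changed mid-session."""
    fp = fingerprint(public_key)
    known = verified_fingerprints.get(peer)
    if known is None:
        # First key seen for this peer; record it and prompt verification.
        verified_fingerprints[peer] = fp
        return True
    if fp != known:
        # Possible man-in-the-middle: the peer re-keyed with a new key.
        return False
    return True
```

The point is that the client keeps state across the session; a silent re-key is exactly what the iSEC finding showed the old code allowed.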
(The 'issue E' that it references is https://github.com/cryptocat/cryptocat/issues/606 .)
A really nice job by iSec, though.
Always be sure of what you are doing, rely on external review and never, I repeat, _never_ overstate the security of your crypto system.
That being said, you and I both know that people are using Cryptocat in dangerous situations. And having worked on both medical imaging and secure messaging systems, I have a healthy respect for the consequences of implementation failure. As such, I feel that your disregard for these consequences in broadly releasing such broken software would displease any professional review board, and I frankly doubt you'd ever attain such a license given such a history of poor professional judgment.
In short, I take my profession damn seriously, and jokers like you are why nobody trusts software.
Please make sure to read our blog post and Github discussions to see the kind of open discussion we're hoping to lead so that our software can benefit.
That being said, I suppose comments like yours are why I've been having recurring suicidal thoughts for the past two years. I don't know what else to say at this point.
While I agree that the degree of openness your team has maintained is highly desirable, repeatedly shipping bugs that adherence to industry best practices such as "don't use fixed IVs" or "always use constant-time compares" would have avoided makes it difficult to believe that your team possesses the competence you claim, and it undermines the credibility of your communication about such issues. Hence my failure to be impressed by a post which only proposes band-aids and completely fails to apologize for the lapses in judgment that led to this state of affairs.
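For readers unfamiliar with the constant-time-compare issue raised here: a naive byte-by-byte comparison of, say, a MAC tag returns as soon as a byte differs, so its running time leaks how many leading bytes of an attacker's guess were correct. A minimal Python sketch of the two approaches (illustrative only, not code from any project discussed in this thread):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Exits at the first mismatching byte: the elapsed time reveals
    # how long the matching prefix is, enabling byte-at-a-time forgery.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, removing the timing side channel.
    return hmac.compare_digest(a, b)
```

Both functions return the same answers; only their timing behavior differs, which is the whole point of the best practice.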
I don't take using this level of harshness in a public forum lightly, and I'm truly sorry to contribute to your unhappiness as a result. Please do talk to somebody, even if it's not a professional, I've found it always helps.
For example: we didn't simply "re-use fixed IVs". We know not to do that. The resulting bug was the result of a much more complicated and hard-to-spot issue with the re-keying mechanism. Understand that you might not have the full picture here.
Simply put, I refuse the assertion that Cryptocat's team has not dealt with its software development in a competent, professional, responsible and honest fashion.
I want to discuss this further with you. I want to convince you of my point of view. Please email me at firstname.lastname@example.org so I can have the opportunity to discuss with you and hopefully convince you that your perspective isn't exactly right on this.
I appreciate your willingness to continue this discussion, dropped you an email.
I urge you to talk with somebody.
That's both not true, and misleading; even comparing it to applications that have had serious published flaws, this one has vulnerabilities of a number and magnitude that distinguish it.
This audit report is another fascinating read to see all the mistakes I would probably have made as well. Secure crypto is so incredibly difficult to get right...
Are you saying that it's good news because they tested something that you weren't ever going to release in that state? That's a tougher row to hoe, unless you're going to claim that your team inevitably would have found the same set of vulnerabilities that Scott, David, Alban, and Zooko's team found.
For a typical application --- yours isn't typical for any number of reasons --- prerelease or not, the state the application is in when a pentest team gets it is, from the perspective of security, the application customers would have received.
Reading that, I still am not sure why anyone would use CryptoCat especially with things like TextSecure on the market that seem to take crypto far more seriously. The only reason I can see for that is that they have clients on more platforms, but if this is similar to the state of all of them, then what's the point?
At a previous company we had to have words with a company that performed a software audit as they failed to find two issues we'd planted to test them. (Of course, they did find several things we didn't know about.)
mandalar12 asks about "the same level of analysis on TextSecure". Since our (Least Authority's) audit and the iSEC audit are public, random people on the internet can know what that level is, without having to know or rely on Least Authority or iSEC to be non-malicious.
With enough eyes all bugs are shallow, but to whom? We need to communicate transparently to achieve this ideal.
Disclaimers: I worked on the Least Authority audit of Cryptocat; I've worked at iSEC in the past; I've met Moxie Marlinspike and Trevor Perrin and greatly respect their expertise and motivations; I also don't know if they've been infected by malicious puppetmaster aliens since I last saw them.
My own opinion is that both strong cryptographic and security engineering expertise on the part of designers and implementors, and multiple independent published security audits, are necessary to consider an app trustworthy -- but still not sufficient, given the poor state of platform security and the lack of support most current platforms (operating systems, browsers, etc.) give for isolation between apps.
Disclaimer: I worked on the Least Authority audit of Cryptocat, and in general get paid for similar auditing. I'm also designing a programming language (Noether) that is intended to facilitate security reviews.
Even with 'amazing' contributors, bugs are still there in TextSecure.
You could not have reviewed all open and closed bugs in 8 minutes ;)
Another one: Just sent an encrypted photo to someone? The photo is stored unencrypted on your phone.
Edit: group messages were also sent unencrypted, but this was fixed. https://github.com/WhisperSystems/TextSecure/issues/32
Edit2: ok, I see your point. Maybe they do crypto well as you say, but they don't seem to do software engineering well, or UI design for secure text messaging well. Sending messages unencrypted, and leaving messages unencrypted on the phone is a big fail in overall security of the app. It's meant to encrypt things, and it doesn't. That it doesn't put in tests for these things, and that it did not publish the independent security audit should cause everyone to reconsider using the TextSecure app for its stated purpose.
A crypto protocol bug is something like "TextSecure tried to implement secure file transfer, used the CTR block cipher mode, and managed to make the keystreams collide, so that passive attackers could recover file contents from encrypted streams".
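To illustrate why colliding CTR keystreams are catastrophic (a hypothetical sketch, not the actual TextSecure code): CTR mode is just plaintext XOR keystream, so if two messages are encrypted under the same keystream, XORing the two ciphertexts cancels the keystream and yields the XOR of the plaintexts. Knowing or guessing one plaintext then recovers the other.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Simulate a CTR-mode bug where two messages reuse one keystream.
keystream = os.urandom(32)
p1 = b"attack at dawn".ljust(32, b".")
p2 = b"meet at the old bridge".ljust(32, b".")
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# A passive attacker XORs the two ciphertexts: the keystream cancels,
# leaving p1 XOR p2. Knowing (or guessing) p1 then recovers p2.
leaked = xor(c1, c2)
recovered_p2 = xor(leaked, p1)
assert recovered_p2 == p2
```

This is why CTR nonce/counter uniqueness is a hard requirement, and why a keystream collision is a design-level flaw rather than a random implementation mistake.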
The general point you made, that all applications have bugs (and thus security bugs) is valid. I'm drawing a line that you're not drawing, between design flaws in cryptosystems and random iOS/Android mistakes.
It's also true that until recently we didn't have an encrypted group chat protocol, but at no point in the past did TextSecure ever indicate that group chat was encrypted. Just the opposite.
It's important to realize that TextSecure is simultaneously a standard unencrypted SMS app as well as an encrypted chat app.
Sorry to bother, but is there any article/blog post/something to read about that?
As XMPP MUCs don't seem to have E2E encryption, mpOTR seems to be in infancy, and I'm unaware of anything else, this sounds really interesting.
I will agree with you about the UI design until the revamp in the latest release, though.
We have gotten very, very good at taking low-level devs and turning them into terrifying killing machines, and if you don't mind having all your flesh removed and your skeletal musculature replaced by pistons and servomotors, we'd be happy to do the same to you.
Ah, but have you pen-tested the pistons and servomotors to make sure they're secure against attacks?
Have you developed them under any applicable safety standards?
Otherwise I'd prefer to stay a purely organic organism.
If anyone has any idea on how to help a hapless team of security researchers manage many thousands of people looking to get through the crypto challenges, we'd be t-h-r-i-l-l-e-d.
We were able to keep up last summer, but then Microcorruption happened, we got into a hole, and we're only slowly digging ourselves out of it.
Alex, Sean, and I are turning the challenges into a book, which we're going to release "choose-your-price" with all funds directly going to a charity (I like Watsi, but who knows); that book will also include Set 8, which is all ECC.
I'm hoping we'll be done with that by August.
Is that still part of the backlog?
People could sign up and post their solutions, and others would be able to validate entries of levels below them (so someone on level 3 could validate a level 2 submission). Or hell, the webapp may even be able to validate some of it (obviously not entries where you paste code, but things like "Decrypt !4321hj4123$@!#$" could be automated).
This way, you could submit solutions via your login (attached to your email) and people validating the entries wouldn't be able to see the email address, just the answers, and could comment on the submission.
Sorry, this was a bit stream of consciousness. If this was particularly barebones, you might be able to slam it out in not too much time. If you don't mind PHP or Common Lisp in the backend, I might even be able to help out a bit.
Two things complicate managing the challenges:
* We have to actually read the code people send us (if you get things wrong, there's a pretty good chance we'll catch it). We try hard not to move people forward a level until they've gotten the previous challenges right.
* We don't dictate any particular format for submissions. Some people paste code into their email messages; some people attach lots of little files; some people attach archives. Some languages let you stick all that code in 1 file; others, like Java and C#, have complicated directory hierarchies.
Does anyone know how much these audits typically cost, if you're not being subsidized?
Compliance-only is usually cheaper; you can buy rubber stamps for <$10k.
Places like that are just running a scanner against your website. In the case of an app like this (which was an iOS app), you might find a cheap place to run it through a source code analyzer (either through a cloud-hosted service like Veracode, or by running an app like AppScan).
Assuming you wanted to hire a "respectable" firm to actually perform a real application assessment, I'd say it's closer to 30-50k.
If you look at the report, it was scoped at 3 man-weeks of testing (which in this case looks like it was 3 engineers for 1 week). Even if you don't include any additional overhead (like hours for report generation, or project management hours), you're looking at ~24k just for the engineers' effort (if they just priced it T&M, which hopefully they don't).
To be fair, this is a pretty exotic application though, compared to what a lot of other people might be working on. The scope for a project to test a more "normal" app would be less. Maybe even half that.
You can also get a better deal if 1) you're open source, and the audit becomes part of someone's portfolio 2) you provide really clear documentation, security model, etc. to make the process more efficient 3) your app is already well-architected so the security-critical part is small, and you only audit that part.
There's probably a 100x bigger pool of people capable of doing "IT audits" and "hosting environment audits" well vs. appsec for webapp or mobile app (or especially desktop app).
CORRECTION: the 3rd party apps were part of it, along with Apple's own logging, when reported in the news.
Edit: OK, apologies to iSec for my mistake. I thought he was still affiliated with them. In any case, it looks like a top-notch report, so it would have been a shame to detract from that accomplishment.
I agree that the title is bad, and (belatedly) asked 'dang to revert it.
Many people on HN seem to be reading the review without actually looking at Nadim's response on the Cryptocat blog - which I urge everyone to read first before commenting.
As far as I understand, the username for Nadim (Kaeporan) was also blocked on HN last night, so he probably isn't able to continue responding to the comments here.
Your former company did an audit for a customer, then you posted it to HN calling it "Brutal"? Really?
I also do wish that the original post had pointed at CryptoCat's blog post - aside from CryptoCat's context about problems and resolutions, people should also be aware of the separate report by Zooko's team.
Everyone's life has a purpose. Consider the possibility that yours is to serve as a warning to others.
Kudos to Nadim and OTF for releasing the info and starting these discussions, as inevitably arduous as they are.