Hacker News
CryptoCat iOS Application Penetration Test [pdf] (isecpartners.github.io)
168 points by secalex on April 2, 2014 | 131 comments

Hi, I'm the lead developer for Cryptocat. I strongly urge you all to read our blog post regarding this audit: https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...

This audit document alone does not give enough context. This audit was commissioned by us and concerns a pre-release version of Cryptocat for iPhone. Many of the bugs it found stem from the fact that it was reviewing a prototype with debugging features (such as NSLog) turned on. While this audit definitely does find some vulnerabilities and room for improvement, none of the critical bugs in this audit ever made it to Cryptocat for iPhone's release.

It's very unfortunate that this audit is being taken out of context like this and used to attack our effort. I'd appreciate it if you could upvote this comment and help me contextualize this audit. Again, please read the blog post for context (and also for the results of another audit we commissioned in parallel): https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...

The blog post's last section ("On the Significance of Audits") discusses why Cryptocat has seen more audits published about it than other encryption projects have. Please, dare to discern. Read what we're doing to improve the security of accessible encryption and our reasoning for publishing these audits. I'd be grateful if you took the time to read about what we're doing, and I'm more than happy to discuss it with you and answer your questions.

Hi, read the blog post. According to both the blog post and iSEC, you had a pathetically easy man-in-the-middle attack against all of your code, including deployed code in real-world use, not just the "buggy" iOS client. Care to explain how that can be taken "out of context"? Even if the rest of the audit were a complete lie, that would still be "brutal."


"CryptoCat's OTR implementation on all platforms allows a chat peer to change their OTR key during a chat session without user notification. An attacker performing a man-in-the-middle attack against the client's XMPP or HTTPS stream can inject their own OTR key in the discussion after a user has authenticated their peer's OTR fingerprint. This permits the attacker to decrypt all messages that follow, and no user would have reason to suspect the compromise. Group multi-party discussions do not seem to suffer from the same vulnerability." (emphasis mine)

Your response in the blog:

"... This is a problem that can be exploited in some real-world scenarios and needs to be addressed with appropriate authentication warnings. Thanks to these audits, this issue was caught in Cryptocat for iPhone before it was released. Therefore, unlike the desktop version, Cryptocat for iPhone was never affected by this issue."

You're doing crypto in the browser, while claiming on your home page:

> Everything is encrypted before it leaves your computer. Even the Cryptocat network itself can't read your messages.

You're seriously misleading users regarding the security of crypto -- the web is an environment that has ZERO controls on code updates. Given this remarkably self-serving and ill-conceived stance, it's difficult to imagine how your crypto could ever be considered trustworthy.

Web-based distribution simply is not, in its current form, a viable model for distributing code that must survive the compromise of the original distributing party. None of the technical (off-server code signing) or social (review of update notices) tools available in non-web distribution methods are available.

The damage that you and other projects cause to public awareness and comprehension of crypto issues is potentially staggering.

When you factor in the real risks people take when relying on crypto to communicate in hostile situations, you're doing more than just making nerds grumpy -- you have the potential to significantly harm people's lives.

Crypto is not an amateur's game. Some things are too important to be left to experimentation by unqualified engineers.

I don't think CryptoCat has been distributed as a traditional web app for at least a year (probably more). The browser code is distributed as an extension, which does not have the properties you describe.

edit for sub-comment: A traditional web app updates every time you hit the URL. In a browser extension your code is not necessarily tied to any remote origin, including the Chrome/Firefox stores. It is a user's choice to automatically receive updates from the vendor and this is the same choice that you make if you use apt-get vs manual download/checksum/sig check/audit.

> The browser code is distributed as an extension, which does not have the properties you describe.

Actually, browser extensions have the exact same properties except for being code signed. That's not enough: http://arstechnica.com/security/2014/01/malware-vendors-buy-...

> I don't think CryptoCat has been distributed as a traditional web app for at least a year (probably more).

That they ever shipped in-browser crypto demonstrates that they shouldn't be shipping crypto.

Malicious people buying extensions and loading them with malware is about as likely as malicious people buying the vendor of whatever tool you are using on the desktop once its auto-updating feature is good enough (the desktop case would actually be slightly worse, because with the extension you at least get to read what's actually executed).

What you are saying is that you don't trust any kind of application to do crypto unless you have previously audited it. That's a reasonable stance to take but it's irrelevant whether you distrust a third-party browser extension or a third-party native app.

The main argument against crypto in JS extensions is that getting crypto correct with regard to timing-based side-channel attacks is very hard, if not impossible.

But if you are dealing with a specific browser in an extension context, this might be somewhat mitigated, which brings us back to the trust issue. And that issue, again, IMHO does not depend on the platform you use.

> ... once its auto-updating feature is good enough ...

This is why it's a very bad idea to implement silent automatic updates, and why they're the wrong thing to copy from the web.

Was it commissioned by you? The audit I saw had the Open Technology Fund's logo on it. OTF is a US Government effort driven by Radio Free Asia and the Broadcasting Board of Governors.

OTF, again (smartly) using US taxpayer dollars, funds audits of a variety of privacy technologies.

For instance, they also funded a good-sized chunk of the Truecrypt audit.

Hi Thomas,

OTF provides a form that projects like ours can fill out to commission this type of audit. So basically, we asked OTF to commission it for us and they accepted. TextSecure, a great encryption app that I've seen you recommend, also approached OTF and obtained an audit from iSEC via this same process. However, TextSecure decided not to publish their audit results.

You can read about OTF's reaction to these audits here: https://www.opentechfund.org/article/bringing-openness-secur...

I feel very comfortable recommending TextSecure. TextSecure might be the only secure messaging app I feel that way about.

For whatever it's worth: I have no commercial relationship with the TextSecure team, have never worked with them, have never been paid to audit their code, and am only faintly acquainted with Moxie (I've talked to him in person enough to know that he's extremely pleasant and surprisingly soft-spoken, but not more than that).

Trevor Perrin, who worked with the TextSecure project to help design their cryptography, is someone I know a little bit better; it would be safe to say that Trevor Perrin is the only reason I know anything about cryptography, and, given a few bar napkins, I can outline a pretty convincing story that he is the root of basically every TLS vulnerability discovered after Marsh Ray found the resumption bug.

TextSecure is a great project, and if anyone was debating between it and some other cryptographic messaging application, I hope I've made that decision a little easier.

I agree! TextSecure is an excellent project. We're actually very lucky to be able to bring in Trevor Perrin this month to contribute to Cryptocat.

It's really great seeing Trevor contributing on the IETF [TLS] working group mailing list.

To clarify, these audits aren't a choice, they are a contractual obligation as part of receiving grant funding from OTF. Projects are allowed some flexibility in terms of when they schedule it, but it has to be done. I think it's a great idea.

Also, during the period of time when TextSecure received audits from firms as part of an OTF grant, publishing the results was not an option that was contractually available to us. It's not something we "decided."

Since I am familiar with the process, I know for a fact that OTF asks every project whether they'd like to publish their audits, including TextSecure.

That being said, what is stopping you from publishing the audits today?

They do that now. Dan's OTF post suggests that it wasn't always that straightforward, because the auditors had a proprietary interest in their reports.

Hmm. Cryptocat was actually the first ever OTF project. I believe they've always asked for the publication of audits.

What I'm curious about is, why don't other projects such as TextSecure publish their audits as well? I'd certainly appreciate Moxie answering this question.

The OTF blog post certainly makes good points in favor of this. I also personally believe that this reticence to publish audits damages the opportunity for honest evaluation of encryption software and the establishment of realistic perceptions of it. It also misleads users.

>What I'm curious about is, why don't other projects such as TextSecure publish their audits as well?

It's embarrassing, duh.

For who, TextSecure or the auditors? Both are possibilities, one slightly more likely than the other. Auditors hate having reports published with no major findings.

Why isn't Moxie replying? Publishing an audit, with or without vulnerabilities, surely is beneficial to TextSecure. I don't understand their reticence to publish audits. We know OTF has commissioned at least two audits for them, but not a word has been heard about them.

In fact, OTF has actually complained about their projects choosing not to publish audits: https://www.opentechfund.org/article/bringing-openness-secur...

This is the sixth time you've "asked" in this thread about TextSecure's audit. Cryptocat has literally never implemented a crypto feature of any sort, from random number generation all the way through user authentication, without some terrible vulnerability. TextSecure, on the other hand, is the subject of a total of zero published crypto vulnerabilities.

Whatever Moxie's reasons for not having published their audit, I'm sure they're valid. Either way, no amount of innuendo about TextSecure is going to change the ground truth about your own project.

There might be no person on the Internet more poorly positioned to cast aspersions on other people's projects than you. Please stop.

To be clear, I'm not trying to cast aspersions. As I've already stated, TextSecure is a great project that I strongly recommend.

I'm trying to have a serious discussion regarding transparency of audits. I feel your reaction is overly aggressive and not genuinely constructive.

I can't tell if you actually don't know who commissioned it or if this is your way of suggesting that the parent comment is a lie. It seems like information you would have access to, considering your connection with iSEC, no? (I'm not trying to stir up shit, just genuinely curious.)

Whoah! I'm not saying anyone is being dishonest. I know approximately as much as anyone who can read the linked PDF knows, modulo that I also know a bit about how OTF works because I worked with Matthew Green on coordinating the Truecrypt audit.

I was asking for clarification, but my writing style is dry and blunt, so I'm not surprised if I managed to convey something different. I apologize in advance if so.

More than anything else, I wrote the comment as a (hopefully mild) F-U to the sentiment that the USG is hellbent on destroying privacy on the Internet. Big parts of it are, sure, but there are good people working inside of it too. :)

I do not work for iSEC or Matasano, and I had the same question tptacek posted. The lead developer says that audit was commissioned by "us" and yet page 7 of the audit states:

  The Open Technology Fund (OTF) engaged iSEC Partners to perform a source-code
  assisted security review of the CryptoCat iOS application.

Another option is that OTF approached CryptoCat, from where their options would be:

a) don't do an audit - and the public would ask what they are hiding

b) agree to an audit being done, meaning you 'commissioned' it.

So it is just as important to know who approached who in this situation.

One answer, given downthread, might be that the audit was a requirement attached to grant funding from OTF.

It is nice that OTF is funding audits and releasing the reports. It is too bad we cannot find out more info on the mythic SJCL audit.

I agree; the work Dan Meredith at OTF is doing is truly impressive.

They (Cryptocat) commissioned it, or were at least involved. https://news.ycombinator.com/item?id=7519431

> It's very unfortunate that this audit is being taken out of context like this and used to attack our effort.

No, what's unfortunate is your continued and repeated claims that cryptocat should be used today by people who expect privacy, when it is not yet safe or secure for private messaging.

You're going to get people killed. You don't seem to care.

The audit doesn't even seem terribly brutal. Just thorough. I wish every bug report were like this ;)

I don't know if this is news to some people, or what, but it is usual, when having a product professionally tested, to have the testers test it in earnest, as if it were ready for release. They won't avoid telling you about some obvious hole just because it was obviously accidentally left in and would be a 10-second fix (or whatever - that is just a random example).

If you've never been through this before then, presented with a big pile of bugs, it can seem like the project has been proven a total fuckup, and its authors obvious failures, and so on. But in fact, once you've got this concrete list of all the ways in which your program has so far proven itself hopelessly unsuitable for release, it's actually quite surprising how quickly they all get fixed...

(Edit - This is more of a general comment than anything specifically about cryptocat)


We commissioned this audit in late December and iSec began working on it in January. They audited a pre-release prototype that we provided. This is noted in the audit document, but it's hard to spot unfortunately.

The reason we commissioned this audit was to make sure our prototype was audited before release on the App Store. We're very happy to have benefited from this audit, but linking to this PDF alone de-contextualizes the effort and makes it seem like an audit of the production version of Cryptocat for iPhone, whereas the version we provided was an early prototype. The audit did find some issues with the (already-released) desktop version and server configuration, and those were also fixed and documented in our blog post.

I sincerely appreciate you taking the time to read our blog post on the matter and thank you for your understanding.

How was this a "pre-release prototype" audit of Cryptocat if the app for iOS was rejected from the Apple App Store in December... and the audit took place after that, in mid-January? It seems dubious to say it was all about some extra debug logging when there are serious flaws here, found weeks after it was almost approved on the Apple store.


We submitted a pre-emptive build just to obtain approval from Apple. We were going to wait to update it with the audited build before actually releasing it (Apple lets you schedule releases in advance.) In retrospect, it was lucky we got Apple's rejection early on, so we were able to deal with it better.

Page 7/35 of the report:

Addendum (3/15/14): The iOS application was in development code that at time of testing was available only in a preproduction form on GitHub and not distributed via the AppStore. The CryptoCat team had time to review the vulnerabilities prior to publication in the AppStore and claims to have addressed them; however, iSEC has not validated any fixes and cannot make any claims to the current status of any vulnerabilities.

Rest assured of my utmost respect and support, Nadim.

Upvoted your comment, flagged OP post. Don't think I've flagged anything on HN before, but I think this warrants that.

This is most alarming.

CryptoCat's OTR implementation on all platforms allows a chat peer to change their OTR key during a chat session without user notification. An attacker performing a man-in-the-middle attack against the client's XMPP or HTTPS stream can inject their own OTR key in the discussion after a user has authenticated their peer's OTR fingerprint. This permits the attacker to decrypt all messages that follow, and no user would have reason to suspect the compromise.

Fixes and improvements to this, and more, are covered in our blog post. I strongly urge you to read it. This audit alone doesn't give enough context. https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...

What's your fix for the man-in-the-middle attack on all platforms (including deployed ones) the audit identifies and your blog acknowledges?

In your blog post there seems to be little context that can excuse such a mistake, and nothing that explains how you fix it. Am I correct in reading your blog post that right now there isn't a fix? I.e., it's an open attack, assuming someone compromises a CA or a Cryptocat server?

Isn't this a rather big issue, since the whole point of Cryptocat is to protect against that kind of attack? If you just wanted security against eavesdroppers (i.e., you trusted the chat server), XMPP over TLS would work fine.

> In your blog post there seems to be little context that can excuse such a mistake

This is a recurring theme with cryptocat. Stay away for 5-10 years until they get their act together.

Hi there,

The fix for the MITM bug is to offer proper notice via the user interface when a user re-keys with a different public key. There's a demonstration of the user interface element in the blog post.
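To make the shape of that check concrete, here is a minimal sketch in Python of a re-key guard that remembers the fingerprint the user verified and flags any later key change for a UI warning. All names here are illustrative assumptions, not Cryptocat's actual code.

```python
# Sketch of the re-key warning described above: remember the fingerprint
# the user authenticated, and refuse to silently accept a different key.
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short hex fingerprint of a peer's public key (illustrative)."""
    return hashlib.sha256(public_key).hexdigest()[:40]

class PeerSession:
    def __init__(self):
        self.verified_fp = None  # fingerprint the user has authenticated

    def on_rekey(self, new_public_key: bytes) -> bool:
        """Return True if the new key may be accepted silently.
        A changed fingerprint must surface a "key changed" warning
        instead of being accepted without notice."""
        fp = fingerprint(new_public_key)
        if self.verified_fp is None:
            self.verified_fp = fp  # first key: user still has to verify it
            return True
        return fp == self.verified_fp  # False -> show the warning in the UI

session = PeerSession()
session.on_rekey(b"alice-key-1")          # initial key
assert session.on_rekey(b"alice-key-1")   # same key: no warning
assert not session.on_rekey(b"mitm-key")  # changed key: warn the user
```

The vulnerability was precisely the absence of that last branch: a re-key with a new public key was accepted without any notice to the user.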

This issue (or one with very similar effect) was also found by the Least Authority audit: https://github.com/cryptocat/cryptocat/issues/607

(The 'issue E' that it references is https://github.com/cryptocat/cryptocat/issues/606 .)

Actually I strongly suggest reading these in conjunction with iSec's issues 12 through 16, because each team spotted some details that the other missed.

Are you sure that's not because the different teams had different scopes? The iSEC audit was specifically tied to the iOS application.

The scopes had a great deal of overlap; although we (Least Authority) didn't consider the iOS client at all, the rest of iSec's audit has essentially the same scope as ours. The point I was trying to highlight is how easy it is to miss things, and therefore that having independent concurrent audits is actually a really good idea. It would probably be even better if the teams worked mostly independently but were able to exchange draft versions of their reports (before the mitigations necessary for a public release).

When I saw one of the main CryptoCat developers present in 2012, I came away with the impression that nobody on the core team understood crypto, security, or software engineering. This audit is another rock on the mountain of evidence I've seen supporting this impression in the following years.

A really nice job by iSec, though.

It is also important to always remember that a crypto app isn't like other apps. If the protesters in Turkey rely on shoddy crypto today, it might cost them their lives a few months from now. I sincerely hope it doesn't come to that but it is a realistic example.

Always be sure of what you are doing, rely on external review and never, I repeat, _never_ overstate the security of your crypto system.

This point cannot be emphasized enough. In the extreme case, which Cryptocat marketing materials have employed, secure communications software is life safety critical on a level similar to medical or aviation software. As such, the admit-your-mistakes-and-fix-them-later model of development isn't agile, open, or any of those buzzwords. It's a way to get people killed.

Cryptocat has always provided ample warnings that no software can ever be trusted with your life. These warnings appear every time you launch Cryptocat, on the website and in various guides and blog posts.

And I agree, that's a bare minimum warning for all such software. I appreciate your efforts to make strong crypto more accessible to the general public.

That being said, you and I both know that people are using Cryptocat in dangerous situations. And having worked on both medical imaging and secure messaging systems, I have a healthy respect for the consequences of implementation failure. As such, I feel that your disregard for these consequences in broadly releasing such broken software would displease any professional review board, and I frankly doubt you'd ever attain such a license given such a history of poor professional judgment.

In short, I take my profession damn seriously, and jokers like you are why nobody trusts software.

Perhaps we shouldn't call people names?

It's important to note that this audit was commissioned to evaluate a prototype build before release. It was expected to find bugs, and all bugs were fixed before release. I believe I take my job very seriously when I commission such audits on a bi-annual basis and transparently discuss the results. Independent individuals who find bugs (such as "Decryptocat") are also listened to and rewarded for their effort. I believe that I and my team have been competent, honest and hard-working. If all encryption projects were as transparent as us, you would realize that these kinds of issues happen everywhere.

Please make sure to read our blog post and Github discussions to see the kind of open discussion we're hoping to lead so that our software can benefit.

That being said, I suppose comments like yours are why I've been having recurring suicidal thoughts for the past two years. I don't know what else to say at this point.

I read the blog post reacting to this batch of audit results quite carefully, in point of fact. In general, when I read vendor responses to such devastating findings, I'm looking for a concrete plan to improve the threat modeling and development practices deficiencies which are inevitably the root cause of the class of issues uncovered by the iSec and Least Authority audits. Without such changes, saying that you're going to keep getting audited is precisely equivalent to saying that you're going to keep writing security bugs and hope someone finds them before the actual red team owns you.

While I agree that the degree of openness your team has maintained is highly desirable, repeatedly shipping bugs that adherence to industry best practices such as "don't use fixed IVs" or "always use constant-time compares" would have avoided makes it difficult to believe that your team possesses the competence you claim, and undermines the credibility of your communication about such issues. Thus my failure to be impressed by a post which only proposes band-aids and completely fails to apologize for the lapses in judgment which led to this state of affairs.
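For anyone unfamiliar with the "constant-time compare" practice mentioned above, here is a short illustrative sketch using Python's standard library (the message and key are made-up; the point is the comparison, not the MAC itself):

```python
# Why MAC verification should use a constant-time compare.
# A naive == can short-circuit at the first differing byte, leaking
# (via timing) how much of a forged MAC is correct; hmac.compare_digest
# runs in time independent of where the inputs differ.
import hashlib
import hmac

key = b"shared-secret"          # made-up key for illustration
msg = b"attack at dawn"
mac = hmac.new(key, msg, hashlib.sha256).digest()

def verify_naive(received_mac: bytes) -> bool:
    # BAD: comparison may stop at the first mismatched byte.
    return received_mac == mac

def verify_constant_time(received_mac: bytes) -> bool:
    # GOOD: runtime does not depend on the position of the mismatch.
    return hmac.compare_digest(received_mac, mac)

assert verify_constant_time(mac)
assert not verify_constant_time(b"\x00" * 32)
```

The fix is a one-line substitution, which is why shipping the naive version tends to be read as a process failure rather than a hard problem.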

I don't take using this level of harshness in a public forum lightly, and I'm truly sorry to contribute to your unhappiness as a result. Please do talk to somebody, even if it's not a professional, I've found it always helps.

I disagree; I don't think your summary is accurate. This is an audit of a pre-release prototype. All the bugs were fixed before release, and our blog post at https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp... does not discuss mere band-aids. It discusses, at length, real solutions to complex problems that many encryption apps face. It resolves pitfalls that even companies like Apple commit on a much wider scale and on a much more dangerous level.

For example, we didn't simply "re-use fixed IVs". We know not to do that. The resulting bug was the product of a much more complicated and hard-to-spot issue with the re-keying mechanism. Understand that you might not have the full picture here.

Simply put, I reject the assertion that Cryptocat's team has not dealt with its software development in a competent, professional, responsible and honest fashion.

I want to discuss this further with you. I want to convince you of my point of view. Please email me at nadim@nadim.cc so I can have the opportunity to talk with you and hopefully convince you that your perspective isn't exactly right on this.

I made no criticisms of how you responded to individual issues raised by the audit, and in fact it's encouraging to see many of the long-standing contact authorization issues finally being addressed as well as what I would generally consider an acceptable approach to handling individual security issues. But I don't think either of these points addresses my concerns regarding security conscious development practices.

I appreciate your willingness to continue this discussion, dropped you an email.

You keep saying "I commission the audits". Isn't OTF the one paying for these audits? Are you taking OTF grant money? If so, aren't you required to have the audits done?

I was the person who wrote to OTF asking them to fund our audit. I have no idea if they require it — I'm always the one to initiate the process.

> I've been having recurring suicidal thoughts for the past two years.

I urge you to talk with somebody.

You lampoon yourself with this extremist attitude.

Go read http://tobtu.com/decryptocat.php and earlier sources before deciding he's being extremist.

I have, and I still believe he's being extremist. No one is in this space unscathed. I have not seen the same level of vitriol and hate spit at any other product as at Cryptocat, despite it being far from the only entrant.

> No one is in this space unscathed

That's both not true, and misleading; even comparing it to applications that have had serious published flaws, this one has vulnerabilities of a number and magnitude that distinguish it.

I wasn't going to post this, but I had the same feeling at a conference. Also, if I remember correctly, the first version that got audited turned out to be extremely insecure as well, with their own custom crypto protocols? I haven't recommended CryptoCat to anyone since, and still wouldn't.

This audit report is another fascinating read to see all the mistakes I would probably have made as well. Secure crypto is so incredibly difficult to get right...

It's important to note that this audit concerned a pre-release, debugging version of Cryptocat for iPhone. The audit document alone doesn't give enough context; I strongly urge reading our blog post: https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...

Are you saying that it's good news because it means that no actual users were exposed to the flaws? To the extent that those flaws apply only to the iOS version, I agree: that's good news.

Are you saying that it's good news because they tested something that you weren't ever going to release in that state? That's a tougher row to hoe, unless you're going to claim that your team inevitably would have found the same set of vulnerabilities that Scott, David, Alban, and Zooko's team found.

For a typical application --- yours isn't typical for any number of reasons --- prerelease or not, the state the application is in when a pentest team gets it is, from the perspective of security, the application customers would have received.

Speaking of the vulnerabilities that our team found: here is our blog post about our audit of Cryptocat, which was also announced today, with links to our report and the GitHub issue tickets that we opened: https://leastauthority.com/blog/

There is great stuff in here, including the CTR nonce reuse bug that Nadim wrote about earlier. Ouch.
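For readers who haven't seen why nonce reuse in a CTR-style cipher is so bad: two messages encrypted under the same key and nonce share a keystream, so XORing the two ciphertexts cancels the keystream and hands the attacker the XOR of the plaintexts. A toy stdlib-only demonstration (the keystream construction here is made up for illustration and is not Cryptocat's actual cipher):

```python
# Toy demonstration of the CTR nonce-reuse failure mode. The keystream is
# a made-up hash-based construction, NOT a real cipher -- the failure it
# shows, however, applies to any CTR-mode scheme that reuses a nonce.
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash(key || nonce || counter) per block."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return xor(plaintext, ctr_keystream(key, nonce, len(plaintext)))

key, nonce = b"k" * 16, b"fixed-nonce"     # the nonce is reused: the bug
p1, p2 = b"attack at dawn!", b"retreat at ten!"
c1, c2 = encrypt(key, nonce, p1), encrypt(key, nonce, p2)

# The attacker never touches the key, yet recovers p1 XOR p2 directly:
assert xor(c1, c2) == xor(p1, p2)
# And if either plaintext is known or guessed, the other falls out:
assert xor(xor(c1, c2), p1) == p2
```

This is why the audits treat nonce/IV management as a hard requirement rather than a stylistic preference.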

Thanks! I'm very proud of our work in this audit.

Also of interest is their blog post about how they plan to handle the issues described in this report: https://blog.crypto.cat/2014/04/recent-audits-and-coming-imp...

Reading that, I still am not sure why anyone would use CryptoCat especially with things like TextSecure on the market that seem to take crypto far more seriously. The only reason I can see for that is that they have clients on more platforms, but if this is similar to the state of all of them, then what's the point?

Thanks for linking to the blog post. This audit concerns a pre-release version of Cryptocat for iPhone. Many of the bugs were due to debugging code and were fixed before release.

Which of the bugs were due to debugging code?

Findings iSEC-RFACC0114-1 and iSEC-RFACC0114-3. (2 out of the 17 vulnerabilities found by iSec, of varying severity.)

Why use donated money to pay for an audit of software that has known bugs and isn't ready yet? That's wasteful. The point of an audit is to find bugs you don't already know about.

It can be a useful technique for testing the quality of the audit.

At a previous company we had to have words with a company that performed a software audit as they failed to find two issues we'd planted to test them. (Of course, they did find several things we didn't know about.)

Has anyone done the same level of analysis on TextSecure? I feel like their model/seriousness is better, but there might be flaws in the implementation (or protocol), and being audited might highlight some of them.

I don't believe they've had an independent security audit. I think their team however is comprised of more respected cryptographers like Moxie Marlinspike, who introduced the concept of SSL stripping, one of the issues that was found in the CryptoCat app. I mean no disrespect to CryptoCat, and more eyes can always find something someone overlooked, but I think the Open Whisper Systems (TextSecure) team is stronger and using better cryptographic techniques.

TextSecure also has design contributions from Trevor Perrin, who is also amazing.

natdempk and tptacek have both asserted that TextSecure is trustworthy and solid by dint of being implemented and designed by reliable experts. This pretty much boils down to argument from authority.

mandalar12 asks about "the same level of analysis on TextSecure". Since our (Least Authority's) audit and the iSEC audit are public, random people on the internet can know what that level is, without having to know or rely on Least Authority or iSEC to be non-malicious.

With enough eyes all bugs are shallow, but to whom? We need to communicate transparently to achieve this ideal.

Disclaimers: I worked on the Least Authority audit of Cryptocat; I've worked at iSEC in the past; I've met Moxie Marlinspike and Trevor Perrin and greatly respect their expertise and motivations; I also don't know if they've been infected by malicious puppetmaster aliens since I last saw them.

I don't actually see any assertions on this thread by natdempk or tptacek claiming that TextSecure is "trustworthy" and/or "solid". Did I miss something?

My own opinion is that both strong cryptographic and security engineering expertise on the part of designers and implementors, and multiple independent published security audits, are necessary to consider an app trustworthy -- but still not sufficient, given the poor state of platform security and the lack of support most current platforms (operating systems, browsers, etc.) give for isolation between apps.

Disclaimer: I worked on the Least Authority audit of Cryptocat, and in general get paid for similar auditing. I'm also designing a programming language (Noether) that is intended to facilitate security reviews.

No need for an audit of TextSecure... just go to their GitHub issues page. Plenty of open bugs and regressions, some of them as serious as some of the ones in the latest CryptoCat app audits. For example, one issue where some sent messages are not being encrypted.

Even with 'amazing' contributors, bugs are still there in TextSecure.

I just reviewed all open and closed bugs in TextSecure's Issues page and didn't see a single crypto protocol bug. Admittedly, I looked quickly and casually. Could you point us to one?

"I just reviewed all open and closed bugs in TextSecure's Issues page and didn't see a single crypto protocol bug. Admittedly, I looked quickly and casually. Could you point us to one?"

You could not have reviewed all open and closed bugs in 8 minutes ;)

Issue 1073: https://github.com/WhisperSystems/TextSecure/issues/1073

Another one: Just sent an encrypted photo to someone? The photo is stored unencrypted on your phone.

Edit: group messages were also sent unencrypted, but this was fixed. https://github.com/WhisperSystems/TextSecure/issues/32

Edit2: ok, I see your point. Maybe they do crypto well, as you say, but they don't seem to do software engineering well, or UI design for secure text messaging well. Sending messages unencrypted, and leaving messages unencrypted on the phone, is a big failure in the overall security of the app. It's meant to encrypt things, and it doesn't. That they don't have tests for these things, and that they did not publish their independent security audit, should cause everyone to reconsider using the TextSecure app for its stated purpose.

I read that issue especially carefully since it was on the front page, and it doesn't look like a crypto protocol bug. I'm actually not sure what that bug is.

A crypto protocol bug is something like "TextSecure tried to implement secure file transfer, used the CTR block cipher mode, and managed to make the keystreams collide, so that passive attackers could recover file contents from encrypted streams".
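
To make that hypothetical concrete, here is a minimal stdlib-only Python sketch (the keystream is simulated with random bytes rather than real AES-CTR, and both plaintexts are invented) of why colliding CTR keystreams are catastrophic: XORing the two ciphertexts cancels the keystream entirely.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical setup: CTR mode with a repeated key/nonce means both
# messages are encrypted under the exact same keystream.
keystream = os.urandom(32)

p1 = b"known plaintext: PNG file header"  # e.g. a predictable header
p2 = b"secret message nobody should see"

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# Passive attacker: XOR the two ciphertexts. The keystream cancels,
# leaving p1 XOR p2; knowing (or guessing) p1 then reveals p2.
recovered = xor(xor(c1, c2), p1)
assert recovered == p2
```

No key material is ever recovered; the plaintext falls out anyway, which is what makes this a protocol-level flaw rather than an implementation nit.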

The general point you made, that all applications have bugs (and thus security bugs) is valid. I'm drawing a line that you're not drawing, between design flaws in cryptosystems and random iOS/Android mistakes.

These are bugs, but I'm not sure they are what you think they are. I would be worried if TextSecure indicates a message is going to be sent securely, but it is sent insecurely. This bug, however, is an edge case where TextSecure indicates that it is going to send a message insecurely, and that's what it does.

It's also true that until recently we didn't have an encrypted group chat protocol, but at no point in the past did TextSecure ever indicate that group chat was encrypted. Just the opposite.

It's important to realize that TextSecure is simultaneously a standard unencrypted SMS app as well as an encrypted chat app.

> encrypted group chat protocol

Sorry to bother, but is there any article/blog post/something to read about that?

As XMPP MUCs don't seem to have E2E encryption, mpOTR seems to be in infancy, and I'm unaware of anything else, this sounds really interesting.

I'd also add as a TextSecure user that I assumed both of those scenarios were plausibly insecure until the recent release with secure group messaging. They're pretty open about the weak points in the application generally speaking.

I will agree with you about the UI design until the revamp in the latest release, though.

iSEC also audited TextSecure, but TextSecure chose not to publish the audit.

They didn't choose not to publish.


And? What does this have to do with the parent comment?

The grandparent asked whether they have had a security audit, and parent said "I don't believe they've had an independent security audit." Seems like a straightforward reply to those.

Oh. Duh. You're right. I was skimming my threads and lost the context. Thanks for pointing this out.

There is a "many ways to skin a cat" joke here somewhere. It is actually terrible: the HMAC timing attack takes around 3 minutes of Google searching to avoid and is basic public-domain knowledge. The others are much worse.
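
For reference, the standard fix is a constant-time comparison. A hedged sketch in Python (the key and messages are made up), using the stdlib's `hmac.compare_digest`:

```python
import hashlib
import hmac

SECRET_KEY = b"hypothetical-server-side-key"

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify_leaky(message: bytes, tag: bytes) -> bool:
    # BAD: == on bytes short-circuits at the first differing byte, so
    # response time leaks how long a correct prefix has been guessed,
    # letting an attacker forge the tag one byte at a time.
    return sign(message) == tag

def verify(message: bytes, tag: bytes) -> bool:
    # GOOD: compare_digest runs in time independent of where the
    # inputs differ, defeating the timing oracle.
    return hmac.compare_digest(sign(message), tag)
```

`hmac.compare_digest` has been in the Python standard library since 3.3; most platforms ship an equivalent (e.g. `CRYPTO_memcmp` in OpenSSL).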

I wish I knew how to find my way into security as a hobby. Such a fun topic.

Why find your way in as a hobby? Are you a professional developer now? Do you like low-level code? Are you OK with jumping directly into the deep end of the pool and maybe drowning a little bit? Why not reach out and talk to us about working on a professional security team?

We have gotten very, very good at taking low-level devs and turning them into terrifying killing machines, and if you don't mind having all your flesh removed and your skeletal musculature replaced by pistons and servomotors, we'd be happy to do the same to you.


...if you don't mind having all your flesh removed and your skeletal musculature replaced by pistons and servomotors...

Ah, but have you pen-tested the pistons and servomotors to make sure they're secure against attacks?

We only care about their ability to inflict harm, not their ability to withstand it. :)

I'd be more interested in the safety of those electromechanical components.

Have you developed them under any applicable safety standards?

Otherwise I'd prefer to stay a purely organic organism.

And try their crypto challenges :)

Coding is a purely ancillary function to my day job; I am not a developer.

"I hate coding" would be a problem for us, but "I ship software every day", not so much.

How about "I really enjoyed microcorruption and the crypto challenges, but am mostly working as a mechanical engineer"?

"Really enjoying Microcorruption and the crypto challenges" puts people pretty high up on my priority list.

I am in the same position. Stanford Online started a Coursera course on cryptography yesterday, might be interesting for you.


Long-time HN member tptacek's company has a challenge set that many praise highly: http://www.matasano.com/articles/crypto-challenges/

Are the Matasano crypto challenges currently stuck in some way, like with a grading backlog? I signed up some months ago, sent my first set of answers just after the new year, and have never heard back about the second challenge set.

We are way. way. way. backlogged.

If anyone has any idea on how to help a hapless team of security researchers manage many thousands of people looking to get through the crypto challenges, we'd be t-h-r-i-l-l-e-d.

We were able to keep up last summer, but then Microcorruption happened, we got into a hole, and we're only slowly digging ourselves out of it.

Alex, Sean, and I are turning the challenges into a book, which we're going to release "choose-your-price" with all funds directly going to a charity (I like Watsi, but who knows); that book will also include Set 8, which is all ECC.

I'm hoping we'll be done with that by August.

I know I haven't submitted in forever (and may have been removed from the set of active participants), but I got your New Year's bankruptcy mail, but never the "bonus set".

Is that still part of the backlog?

Have you considered taking people who have passed challenges (and are excited about them) on as volunteer assistants for the challenges?

If anyone has ideas on how we could do that, I'm all ears. We'd have to mitigate the privacy issues somehow, because not everyone has given us permission to reveal that they're working through the challenges, and passing submissions to people outside the firm would do exactly that.

Not sure what you guys are using to track all of it, but maybe a simple web app would help you out. For instance, an email address would be attached to a "level" (so you guys could input your previous members and attach what level they're on so if they sign up with that same email it'd be aligned).

People could sign up and post their solutions, and others would be able to validate entries of levels below them (so someone on level 3 could validate a level 2 submission). Or hell, the webapp may even be able to validate some of it (obviously not entries where you paste code, but things like "Decrypt !4321hj4123$@!#$" could be automated).
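
The "Decrypt ..." style of challenge could indeed be auto-checked without exposing answers, e.g. by storing only a hash of the expected plaintext. A sketch (the expected answer here is invented):

```python
import hashlib

# Hypothetical known-answer record: peer validators and the web app
# only ever see this hash, never the plaintext answer itself.
EXPECTED_SHA256 = hashlib.sha256(b"attack at dawn").hexdigest()

def auto_validate(submission: str) -> bool:
    """Return True iff the submitted plaintext matches the stored hash."""
    return hashlib.sha256(submission.encode()).hexdigest() == EXPECTED_SHA256
```

This handles the known-answer levels automatically; the free-form code-review levels would still need human eyes.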

This way, you could submit solutions via your login (attached to your email) and people validating the entries wouldn't be able to see the email address, just the answers, and could comment on the submission.

Sorry, this was a bit stream of consciousness. If this was particularly barebones, you might be able to slam it out in not too much time. If you don't mind PHP or Common Lisp in the backend, I might even be able to help out a bit.

We actually do have a web application we use to track the challenges (the mail is all done through Mailgun).

Two things complicate managing the challenges:

* We have to actually read the code people send us (if you get things wrong, there's a pretty good chance we'll catch it). We try hard not to move people forward a level until they've gotten the previous challenges right.

* We don't dictate any particular format for submissions. Some people paste code into their email messages; some people attach lots of little files; some people attach archives. Some languages let you stick all that code in 1 file; others, like Java and C#, have complicated directory hierarchies.

We have a pretty cool submission system in use at my university, that will spin up a VM, compile and run your code and have a set of validators check if the output is correct. That way people can test their submission and immediately get feedback. You would still have to look into the code to make sure it's really valid, but I guess you could filter out lots of invalid submissions. If you are interested, I can hook you up with one of the developers of that system.

This is awesome. I'm sad that CryptoCat is getting slammed for this for being one of the brave few to post this online. I am sure there are an infinite number of "security-critical" apps which would fail an audit like this, but who never even thought to GET an audit -- much less post it online. The software development community is much stronger for being able to see professional stuff like this posted.

Does anyone know how much these audits typically cost, if you're not being subsidized?

Generally 10-50k is a good starting point for this level. Most firms are full up on work most of the time, but will often try to get interesting new companies or projects even if they're less profitable since 1) they can grow into better stuff 2) good for reputation and for retention of their own employees.

Compliance-only is usually cheaper; you can buy rubber stamps for <$10k.

It's important to distinguish that the places offering the <10k rubber stamps aren't really offering the same service (even if you concede that it's only for compliance).

Places like that are just running a scanner against your website. In the case of an app like this (which was an iOS app), you might find a cheap place to run it through a source code analyzer (either through a cloud-hosted service like Veracode, or by running an app like AppScan).

Assuming you wanted to hire a "respectable" firm to actually perform a real application assessment, I'd say it's closer to 30-50k.

If you look at the report, it was scoped at 3 man-weeks of testing (which in this case looks like 3 engineers for 1 week). Even if you don't include any additional overhead (like hours for report generation, or project management hours), you're looking at ~$24k just for the engineers' effort (if they priced it purely T&M, which hopefully they don't).

To be fair, this is a pretty exotic application though, compared to what a lot of other people might be working on. The scope for a project to test a more "normal" app would be less. Maybe even half that.

Right -- a $10k real audit is a "deal" of some kind -- either a firm trying to win future business with a discount, or an individual doing it directly (especially from overseas, or as a side job).

You can also get a better deal if 1) you're open source, and the audit becomes part of someone's portfolio 2) you provide really clear documentation, security model, etc. to make the process more efficient 3) your app is already well-architected so the security-critical part is small, and you only audit that part.

There's probably a 100x bigger pool of people capable of doing "IT audits" and "hosting environment audits" well vs. appsec for webapp or mobile app (or especially desktop app).

Here is our blog post about our audit of Cryptocat, which was also announced today: https://leastauthority.com/blog/

Didn't realize how vulnerable even a simple NSLog was... I wonder how many websites console.log sensitive information and forget to take it out for production.
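
One common guard (a sketch, not Cryptocat's actual code; the `APP_ENV` variable name is made up) is to gate verbose logging on a debug flag and keep sensitive payloads out of log statements entirely:

```python
import logging
import os

# Hypothetical toggle: verbose logging only outside production builds.
DEBUG = os.environ.get("APP_ENV", "production") != "production"

logging.basicConfig(level=logging.DEBUG if DEBUG else logging.WARNING)
log = logging.getLogger("chat")

def handle_message(sender: str, plaintext: str) -> None:
    # Log only non-sensitive metadata, and only in debug builds;
    # the plaintext itself is never passed to the logger.
    if DEBUG:
        log.debug("message from %s (%d chars)", sender, len(plaintext))
```

The second rule is the important one: even with the flag wired up correctly, a single `log.debug(plaintext)` left behind defeats it.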

Remember when the iPhone was under fire for logging everywhere a user went? That was due to careless devs logging location data. Also, logging data is a blocking operation, so if done too often it slows the performance of an app.

CORRECTION: as reported in the news, third-party apps were part of it, along with Apple's own logging.

Wow, remind me to never have an audit done by iSec. "Extremely thorough" would have been tough but appropriate, but "brutal" seems just gratuitously provocative. Was that what you were going for?

Edit: OK, apologies to iSec for my mistake. I thought he was still affiliated with them. In any case, it looks like a top-notch report, so it would have been a shame to detract from that accomplishment.

Alex (the submitter) doesn't work for iSEC; he's the CISO of Yahoo now. He left iSEC last year to start Artemis.

I agree that the title is bad, and (belatedly) asked 'dang to revert it.

I for one think that Nadim and the Open Tech Fund are to be applauded for opening up their review to the public. I have to wonder how many other commercial and non-profit organisations would ever consider doing this? (Especially those which are in the field of communications and are relied on by people for their lives).

Many people on HN seem to be reading the review without actually looking at Nadim's response on the Cryptocat blog - which I urge everyone to read first before commenting.


As far as I understand, the username for Nadim (Kaeporan) was also blocked on HN last night, so he probably isn't able to continue responding to the comments up here.

Huh, this was apparently submitted by Alex Stamos, a co-founder of iSec partners (who did this audit). And he editorialized the title, "Brutal Professional Audit of CryptoCat Published."

Your former company did an audit for a customer, then you posted it to HN calling it "Brutal"? Really?

I was certainly not comfortable with that either; however, Alex is his own man now, free to use whatever inflammatory adjectives he chooses. Not much we can do about it, but also not unusual for him to follow the news on a company he put so many years into.

I also do wish that the original post had pointed at CryptoCat's blog post - aside from CryptoCat's context about problems and resolutions, people should also be aware of the separate report by Zooko's team.

Former employee adds adjective to HN submission title, film at 11.

The good thing about CryptoCat is that everyone wants to trash it, so it's becoming better.

The wisest words I've ever seen office administrative staff post over the FAX machine are:

Everyone's life has a purpose. Consider the possibility that yours is to serve as a warning to others.

Precisely. I'd also like to reiterate that CryptoCat had no special obligation to release the report, and doing so was remarkably transparent; Nadim was also super open and responsive throughout the whole audit process. Regardless of anyone's opinion of the findings, the issues are out in the open, discussable and resolvable, which can be nothing but beneficial.

Kudos to Nadim and OTF for releasing the info and starting these discussions, as inevitably arduous as they are.

@secalex Dumb post man. Way to de-contextualise something, cause a drama and damage reputations needlessly.

Reputations? What reputations, exactly? The only reputation CryptoCat developers have is for writing incredibly poorly thought out software full of holes. Combined with their taste for publicity I would rather call them a public menace.
