Police decrypt messages after breaking pricey IronChat crypto app (arstechnica.com)
92 points by MrMember 12 days ago | 72 comments





For the English readers without Dutch sources, here is some additional information which might or might not be in the article:

Police seized and operated a server of a company called Blackbox Security which offered 'crypto phones': basically, phones pre-installed with certain software, sometimes with all other communication abilities disabled aside from that application. The price for these phones was 1,500 EUR including a six-month plan, then 750 EUR per six months of usage afterwards.

While the Dutch DA and Police have not given any details as to how the security was broken there are some clues (this is speculative as fuck):

  * A users guide that used to be published on the Blackbox Security website hints towards their chat application being XMPP+OTR.
  * Real-time access but no historical access hints towards an MITM to change the previously exchanged OTR keys (a common way to 'break' into a conversation).
  * The application seemed to not enforce and/or check the key signatures for changes.
This is not the first time the Dutch DA and Police have taken action against a 'crypto phone' provider. Ennetcom, another provider was taken down a year or so ago: https://www.zdnet.com/article/police-hack-pgp-server-with-3-... leading to arrests as well.

The reason given by the DA for publishing this news is that, after police action was taken against the owners of these devices, threats of violence and reprisal appeared in the captured chat messages. The owners wrongfully blamed the people they were communicating with for leaking the information. As tensions increased, retribution hits became likely and there was a risk to the public. Hence: release the information.

To reiterate: there is so far no reason to believe the actual cryptographic protocols in use were broken. Rather, taking over the server allowed them to MITM the OTR key exchanges and/or impersonate another client. I could get more technical, but since this is all speculation I don't see much value in it.
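To make the speculation concrete: the whole attack hinges on whether the client checks key fingerprints against something it already knows. A minimal sketch (all names and logic invented, not taken from the actual app) of the difference between enforcing and not enforcing a pinned fingerprint:

```python
import hashlib

# Hypothetical sketch -- not the real IronChat code. Shows why a client
# that never checks key fingerprints lets a seized server swap in its
# own key mid-conversation.

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()[:16]

# Alice pinned Bob's fingerprint when they first verified each other.
pinned = {"bob": fingerprint(b"bob-real-key")}

def accept_key(contact: str, offered_key: bytes, enforce_pin: bool) -> bool:
    """Decide whether to trust a (possibly new) key offered by the server."""
    if not enforce_pin:
        return True  # the speculated IronChat behaviour: accept silently
    return fingerprint(offered_key) == pinned[contact]

# A compromised server offers Alice the interceptor's key instead of Bob's.
mitm_key = b"police-key"
print(accept_key("bob", mitm_key, enforce_pin=False))  # True: MITM succeeds
print(accept_key("bob", mitm_key, enforce_pin=True))   # False: swap detected
```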


Please don't use code indent blocks for bulleted lists. They're absolutely unreadable on mobile.

Reformatted:

* A users guide that used to be published on the Blackbox Security website hints towards their chat application being XMPP+OTR.

* Real-time access but no historical access hints towards an MITM to change the previously exchanged OTR keys (a common way to 'break' into a conversation).

* The application seemed to not enforce and/or check the key signatures for changes.


They wrap on iPhone

No they don't. Are you using Safari?

All web browsers on iOS use the Safari (or more accurately, WebKit) rendering engine.

I know. However, I imagine that some of the other “browsers” mess a bit with settings and/or rendering.

not on mine

I've been looking into this issue for the last 2 days: the APK we all looked into is IronChat-3.40-release.apk [1]. After decompiling, I think it is a copy/fork of Conversations version 1.14.6 with a few more commits up to October 13th, 2016 [2], plus Secret Space Encryptor version 1.7.2c [3].

About unencrypted support: "supportUnencrypted() { return false; }" is in the config, so probably not [4]. There is a theory (which I kind of started [5]) that it might be MitM because of bad UX: they used OTR without TOFU, which was only added in Conversations 1.15, after the fork [6].

[1] https://ironphonestore.com/IronChat-3.40-release.apk

[2] https://github.com/siacs/Conversations/tree/6371d2b7a9a2b5d7...

[3] https://paranoiaworks.mobi/sse/

[4] https://twitter.com/BWBroersma/status/1060136620383453195

[5] https://twitter.com/BWBroersma/status/1059851925234036739 / https://twitter.com/Schellevis/status/1059852605801852929 (= the author of the article quoted by ars: https://nos.nl/l/2258309 )

[6] https://twitter.com/BWBroersma/status/1060278116315262976


I have been designing cryptographic protocols for securing card payment transactions, card data, and PINs. I then took those systems through stringent certifications from various organizations, especially PCI.

Not enforcing signatures when exchanging keys? Is this crypto-kindergarten?

With a well-designed payment system, it is expected that attackers have access to basically all of the infrastructure -- servers, databases holding keys, the network; they can disassemble terminals, bribe employees, etc. -- and still have no chance of injecting their own keys, reading PINs, or getting any cryptographic material of value.

Why can't you guys who pretend to build secure systems spend some time reading the real requirements from PCI or Visa or Mastercard, to get at least some idea of how real secure systems are built in at least one area?


I suppose that in the payment systems, there's a trusted party that has the ability to compromise transactions. The different parties in the system trust that party and rely on audit and potentially arbitration or litigation to resolve disputes about individual transactions or classes of transactions.

For end-to-end encryption for messaging applications, there may not be any such entity that everyone can trust. In that case, there needs to be a solution for key exchange to allow new parties and devices to join the system. In payments you could presumably say "the banks/banking association/cryptographic contractor of the banking association is the authority that certifies new entities that join the system". In messaging you probably can't do that if you're concerned that law enforcement will force that entity to add false certifications!

In other words, I think you're referring to a cryptographic problem with a somewhat different threat model.


There aren't, and should not be, any trusted parties. Every trusted party is, by definition, a point of failure if that party is compromised.

PCI requires that organizations control cryptographic material using the rules of dual control and split knowledge. No individual should have access to an entire cryptographic key, and any processes and devices should require at least two people to operate.

For example, HSMs are ALWAYS operated by at least two security officers. Cryptographic keys are generated by the HSM in the form of multiple components onto multiple smartcards. Each smartcard is stored in a separate safe to which only the security officer(s) assigned to that component have access. An HSM to be injected with keys must be operated by multiple security officers with their components. The HSM is regularly inspected -- each security officer brings his key from his safe, and two keys are required to open the enclosure where the HSM is located. When a payment terminal is injected with keys, two operators are present, monitoring each other to prevent tampering with the process. Etc.
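As an illustration of split knowledge, here's a toy sketch (not any particular HSM's scheme) of issuing a key as two XOR components, where neither component alone reveals anything about the key:

```python
import secrets

# Toy sketch of split knowledge (not any specific HSM's scheme): a key is
# issued as two XOR components; each alone is a uniformly random string
# independent of the key, so a single officer learns nothing.

def split_key(key: bytes):
    component1 = secrets.token_bytes(len(key))                 # random pad
    component2 = bytes(a ^ b for a, b in zip(key, component1))  # key XOR pad
    return component1, component2

def combine(c1: bytes, c2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c1, c2))

key = secrets.token_bytes(16)
c1, c2 = split_key(key)
assert combine(c1, c2) == key  # only both components together recover the key
```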

With a good understanding of the concepts it is possible to build a secure system. It's not that hard.


I think we're talking about different levels of the system. The attack in this case was not about an individual employee unilaterally taking an improper action, but about a company being officially compelled by a government to take an action that was contrary to the interest of an end-user. In the financial system this happens all the time and is considered somewhat unremarkable.

If this company had had a dual control mechanism where multiple security officers had to be involved in order to issue signatures, presumably the company's executives would have told those security officers "we have to issue this signature because the government requires us to", and presumably the security officers would then have done it. It wasn't a rogue action from the organizational point of view, only from the customer's point of view.

Also, in a messaging application new public keys have to be certified extremely frequently because new users and devices are constantly joining the system with new keys. Presumably this happens in an automated online fashion (otherwise, the security officers aren't going to get much sleep). That makes it even more challenging to subdivide the responsibility for certification, for many reasons.

I don't mean to disparage the precautions that financial organizations have implemented, and I agree that some parts of the software world sometimes seem extremely cavalier in comparison. But I still think that in this particular case the threat models are extremely different.


Nope, not really.

1. They are choosing and then providing the devices. This is very important because it means they initially have physical contact with the device, so they have the means to bootstrap the cryptographic system by injecting keys, etc.

2. They are middlemen transferring messages between multiple parties that use their devices, without needing to understand the secret part of the messages; they only route them.

3. They are paid well enough for the service that they should be able to cover expensive devices and processes like manual key injection or expensive hardware security modules.

4. The core of the business is security, if it is not provided nothing else will change the fact they have not provided what they were paid for.

The only real difference from payment industry is that the threat is from governments, too.


I forgot about the point that they physically provide the devices, which might be relevant somehow. But how could they use financial-industry-like controls to prevent themselves from being compelled by a government to certify a man-in-the-middle attack? How can the processes distinguish between "we believe this statement is true" and "the government compels us to state that we believe this statement is true"?

You'd think, but as I remember even the EMV specs can be a bit handwavy about the entropy of derivation components. Given the tendency to rely both on limited attempts in hardware and on the security of keys generated and stored in HSMs (which people essentially use hashicorp to replace these days), the attack surface is fairly broad for a police service.

There is a basic bootstrapping problem with all these systems, where either you have a TSM facility of some sort, or you accept the ostensibly very low likelihood that your key provisioning protocol gets compromised.

Trouble is, if you are a target of any intelligence interest, any linux remote zero day means your provisioning server is probably going to get owned out of the gate as soon as someone seizes one of your devices.

Not checking a signature seems like an unforced error, but really, there are so many plausible ways this could have happened.


Bootstrapping can be done correctly, but it requires a lot of planning and preparation to do right. We had to scrap a few attempts at bootstrapping. For example, we had to scrap our first attempt because we had allowed a single employee to receive the package with the HSM.

The security of a well-designed system will not be impacted by any number of zero-days, or by attackers having free access to your network, devices, databases, etc. This is because a well-designed system will not base the security of the data it protects on components and mechanisms that cannot be trusted to be secure.


Really depends on your threat model. If it includes international police and intelligence agencies, a COTS HSM seems optimistic.

To be honest, if you own the server, you can push an update to the app to start sending data in the clear, and the user would be none the wiser.

>* The application seemed to not enforce and/or check the key signatures for changes.

Xabber clone? If so, this, again, demonstrates that automatic key renegotiation is a glaring security vulnerability open to real-world exploitation.

Throw that into Moxie Marlinspike's garden.

Edit: seems their app store is still up. Check it out yourself: http://www.appstoreprivacy.com


Pretending to be another client seems most likely

> Blackbox-security.com, the site selling IronChat and IronPhone, quoted Snowden as saying: "I use PGP to say hi and hello, i use IronChat (OTR) to have a serious conversation," according to Web archives. It wasn’t immediately known if the endorsement was authentic.

This strikes me as... very inauthentic. I think anyone with even a basic understanding of crypto would do things the other way around.


> I think anyone with even a basic understanding of crypto would do things the other way around.

Can you explain your reasoning? PGP lacks forward secrecy, a key feature that you'd want and which OTR provides.


For all the flak it's received, PGP has no known flaws if you know how to use it properly. IronChat may have had some security merits (I can't say), but until there have been some in-depth audits, you can't trust it.

It's not about the technical features that are theoretically available, it's more about how much you can believe they actually hold in reality.


Saying PGP has no known flaws "if you know how to use it properly" is like saying it's easy to safely use AES GCM as long as you never reuse a nonce with the same key.

Sure, it's true. But that reasoning kinda obviates everything. Some things are easier to use safely than others. Your second paragraph is basically the essence of criticisms against PGP.
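For the curious, here's why the nonce-reuse caveat matters, using a toy XOR keystream as a stand-in for AES-GCM's CTR keystream (illustration only, not real AES-GCM):

```python
import random

# Toy XOR "stream cipher" standing in for AES-GCM's CTR keystream.
# Reusing the same key+nonce means reusing the same keystream, and the
# keystream then cancels out of the XOR of two ciphertexts.

def keystream(seed: int, n: int) -> bytes:
    rng = random.Random(seed)  # deterministic toy keystream, NOT a real cipher
    return bytes(rng.randrange(256) for _ in range(n))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"attack at dawn", b"retreat at ten"
ks = keystream(42, len(p1))        # same "nonce" used for both messages
c1, c2 = xor(p1, ks), xor(p2, ks)

# An eavesdropper learns p1 XOR p2 without ever knowing the key.
assert xor(c1, c2) == xor(p1, p2)
```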


This was all in the context of Edward Snowden allegedly praising IronChat over PGP. I trust the man to be able to use the latter properly, and thus to be able to use it for secure communications, more than IronChat.

> but until there's been some in-depth audits you can't trust it.

OTR uses AES-128 with the Diffie–Hellman key exchange, and SHA-1 hashes to confirm integrity. So in terms of technology it is a pretty well-travelled road. And the advantages of OTR over PGP make it worth seriously considering for secure messaging.

As to specific implementations I cannot say, but that's true with both OTR and PGP.
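For reference, the Diffie–Hellman exchange at the heart of OTR looks like this with deliberately tiny toy parameters (real OTR uses a 1536-bit MODP group and authenticates the exchange):

```python
# Textbook Diffie-Hellman with tiny numbers (illustration only; these
# parameters are far too small to be secure).

p, g = 23, 5                     # public parameters: prime modulus, generator

a_secret, b_secret = 6, 15       # each party's private exponent
A = pow(g, a_secret, p)          # Alice publishes g^a mod p
B = pow(g, b_secret, p)          # Bob publishes g^b mod p

shared_a = pow(B, a_secret, p)   # Alice computes (g^b)^a mod p
shared_b = pow(A, b_secret, p)   # Bob computes (g^a)^b mod p
assert shared_a == shared_b      # both derive the same shared secret
```

Without authenticating A and B, an active attacker on the server can run one exchange with each side, which is exactly the MITM scenario discussed above.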


> As to specific implementations I cannot say, but that's true with both OTR and PGP.

That's exactly the root of the problem. When you say you use PGP, there's a very high chance you're using GnuPG from the command line, i.e. the one that has been reviewed by every security expert who has wanted any bit of recognition. When you say you're using OTR, everything depends on the specific implementation, so there are infinitely more ways your setup can be compromised.

I do agree that from a purely technical point of view OTR is better than PGP (except maybe the need for both parties to be online at the same time, but that's a minor inconvenience compared to the additional security OTR provides). But in this case the technical merits are not really important; what is really important is the complete system, and in that view the old, crufty, hard-to-use PGP wins.


Well, I took it to mean PGP vs IronChat. I'll take an open source implementation of a well-established protocol over a closed source implementation of another well-established protocol any day.

>blackbox security

That isn't a constructive reply. OTR is supported by dozens of clients with no connection to that company[0], and the protocol specification itself is available to the public[1].

The person I was responding to claimed that PGP was the superior choice to OTR, and I wanted to understand why.

[0] https://en.wikipedia.org/wiki/Off-the-Record_Messaging#Nativ...

[1] https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html


I found it a reasonably constructive reply, because the operation of a "black box" cannot be properly observed, whether or not some of its behavior adheres to a particular contract or protocol.

You seem to be taking the name of one specific company a little too literally. As I showed previously, it is a public protocol that interoperates with open-source clients; it isn't a black box in any way, shape, or form.

I feel like this discussion is getting away from the original premise:

> anyone with even a basic understanding of crypto would do things the other way around.

i.e. use PGP instead of OTR. Nobody has yet even attempted to explain their reasoning as to why. Bringing up one specific vendor is a deflection, rather than an answer.


You seem to be inferring more focus on the name of the company than intended. Is this not closed-source software? Might it not have some weaknesses at any point in its implementation or update mechanisms -- by design or inadvertent -- that put it at a disadvantage to PGP?

This discussion started because someone claimed that the crypto of one was superior to the crypto of the other; the implementations aren't strictly relevant, i.e.:

> anyone with even a basic understanding of crypto would do things the other way around.

Not least of all because they made no reference to which implementation of PGP they were even calling superior, only the protocol itself.


The choice isn't between using OTR vs. using PGP. It's between using unaudited (but perhaps convenient) commercial software vs. using possibly-audited, offline-friendly (probably inconvenient) PGP to exchange extremely-high-sensitivity messages. The apocryphal Snowden account even appears to suggest PGP for the lower-sensitivity message.

How were the police able to seize the company that sold IronChat in the first place? That's like shutting down Open Whisper Systems for criminals using Signal. Do we know if the company knowingly did business with organized crime? Or defied a court order? Or is it simply illegal to sell encryption hardware in .nl?

Not sure whether this is how they did it, but a new law has given police 'hacking powers'. Otherwise, it'd probably be a court order based on the claim they knowingly did business with organized crime.

Why didn't it remain court order based for stuff like this? What was their reasoning for needing to bypass the courts?

I think it still takes court approval for such a hacking operation; it's meant to avoid notifying the targets.

> A 46-year-old man who owned the crypto phone service and a 52-year old partner have been arrested on charges related to money laundering and participation in a criminal organization.

My bet is those two were aware of the law and tried to do everything "above board".

The law said "most of your customers are criminals, helping them evade detection is criminal in itself, you're under arrest".

Pretty sad, considering the authors of PGP could be imprisoned on the same reasoning.


Since the new computer law of this year, you can be forced to submit your cryptography keys in The Netherlands, just like in the UK.

That means Amsterdam is no longer a viable option for hosting your servers or using a VPN service that's located there (most are).

The most obvious way this could have been done is by MITMing the key exchange. The giveaway is in the last paragraph: "The IronChat app, Schellevis reported, also failed to automatically check if the server it used to exchange messages with other users was the correct one."

Why would they even need to trust a server for key exchange? Wouldn't it be more reasonable to just exchange public keys at the time of adding the contact to the contact list, and then only update keys using the previously used keys?

Works until someone loses their phone.

At that point the server steps in and hands the new key to all of the user's old contacts. That's most likely where the flaw was in this system.
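If that's right, the missing ingredient is key continuity: a re-key should be authenticated with material only the legitimate user holds, so the server alone can't mint one. A hedged sketch (invented names; HMAC stands in for a proper signature scheme):

```python
import hashlib
import hmac

# Hypothetical sketch, not the actual app's protocol: a legitimate re-key
# announcement is authenticated with the OLD key, so a server that never
# held it cannot impersonate the user after a "lost phone" reset.

old_key = b"bob-old-secret"  # established and verified before the reset

def announce_rekey(new_pub: bytes, signing_key: bytes):
    tag = hmac.new(signing_key, new_pub, hashlib.sha256).digest()
    return new_pub, tag

def verify_rekey(new_pub: bytes, tag: bytes, known_old_key: bytes) -> bool:
    expected = hmac.new(known_old_key, new_pub, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

legit = announce_rekey(b"bob-new-pub", old_key)
forged = announce_rekey(b"police-pub", b"key-the-server-made-up")
print(verify_rekey(*legit, old_key))   # True
print(verify_rekey(*forged, old_key))  # False
```

The trade-off, of course, is that a genuinely lost key then requires re-verifying out of band, which is exactly the inconvenience such services try to paper over.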


Every reasonable person who cares seriously about privacy has a second device with access to the same account, and as soon as they lose a phone they tell everybody the account is compromised and make a new account. It's amazing how ridiculous some criminals happen to be; I know many don't even care to delete messages after reading.

Unless they were forced to do this by the authorities, that's an amateur-hour mistake.

I wonder why they would make this information public. Wouldn't it be advantageous for them to let those with criminal intent continue to use the platform, leaving the police with an established surveillance method? Or would it be unavoidable for it to become public in criminal complaint filings and such?

It was made public because the exposed operations of criminals led to threats being made against assumed 'leaky partners'. The police did not want these people to retaliate against other people and possibly endanger bystanders.

While it might not be mentioned in this article, it's mentioned directly in an article of the Dutch DA here: https://www.om.nl/actueel/nieuwsberichten/@104414/doorbraak/


I can't imagine USA authorities taking the same course of action...

"Let criminals kill criminals. Not our problem".


Ah, okay. That makes sense.

> Or would it be unavoidable and become public information in criminal complaint filings and such?

I would hope this is the case. While details may be omitted, it seems that knowing how discovery took place is important to one's ability to challenge it.


Yes, absolutely. However, we know from leaks over the past decade that this is not always the case for these types of surveillance methods (i.e., the methods being made public).

I know nothing about the IronChat service, but if what is shown in that archived page is the real device then it's pretty obvious that it was doomed to be cracked. A cellphone? Seriously... a freaking cellphone?

Cellphones -- all of them -- use closed binary blobs for device drivers, and to date there is not a single cellphone in the known universe that is free of proprietary closed code. That also includes the Librem 5, which is a wonderful step in the right direction, but still not completely free of closed blobs, hence not secure.

So what's the problem with (closed) device drivers? Well, they run all the time, they run at maximum privilege (higher than root), and they cannot be audited to spot any malicious code, which makes them the most effective place to hide spyware. If any government tells a hardware manufacturer to "put our spyware into your driver or your business ends tomorrow", they comply; nobody can spot the code, and there's no anti-malware software that will detect it.

But why should one care if all text is end-to-end encrypted? Well, on a bugged phone there's no such thing as safe encryption. Let me be clearer: if you tap text on the virtual keyboard, or on any device connected via, say, the USB port or Bluetooth, the text is read by the relevant drivers (higher priority, closed, not auditable) before it reaches the encryption code (a lower-priority user app); it can then be stored, transmitted (network drivers are closed too), etc. Closed device drivers can be used on most platforms (including PCs) to build a covert channel where information (text, sound, images, etc.) travels completely unbeknownst to the user, so a platform can't be considered secure until every single bit of software and firmware it contains can be checked.

So, how did the police decrypt that traffic? I can only speculate that they confiscated one of these devices, built a bugged driver for some vital component within it, then went to the manufacturer and forced them to push that tampered driver as an online update for that model of phone, possibly installing it only when certain conditions were met, to be sure it hit one of the targets.

If that scenario is half true, then there is not a single piece of computer hardware in the world one can safely assume to be secure. An Arduino-like board, maybe, until the day they'll build faster ones around bigger chips carrying closed blobs inside.


While what you're saying is technically correct, I think you vastly overestimate the resources available and likely to be committed by law enforcement, and the competence of the people making/selling these phones.

When you overestimate a foe, you waste some resources; when you underestimate them, you die or get tortured or etc.

It's sad that the police are considered a foe. Aren't they meant to be the good guys?

Police today, maybe organized crime tomorrow? — encryption protects either without prejudice or not at all.

Ever heard of something called an unjust law?

Today there are very few of them in the US, and fewer still of them (if any at all) that actually matter.

But it's not hard to find other nations where the story is very different.


I don't think this comment is an accurate assessment of what makes or breaks security.

To address specific points:

> Cellphones -all of them- use binary closed blobs to manage device drivers ... not completely free of closed blobs, hence not secure.

Closed source software can be secure. Whether there are binary blobs or not is not really relevant.

> So what's the problem with (closed) device drivers? Well, they run all the time, they run at maximum privileges (higher than root) and they cannot be audited ...

Modern phones have baseband/AP separation; closed source components are often running more like a peripheral on good phones.

For any of those components to exfiltrate the data, it would have to somehow get access to the network, persistently store it, or use some other side channel... Yeah, I'm sure those tiny Bluetooth chips can do all that over the limited peripheral interface they use to communicate with the kernel.

> So, how did the police decrypt that traffic? I can only speculate that they confiscated one of these devices, then built a bugged driver for some vital devices within it, then got to the manufacturer and forced them to inject that tampered driver as an online update for that given model of phone, possibly installing only if some conditions were verified to be sure it was one of the targets.

Absolutely ridiculous. Why would you spend the work to create a malicious driver when you could just update the app code itself if you can push updates to the phones?

There's no reason anyone would use malicious drivers when they could use malicious application code; the latter is a darn sight easier to manage.

The most likely scenario, however, is that there was a bug in the cryptography that assumed the servers to be trusted or assumed some specific key had authority to mint new keys (e.g. a trusted CA that the police got the private key to).

Your post is a rant against closed-source driver blobs, when in reality they're a difficult-to-exploit vector, at best.

I would like to direct you to tptacek's comment on the Librem5 [0] where he indicates that he, a security professional, believes the iPhone to be the most secure because of the level of auditing and security work they've put into it.

> ... there is not a single piece of computer hardware in the world one can safely assume to be secure ...

Things are not simply 'secure' or 'not secure'. They are secure against something.

Security is a continuum. An iPhone (with secure enclave, good disk encryption) is more secure than my laptop, which in turn is probably more secure than the average wordpress server.

I fully believe that my iPhone will withstand even motivated attackers with physical access. I don't think my laptop will. I don't know how it would fare against a nation-state specifically targeting me, but I don't really have to worry about that.

[0]: https://news.ycombinator.com/item?id=17913148


If you were on the list before, I’m sure purchasing a 1500 EUR/6 month crypto chatting phone would definitely pique the police’s interest.

Anyone know if the security company had warrant canaries or any dead man switch situations specified which were or were not triggered?

Anybody wants a job to make a new crypto phone hit me on Wickr tomford614

Rephrasing a saying popular on HN: if you're using somebody else's crypto app, you're doing it wrong.

But isn’t rolling your own crypto even worse?

You wouldn't be rolling your own crypto. You could use standard crypto primitives, but with your own app/UI to make the crypto easy to use.

Then you have to worry about security from side channel attacks. Even things like the iOS or Android task-switcher UI caching screenshots of your app can be vectors.

Oh yeah, making an app that is easy to use and still secure is definitely not easy. But at least you would know how you implemented the encryption on it.

And why would "your own" app be easier to use than somebody else's?

I didn't mean "your own" app would be easier to use, I meant the crypto would be easier to use via an app instead of using it through the command line.

If you don't write your own source code and write your own compiler/interpreter to run it you're doing it wrong. Oh, and let's not forget firmware.




