
Police decrypt messages after breaking pricey IronChat crypto app - MrMember
https://arstechnica.com/information-technology/2018/11/police-decrypt-258000-messages-after-breaking-pricey-ironchat-crypto-app/
======
supakeen
For the English readers without Dutch sources, here is some additional
information which might or might not be in the article:

Police seized and operated a server of a company called Blackbox Security
which offered 'crypto phones': basically, phones pre-installed with some
software, sometimes with all other means of communication disabled aside
from that application. The price for these phones was 1,500 EUR including a
6-month plan, then 750 EUR per 6 months of usage afterwards.

While the Dutch DA and Police have not given any details as to how the
security was broken, there are some clues (this is speculative as fuck):

  * A user's guide that used to be published on the Blackbox Security website hints that their chat application was XMPP+OTR.
  * Real-time access but no historical access hints at an MITM used to swap the previously exchanged OTR keys (a common way to 'break' into a conversation).
  * The application seemed not to enforce and/or check the key signatures for changes.

This is not the first time the Dutch DA and Police have taken action against a
'crypto phone' provider. Ennetcom, another provider, was taken down a year
or so ago, also leading to arrests:
[https://www.zdnet.com/article/police-hack-pgp-server-with-3-6-million-messages-from-organized-crime-blackberrys/](https://www.zdnet.com/article/police-hack-pgp-server-with-3-6-million-messages-from-organized-crime-blackberrys/)

The reason the DA gave for publishing this news is that owners of these
devices were making threats of violence and reprisal, visible in the seized
chat messages, after police action was taken against them. They wrongfully
blamed the people they were communicating with for leaking the information.
As tensions increased, retribution hits became likely and the public would
be at risk. Hence: release the information.

To reiterate: there is so far no reason to believe the actual cryptographic
protocols in use were broken, but taking over the server allowed them to
MITM the OTR key exchanges and/or pretend to be another client. I could get
more technical, but since this is all speculation I don't see much value in it.
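The missing check hinted at above (not verifying key changes) can be sketched in a few lines. This is an illustrative Python sketch of trust-on-first-use key pinning, not the actual IronChat code; the key bytes and contact names are made up:

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a peer's long-term public key."""
    return hashlib.sha256(public_key).hexdigest()[:40]

class ContactStore:
    """Trust-on-first-use: pin the first key seen per contact and refuse
    silently-changed keys -- the check the app reportedly skipped."""

    def __init__(self):
        self._pinned = {}

    def verify(self, contact: str, public_key: bytes) -> bool:
        fp = fingerprint(public_key)
        if contact not in self._pinned:
            self._pinned[contact] = fp  # first contact: pin the fingerprint
            return True
        return self._pinned[contact] == fp  # changed key => possible MITM

store = ContactStore()
assert store.verify("alice", b"original-key")    # first use: pinned
assert store.verify("alice", b"original-key")    # same key: accepted
assert not store.verify("alice", b"swapped-key") # server-swapped key: rejected
```

A client that makes this check loud (or refuses to send) forces the server-side MITM to either give up or produce a visible key-change warning on every compromised conversation.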

~~~
lmilcin
I have designed cryptographic protocols for securing card payment
transactions, card data, and PINs, and then taken those systems through
stringent certifications from various organizations, especially PCI.

Not enforcing signatures when exchanging keys? Is this crypto-kindergarten?

In a well-designed payment system it is expected that attackers have access
to basically all of the infrastructure -- servers, databases holding keys,
the network; they can disassemble terminals, bribe employees, etc. -- and
still have no chance of injecting their own keys, reading PINs, or getting
any cryptographic material of value.

Why can't you guys who pretend to build secure systems just spend some time
reading the real requirements from PCI or Visa or Mastercard, to get at
least some idea of how real secure systems are built in at least one area?

~~~
schoen
I suppose that in the payment systems, there's a trusted party that has the
ability to compromise transactions. The different parties in the system trust
that party and rely on audit and potentially arbitration or litigation to
resolve disputes about individual transactions or classes of transactions.

For end-to-end encryption for messaging applications, there may not be any
such entity that everyone can trust. In that case, there needs to be a
solution for key exchange to allow new parties and devices to join the system.
In payments you could presumably say "the banks/banking
association/cryptographic contractor of the banking association is the
authority that certifies new entities that join the system". In messaging you
probably can't do that if you're concerned that law enforcement will force
that entity to add false certifications!

In other words, I think you're referring to a cryptographic problem with a
somewhat different threat model.

~~~
lmilcin
There aren't, and should not be, any trusted parties. Every trusted party
is, by definition, a point of failure if that party is compromised.

PCI requires that organizations control cryptographic material using the
rules of dual control and split knowledge. No individual should have access
to an entire cryptographic key, and all processes and devices should
require at least two people to operate.

For example, HSMs are ALWAYS operated by at least two security officers.
Cryptographic keys are generated by the HSM in the form of multiple
components on multiple smartcards. Each smartcard is stored in a separate
safe that only the security officer(s) assigned to that component can
access. An HSM to be injected with keys must be operated by multiple
security officers with their components. The HSM is regularly inspected --
each security officer brings his key from his safe, and two keys are
required to open the enclosure where the HSM is located. When a payment
terminal is injected with keys, two operators are present, monitoring each
other to prevent tampering with the process. Etc.

With a good understanding of the concepts it is possible to build a secure
system. It's not that hard.

~~~
schoen
I think we're talking about different levels of the system. The attack in this
case was not about an individual employee unilaterally taking an improper
action, but about a company being officially compelled by a government to take
an action that was contrary to the interest of an end-user. In the financial
system this happens _all the time_ and is considered somewhat unremarkable.

If this company had had a dual control mechanism where multiple security
officers had to be involved in order to issue signatures, presumably the
company's executives would have told those security officers "we have to issue
this signature because the government requires us to", and presumably the
security officers would then have done it. It wasn't a rogue action from the
organizational point of view, only from the customer's point of view.

Also, in a messaging application new public keys have to be certified
extremely frequently because new users and devices are constantly joining the
system with new keys. Presumably this happens in an automated online fashion
(otherwise, the security officers aren't going to get much sleep). That makes
it even more challenging to subdivide the responsibility for certification,
for many reasons.

I don't mean to disparage the precautions that financial organizations have
implemented, and I agree that some parts of the software world sometimes seem
extremely cavalier in comparison. But I still think that in this particular
case the threat models are extremely different.

~~~
lmilcin
Nope, not really.

1\. They choose and then provide the devices. This is very important
because it means they have physical contact with each device, initially, so
they have the means to bootstrap the cryptographic system by injecting
keys, etc.

2\. They are middlemen transferring messages between multiple parties that
use their devices; they only route the messages and never need to
understand the secret part.

3\. They are paid well enough for the service that they should be able to
cover expensive devices and processes like manual key injection or
expensive hardware security modules.

4\. The core of the business is security; if that is not provided, nothing
else changes the fact that they did not deliver what they were paid for.

The only real difference from the payment industry is that the threat comes
from governments, too.

~~~
schoen
I forgot about the point that they physically provide the devices, which might
be relevant somehow. But how could they use financial-industry-like controls
to prevent themselves from being compelled by a government to certify a man-
in-the-middle attack? How can the processes distinguish between "we believe
this statement is true" and "the government compels us to state that we
believe this statement is true"?

------
wjn0
> Blackbox-security.com, the site selling IronChat and IronPhone, quoted
> Snowden as saying: "I use PGP to say hi and hello, i use IronChat (OTR) to
> have a serious conversation," according to Web archives. It wasn’t
> immediately known if the endorsement was authentic.

This strikes me as... very inauthentic. I think anyone with even a basic
understanding of crypto would do things the other way around.

~~~
Someone1234
> I think anyone with even a basic understanding of crypto would do things the
> other way around.

Can you explain your reasoning? PGP lacks forward secrecy, a key feature that
you'd want and which OTR provides.
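Forward secrecy, for reference, comes from using fresh ephemeral Diffie-Hellman keys per session instead of one long-lived key. A toy Python sketch (the tiny prime here is for illustration only; real OTR uses a 1536-bit MODP group):

```python
import secrets, hashlib

# Toy parameters for illustration only -- far too small for real use
P = 2**127 - 1  # a Mersenne prime
G = 5

def ephemeral_keypair():
    """Fresh DH keypair generated per session and discarded afterwards."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_key(my_priv: int, their_pub: int) -> bytes:
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

# Session 1: both sides derive the same key from fresh ephemeral pairs
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
k1 = session_key(a_priv, b_pub)
assert k1 == session_key(b_priv, a_pub)

# Session 2 uses brand-new ephemerals; once session 1's private values are
# deleted, k1 is unrecoverable even if the long-term identity key later
# leaks -- that is forward secrecy, which PGP's static keys lack
a2_priv, a2_pub = ephemeral_keypair()
b2_priv, b2_pub = ephemeral_keypair()
assert session_key(a2_priv, b2_pub) != k1
```

With PGP, by contrast, one leaked private key decrypts every message ever encrypted to it.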

~~~
rakoo
For all the flak it's received, PGP has no known flaws _if you know how to
use it properly_. IronChat may have had some security merits (I can't say),
but until there have been some in-depth audits you can't trust it.

It's not about the technical features that are theoretically available, it's
more about how much you can believe they actually hold in reality.

~~~
Someone1234
> but until there's been some in-depth audits you can't trust it.

OTR uses AES-128 with the Diffie–Hellman key exchange, and SHA-1 hashes to
confirm integrity. So in terms of technology it is a pretty well-travelled
road. And the advantages of OTR over PGP make it worth seriously
considering for secure messaging.
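The integrity piece mentioned above is just an HMAC over the ciphertext. A minimal sketch with Python's stdlib (the key and payload bytes are placeholders; in OTR the MAC key is derived from the DH shared secret):

```python
import hmac, hashlib

def message_tag(mac_key: bytes, ciphertext: bytes) -> bytes:
    """Per-message integrity tag; OTR computes an HMAC-SHA1 over the
    ciphertext so any tampering in transit is detected."""
    return hmac.new(mac_key, ciphertext, hashlib.sha1).digest()

mac_key = bytes(20)  # placeholder; derived from the DH secret in real OTR
ciphertext = b"encrypted payload"
tag = message_tag(mac_key, ciphertext)

# Receiver recomputes the tag; constant-time compare avoids timing leaks
assert hmac.compare_digest(tag, message_tag(mac_key, ciphertext))
assert not hmac.compare_digest(tag, message_tag(mac_key, ciphertext + b"!"))
```

Note HMAC-SHA1 remains sound for this purpose even though SHA-1 collisions exist; HMAC does not rely on collision resistance.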

As to specific implementations I cannot say, but that's true with both OTR and
PGP.

~~~
rakoo
> As to specific implementations I cannot say, but that's true with both OTR
> and PGP.

That's exactly the root of the problem. When you say you use PGP, there's a
very high chance you're using GnuPG from the command line, i.e. the one
that has been reviewed by every security expert who wanted any bit of
recognition. When you say you're using OTR, _everything_ depends on the
specific implementation, so there are infinitely more ways your setup can
be compromised.

I do agree that from a purely technical point of view OTR is better than
PGP (except maybe the need for both parties to be online at the same time,
but that's a minor inconvenience compared with the additional security OTR
provides). But in this case the technical merits are not really important;
what is really important is the _complete_ system, and in that view the
old, crufty, hard-to-use PGP wins.

------
sterlind
How were the police able to seize the company that sold IronChat in the first
place? That's like shutting down Open Whisper Systems for criminals using
Signal. Do we know if the company _knowingly_ did business with organized
crime? Or defied a court order? Or is it simply illegal to sell encryption
hardware in .nl?

~~~
rocqua
Not sure whether this is how they did it, but a new law has given the
police 'hacking powers'. Otherwise, it'd probably be a court order based on
the claim that they knowingly did business with organized crime.

~~~
mtgx
Why didn't it remain court order based for stuff like this? What was their
reasoning for needing to bypass the courts?

~~~
rocqua
I think it still takes court approval for such a hacking operation; it's
meant to avoid notifying the targets.

------
teilo
The most obvious way this could have been done is by MITMing the key exchange.
The giveaway is in the last paragraph: "The IronChat app, Schellevis reported,
also failed to automatically check if the server it used to exchange messages
with other users was the correct one."
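The missing server check quoted above amounts to certificate (or key) pinning. A minimal sketch of what "check the server is the correct one" looks like; the byte strings are stand-ins for a real DER-encoded certificate:

```python
import hashlib, hmac

# Fingerprint of the legitimate server's certificate, baked into the app
# at build time (the bytes here are illustrative placeholders)
PINNED_FP = hashlib.sha256(b"legitimate-server-cert").digest()

def server_is_trusted(presented_cert: bytes) -> bool:
    """The check IronChat reportedly skipped: refuse to talk to any
    server whose certificate doesn't match the pinned fingerprint."""
    fp = hashlib.sha256(presented_cert).digest()
    return hmac.compare_digest(fp, PINNED_FP)

assert server_is_trusted(b"legitimate-server-cert")
assert not server_is_trusted(b"seized-or-impostor-server-cert")
```

Without this, a seized or cloned server can transparently sit in the middle of every key exchange.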

~~~
qwerty456127
Why would they even need to trust a server for key exchange? Wouldn't it be
more reasonable to just exchange public keys at the time of adding the
contact to the contact list, and then only update keys using the previously
used keys?
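The "update keys using previously used keys" idea is key continuity: a new key is only accepted if the currently trusted key endorses it. A Python sketch of the concept; HMAC stands in for a real asymmetric signature (e.g. Ed25519) to keep this stdlib-only, and all key bytes are made up:

```python
import hmac, hashlib

def endorse(old_key: bytes, new_key: bytes) -> bytes:
    """Stand-in endorsement using HMAC; a deployed system would use an
    asymmetric signature scheme such as Ed25519 instead."""
    return hmac.new(old_key, new_key, hashlib.sha256).digest()

class Contact:
    """Key continuity: a new key is accepted only if endorsed by the key
    we already trust, so the server alone cannot swap keys."""

    def __init__(self, initial_key: bytes):
        self.key = initial_key  # exchanged out-of-band when adding the contact

    def rotate(self, new_key: bytes, endorsement: bytes) -> bool:
        if hmac.compare_digest(endorse(self.key, new_key), endorsement):
            self.key = new_key
            return True
        return False

alice = Contact(b"old-key")
assert alice.rotate(b"new-key", endorse(b"old-key", b"new-key"))
assert alice.key == b"new-key"
# A server injecting its own key without the old key's endorsement fails:
assert not alice.rotate(b"mitm-key", bytes(32))
```

The weak spot, as the reply below notes, is recovery after a lost device, when no trusted old key exists to endorse the new one.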

~~~
londons_explore
Works until someone loses their phone.

At that point the server steps in and hands the new key to all of the old
user's contacts. That's most likely where the flaw was in this system.

~~~
qwerty456127
Every reasonable person who seriously cares about privacy has a second
device with access to the same account, and as soon as they lose a phone
they tell everybody the account is compromised and make a new account. It's
amazing how careless some criminals happen to be; I know many don't even
bother to delete messages after reading them.

------
xiao_haozi
I wonder why they would make this information public. Wouldn't it be
advantageous for them to let those with criminal intent continue to use the
platform, giving the police an established surveillance method? Or was it
unavoidable that it would become public in criminal complaint filings and
such?

~~~
supakeen
It was made public because the exposed operations of criminals led to threats
being made against assumed 'leaky partners'. The police did not want these
people to retaliate against other people and possibly endanger bystanders.

While it might not be mentioned in this article, it's stated directly in an
article from the Dutch DA here:
[https://www.om.nl/actueel/nieuwsberichten/@104414/doorbraak/](https://www.om.nl/actueel/nieuwsberichten/@104414/doorbraak/)

~~~
londons_explore
I can't imagine USA authorities taking the same course of action...

"Let criminals kill criminals. Not our problem".

------
squarefoot
I know nothing about the IronChat service, but if what is shown in that
archived page is the real device then it's pretty obvious that it was doomed
to be cracked. A cellphone? Seriously... a freaking cellphone?

Cellphones -- all of them -- use closed binary blobs as device drivers, and
to date there is not a single cellphone in the known universe that is free
of proprietary closed code. That includes the Librem 5, which is a
wonderful step in the right direction but still not completely free of
closed blobs, hence not secure.

So what's the problem with (closed) device drivers? Well, they run all the
time, they run at maximum privilege (higher than root), and they cannot be
audited to spot malicious code, which makes them the most effective place
to hide spyware. If any government tells a hardware manufacturer to "put
our spyware into your driver or your business ends tomorrow", they comply;
nobody can spot the code and no anti-malware software will detect it.

But why should one care if all text is end-to-end encrypted? Well, on a
bugged phone there's no such thing as safe encryption. Let me be clearer:
when you type text on the virtual keyboard, or on any device connected via,
say, the USB port or Bluetooth, the text is read by the relevant drivers
(higher priority, closed, not auditable) _before_ it reaches the encryption
code (a lower-priority user app), so it can be stored, transmitted (network
drivers are closed too), etc. Closed device drivers can be used on most
platforms (including PCs) to build a covert channel through which
information (text, sound, images, etc.) travels completely unbeknownst to
the user, so a platform can't be considered secure until every single bit
of software and firmware it contains can be checked.

So, how did the police decrypt that traffic? I can only speculate that they
confiscated one of these devices, built a bugged driver for some vital
component within it, then went to the manufacturer and forced them to push
that tampered driver as an online update for that model of phone, possibly
installing only if certain conditions were met, to be sure it hit one of
the targets.

If that scenario is even half true, then there is not a single piece of
computer hardware in the world one can safely assume to be secure. An
Arduino-like board, maybe, until the day they build faster ones around
bigger chips carrying closed blobs inside.

~~~
errantspark
While what you're saying is technically correct, I think you vastly
overestimate the resources available and likely to be committed by law
enforcement, and the competence of the people making/selling these phones.

~~~
mmirate
When you overestimate a foe, you waste some resources; when you
underestimate them, you die or get tortured, etc.

~~~
brokenmachine
It's sad that the police are considered a foe. Aren't they meant to be the
good guys?

~~~
dddddaviddddd
Police today, maybe organized crime tomorrow? — encryption protects either
without prejudice or not at all.

------
sushid
If you were on the list before, I’m sure purchasing a 1500 EUR/6 month crypto
chatting phone would definitely pique the police’s interest.

------
thinkingemote
Anyone know if the security company had warrant canaries or any dead man
switch situations specified which were or were not triggered?

------
tomford614
Anybody who wants a job making a new crypto phone, hit me up on Wickr: tomford614

------
trhway
rephrasing a saying popular on HN: if you're using somebody else's crypto
app, you're doing it wrong.

~~~
colejohnson66
But isn’t rolling your own crypto even worse?

~~~
DoritosMan
You wouldn't be rolling your own crypto. You could use standard crypto but
with your own app/UI to make the crypto easy to use.

~~~
United857
Then you have to worry about security from side channel attacks. Even things
like the iOS or Android task-switcher UI caching screenshots of your app can
be vectors.

~~~
DoritosMan
Oh yeah, making an app that is easy to use and still secure is definitely not
easy. But at least you would know how you implemented the encryption on it.

