
WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents - t0dd
https://www.nytimes.com/2017/03/07/world/europe/wikileaks-cia-hacking.html?smid=tw-nytimes&smtyp=cur&_r=0
======
dang
Main discussion at
[https://news.ycombinator.com/item?id=13810015](https://news.ycombinator.com/item?id=13810015).

------
spullara
This headline is extremely dangerous. The phone itself was owned. No
encryption was harmed by capturing the keystrokes and audio before it reaches
the application. NYTimes should be ashamed of themselves for basically lying
about the nature of the hacks.

~~~
shp0ngle
No, it's not.

The encryption is not broken, it's _bypassed_. The data go to an unintended
third party, even when the encryption is legit, rendering the encryption
useless.

So the word "bypass" is correct.

~~~
scblock
This is a dangerous headline because it implies that Signal was broken, which
could lead to people moving to LESS SECURE SERVICES because they think the
more secure one is broken, when in reality it is the phone and OS that are broken.

They have similar end result for the phone in question, but headlines like
this can lead to people being less secure on the whole.

~~~
sqeaky
Most users cannot tell the difference between the phone, the OS, an app, and
the signal (let alone an app named Signal). Likely the journalists worked with
tech-savvy people to make sure they understood this, and it was hard for them
to make sense of gigabytes of technical jargon and noise.

Arguing this point at all is silly when many people, even many IT
professionals don't know and don't care about the difference between bypassed
and broken. This arguing detracts from the important news...

The CIA sees fit to ignore the security of Americans by not alerting the
companies that make the software the CIA exploits. They do this to ensure they
can hack whoever they want, and there is no meaningful oversight and no
ethical, economic or constitutional consideration.

~~~
lvh
That hardly matters if people's response is to use other, less secure things,
as was the case with the Guardian and Whatsapp.

~~~
sqeaky
This is entirely a non-issue.

If a group with massive funding and pervasive reach like the CIA can operate
with impunity, it does not matter what app or what security you think you have.

~~~
lvh
Going from easy dragnet surveillance of unencrypted communications to having
to use targeted attacks that are expensive to deploy, develop, and maintain,
and that get patched (with, on iOS, ridiculously high penetration rates), does
not seem like a moot issue.

~~~
sqeaky
I don't see how this goes from one to the other. It seems that just about
every Android and iOS device can be part of an "easy dragnet" without any app
installed. If the wikileaks article is correct about the CIA having kept
multiple 0-day exploits hidden for each OS, then breaking anything even
remotely is a work ticket and not a research project for them.

The fine distinction of one app being singled out sucks, but it really is
small potatoes here. The owner of the app should write the NYT and complain
that their app was used inappropriately or perhaps write an editorial to get
even more free advertising. The real news is that the CIA lied to Americans
and the President so they could continue damaging American businesses, in the
name of protecting America.

It sounds like we are not too far off from the CIA being able to write self-
spreading malware that allows monitoring; they just haven't because... maybe
it would be too easy to spot. Oh wait, groups like the CIA did this already
and rigged it to delete itself when not on one of their intended targets'
machines: Stuxnet.

~~~
lvh
You made a specific claim: no app, easy dragnet, work ticket level, because
tons of hidden 0days. I'm taking it as read that a publicly patched one
doesn't count. Is there evidence for that claim in the actual documents?

Pending that, here is evidence of a counter claim. I'd repeat what tptacek
said, but he's whittled it down better than I could:
[https://news.ycombinator.com/item?id=13811541](https://news.ycombinator.com/item?id=13811541)

To cite Tony Arcieri, the only elite cryptanalysis trick in play here is
"Android is a tire fire". Cue surprised gasp from security researchers.

Furthermore, you did not refute my central claim. Popping a Cisco 12k: read a
bunch of unencrypted comms until detection. Target a specific person to get
bit by a specific iOS exploit: maybe read some of the data until it gets
patched. Surely you'll agree that one is drastically more expensive than the
other?

~~~
sqeaky
I haven't gone through all the documents but the summary does say verbatim:

> dozens of "zero day" weaponized exploits against a wide range of U.S. and
> European company products, include Apple's iPhone, Google's Android and
> Microsoft's Windows

The only presumption on my part is that they are remotely exploitable, which
is practically a requirement for mobile device exploits to be useful because
physical access is hard to obtain. I do plan on going further through these,
they look fun.

Of course encrypted communication is better for the user than unencrypted, but
this is not the place for that, which is why I ignored it. This was supposed
to be a discussion about massive government overreach, not petty squabbles
between apps. With unfettered access to these phones there are all manner of
hypothetical attacks that could go after any of these app providers and not
just snoop on the communications of the users. With root access to a large
number of phones and little oversight their capacity for harm is frightening,
this seems more worthy of discussion.

~~~
lvh
The documents do not mention encrypted communications; that same summary
editorialized them in.

------
anigbrowl
And people wonder why I am only lukewarm about encryption and opsec. I use
both for myself, but I gave up evangelizing to other people years ago because
(as I've said here on HN many times):

For regular people, the effort of encrypting things is simply not worth it
because they're powerless against a really determined attacker. It's rational
to protect against casual attacks from spammers and scammers, but protecting
oneself against state-level attackers is futile unless you make a full-time
job out of it.

Someone usually pipes up at this point saying 'we need to limit the powers of
the state', like some sternly-worded law is going to undo the existence of the
technology or take away the vast economic and political incentives to deploy
it. Get real folks, technology doesn't get un-invented, and powerful
organizations are just like powerful organisms; they're opportunist, they
maximize their own chances of survival, and when they do collapse the
resulting power vacuum is filled as rapidly as any other vacuum would be. One
can certainly seek to govern the behavior of a state or state organ, but
attempting to limit its technical ability is naive, for the same reason that
you'd be naive to try to fix police brutality by legislating about the design
parameters of police batons.

------
jMyles
> WikiLeaks, which has sometimes been accused of recklessly leaking
> information that could do harm

Nice passive voice there, NYT.

------
amckinlay
We really need Qualcomm and others to _document_ their hardware interfaces for
modems, baseboards, and SoCs so that open firmware and drivers can be
developed for these devices.

~~~
sqeaky
While I completely agree, and I think I understand why, could you expand on
this? If I tried to explain it I would probably not be as accurate as you.

------
idlewords
This headline is false and misleading, and does not reflect the headline on
the article (WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents)

~~~
r3bl
The headline here was the headline in the article. They've changed it after
the submission and I believe mods here are going to do the same.

~~~
dang
Yes. NYT often changes their headlines and we follow suit, with some lag.

------
uladzislau
You should operate under the assumption that your security IS compromised at
any given point in time (bypassed or otherwise). Then you can foresee and
prevent some worst-case scenarios, which usually come from hubris ("hey, our
app is 100% secure and tested by the top security experts - not like other
apps on the market").

~~~
jt2190
This point can't be emphasized enough. Sophisticated operators always assume
they're being listened to, and take precautionary steps.

------
upofadown
> According to the statement from WikiLeaks, government hackers can penetrate
> Android phones and collect “audio and message traffic before encryption is
> applied.”

This is a perfectly useless bit of information, in that it says nothing about
how this penetration could occur. Pretty much anything can be cracked with a
trojan. Something like a currently valid remote exploit would be a much bigger
deal.

I could say that all the secure apps are broken because I can stand behind you
and look over your shoulder while listening to anything you might say.

------
mtgx
To me this is much more worrying:

> As of October 2014 the CIA was also looking at infecting the vehicle control
> systems used by modern cars and trucks. The purpose of such control is not
> specified, but it would permit the CIA to engage in nearly undetectable
> assassinations.

[https://wikileaks.org/ciav7p1/](https://wikileaks.org/ciav7p1/)

Given the fact that car makers don't even have "PC age" security in their
cars, things are looking pretty bad for self-driving cars in general.

~~~
v64
Makes the conspiracy theories regarding journalist Michael Hastings' death in
2013 seem more plausible. [1]

Former U.S. National Coordinator for Security, Infrastructure Protection, and
Counter-terrorism Richard A. Clarke said that what is known about the crash is
"consistent with a car cyber attack". He was quoted as saying "There is reason
to believe that intelligence agencies for major powers — including the United
States — know how to remotely seize control of a car. So if there were a cyber
attack on [Hastings'] car — and I'm not saying there was, I think whoever did
it would probably get away with it."[68]

Cenk Uygur, friend of Hastings' and host of The Young Turks, told KTLA that
many of Michael's friends were concerned that he was "in a very agitated
state", saying he was "incredibly tense" and worried that his material was
being surveilled by the government. Friends believed that Michael's line of
work led to a "paranoid state".[80] USA Today reported that in the days before
his death, Hastings believed his car was being "tampered with" and that he was
scared and wanted to leave town.[81]

[1]
[https://en.wikipedia.org/wiki/Michael_Hastings_(journalist)](https://en.wikipedia.org/wiki/Michael_Hastings_\(journalist\))

~~~
coffeehike
His brother and family don't believe the conspiracy theories. If there was any
evidence, I don't think they'd be scared to say so in such an emotional state.

Also in the police report, I believe his brother said he had been using DMT
and he tested positive for what was likely Adderall. He was in a unique state
to truly be paranoid and throwing psychedelics in the mix could cause one to
try to cope in ways that challenge reality.

Of course, this also would be the perfect time to stage a murder and it's not
improbable that someone did discuss killing him. Also, DMT only lasts 5-10
minutes; he certainly wasn't driving while doing it, and if anything, it can
give you a sense of peace and acceptance of the craziness of life.

~~~
v64
I think given what we knew until today, it was prudent for his family to deny
the theories. Now that we have evidence showing car hacking isn't just some
theoretical exploit, but something they were actively looking into around that
time, it merits reexamination.

------
misterbowfinger

      According to the statement from WikiLeaks, government 
      hackers can penetrate Android phones and collect
      “audio and message traffic before encryption is applied.”
    

How is that possible? Isn't the data encrypted before it's sent over the wire?

~~~
libertymcateer
The kernel is owned (or some part of the phone below the application level).
The encryption only gets applied at the application level before the messages
are sent down the wire.

The interception happens _prior to the encryption being applied_. Think of it
as a dongle on the wire between your keyboard and the computer. It doesn't
matter if the computer is secure - the message is intercepted prior to any
encryption.

This, I am assuming, is what has happened here.
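The "bypassed, not broken" distinction above can be shown with a toy sketch
(all names here are invented, and the "cipher" is a stand-in XOR keystream,
not real crypto or real malware):

```python
# Toy sketch: a hook below the app captures plaintext before encryption.
import hashlib

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Stand-in "encryption": XOR against a keystream derived from the key.
    # Applying it twice with the same key decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(plaintext))

captured = []

def keylogger_input(text: bytes) -> bytes:
    # The "dongle on the wire": sees input before the app ever encrypts it.
    captured.append(text)
    return text

message = keylogger_input(b"meet at noon")
ciphertext = encrypt(message, b"session key")

assert ciphertext != message          # the encryption itself still works...
assert captured == [b"meet at noon"]  # ...but the attacker already has the plaintext
```

The point: the ciphertext is still opaque to anyone on the wire, so nothing
about the encryption was "broken"; the plaintext simply leaked out earlier in
the pipeline.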

Edit: lots of stuff deleted for very valid criticism, as below.

~~~
md_
> Given Google's stance of not encrypting local storage in any way that I am
> aware of, this is fundamentally unsurprising. I have long been saying that
> Android is insecure and that storing passwords in Chrome is dangerous.

ChromeOS and Android both implement FDE. There are some legitimate criticisms
of (especially) the latter, voiced by e.g. Matthew Green, but you're just
speaking nonsense here.

There's very little value in per-app encryption on desktop OSes; it's security
theater.

I shudder to think of what your "secure communications" app does. I hope
you're a good lawyer. ;)

~~~
libertymcateer
I am not talking about ChromeOS - I am talking about the Chrome browser.
Localstorage, last I checked, which was recently, is plaintext.

> ChromeOS and Android both implement FDE

Which is irrelevant if the runtime is compromised, which appears to be the
case.

~~~
md_
Let's be all Socratic here:

Given a desktop OS like Windows that implements FDE like Bitlocker and runs a
browser like Chrome, can you describe a hypothetical threat in which Chrome
encrypting localstorage would prevent exploitation?

~~~
libertymcateer
Yes - worms or browsers that scan local data files without accessing the
runtime of the parent application.

~~~
md_
So your threat model is "malware which has access to memory containing
plaintext but is written by idiots"?

0_o

~~~
libertymcateer
Dunno if you are still checking this thread, but I had a followup to this
question.

It seems to me that certain cryptoviruses function in the following way (e.g.
certain variants of ransom_vxlock - I will see if I can find a specific
example):

* The virus functions like other cryptoviruses, encrypting local data and holding it for ransom

* However, in addition to holding your local data ransom, it archives certain files that are likely to hold passwords (e.g., the chrome password store), and then emails them to the C&C server

If this is the case, would local encryption of the chrome password store be a
protection, or would the decryption of this store be trivial to the virus
author? Again, assuming that the virus author is a script kiddy.

So, basically, I am asking that if the characterization of the virus described
is accurate, doesn't that mean that the threat model I describe also actually
occurs in the wild? I'm not trying to be facetious here - I am trying to get
to the bottom of this.

I will try to find links to support the above.
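To make the question concrete, here is a toy sketch (invented names, stand-in
XOR "cipher") of the difference being debated: a file-grabbing exfiltrator
that just mails off the store's bytes, versus an attacker that reads the
plaintext after the application decrypts it:

```python
# Sketch: at-rest encryption stops a dumb file grabber, not a memory reader.
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR keystream: applying it twice with the same key decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

password = b"hunter2"
user_key = b"derived from the user's login secret"
store_on_disk = xor_cipher(password, user_key)   # encrypted at rest

# Script-kiddie exfiltration: copy the file bytes and email them home.
exfiltrated = store_on_disk
assert exfiltrated != password   # without user_key, this is just ciphertext

# A more sophisticated attacker reads plaintext after the app decrypts it:
assert xor_cipher(store_on_disk, user_key) == password
```

So the at-rest encryption raises the bar for unsophisticated malware, while
doing nothing against malware that can read the running application's memory
or obtain the key, which is the "not a security boundary" point made below.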

------
libertymcateer
Edit: deleted, for very valid criticism. Next time I won't post in a rush
during work hours.

~~~
md_
Don't take this the wrong way, but as a non-lawyer, I try to heavily caveat
any statement I make about the law.

Would you consider heavily caveating statements you make about information
security? A lot of what you say here is basically wrong.

~~~
libertymcateer
> I try to heavily caveat any statement I make about the law.

That is appreciated, and you are in the minority.

I'm taking this advice, btw, and being more circumspect when I post in the
future.

~~~
md_
:)

Cheers.

~~~
libertymcateer
For serious, thank you for taking the time to engage. I do take this
seriously.

~~~
md_
Yep. Same. And I would probably have posted a longer and less confrontational
explanation of why you're (mostly) wrong if I weren't tired after a long day
of work. ;)

The whole "why not encrypt local resources" thing is an odd red herring that a
lot of (even fairly experienced) people trip over. There was a massive public
furor over Chrome's chrome://settings/passwords (i.e. lack of a master
password) design choice a couple of years ago that was a specific such case in
point.

~~~
libertymcateer
This argument is almost exactly what I was on about. I'd love to see some
summary of it and why they came down where they did.

~~~
md_
Sorry. I went to bed. I'll frame the basic argument for Chrome and then show
how it expands to other systems.

Chrome:
=======

Someone who can access chrome://settings/password is presumed to have physical
access to your powered-on, unlocked machine. E.g. someone who sits at your
keyboard when you get up for coffee.

And that person can just as easily install a Chrome extension that sniffs your
passwords or steal your raw auth cookies directly from the developer console.
(He could even paste some JS into the developer console to intercept the
password as-typed by autocomplete!)

(Note that an attacker with access to a locked/powered-off machine or with no
local access is not part of the threat model, since they are presumed to be
addressed by FDE, screen locks, remote access controls, etc.)

Now, the major counterargument is essentially that a lot of unsophisticated
attackers (like spouses) may not know about cookie jars or JavaScript, but
they know about "view saved passwords." I find this argument somewhat
reasonable, but from some vantage point it's security theater--not knowing
about ctrl+j isn't a strong security guarantee, after all. So I view the
Chrome team's stance as being a very principled one, namely: don't invest in
"security" features where a bypass would not be a bug.

(In some literature this is referred to as a "security boundary", typically
defined as "a control which, if bypassed, has a bug." Note the contrast with,
for example, spam filters and antivirus, which may be sometimes bypassed while
working as intended.)

More generally:
===============

I think what was lacking in this conversation in general was a firmly defined
threat model and a firmly defined security boundary. My contention about per-
application encryption is that it doesn't represent a security boundary
because any attacker who can execute code that can read application-ACL'ed
data on disk is by definition either running code at a higher security level
(e.g. has root) or is running code at the same security level (and can thus
inject code into the browser process itself).

This conversation gets a little more complex when talking about mobile OSes
that have per-application sandboxing, but the same observation effectively
holds.

Anyway, I'm tired of typing, but hopefully that makes a bit of sense. Let me
know if I'm being confusing.

~~~
libertymcateer
So, tell me what I am misunderstanding here:

* On OSX, OS passwords are stored in the keychain.

* However, Chrome stores passwords in a local SQLite database [https://www.howtogeek.com/70146/how-secure-are-your-saved-ch...](https://www.howtogeek.com/70146/how-secure-are-your-saved-chrome-browser-passwords/), which, on OSX, I believe is in your Application Support folder ("ChromeDB").

* The user, who is not root, has read/write access to the ChromeDB.

* Is it not the case, then, that any script that has user-level permissions can access the Chrome passwords? Because Chrome is not available through the App Store, it does not store passwords in the OSX keychain, which, again, correct me if I'm mistaken, requires higher permissions to read? So that, for instance, a malicious script that only had user-level permissions could not access the contents of databases encrypted with credentials stored in the keychain?
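The concern can be sketched in a few lines. This is a simplified stand-in
(the path and schema are invented for illustration; real Chrome stores may
encrypt password_value with an OS-held key):

```python
# Sketch: the store is an ordinary file, so any code running with the
# user's permissions can open it directly.
import os
import sqlite3
import tempfile

profile_dir = tempfile.mkdtemp()                # stand-in for the profile folder
db_path = os.path.join(profile_dir, "Login Data")

# The "browser" writes its password store to disk.
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE logins "
            "(origin_url TEXT, username_value TEXT, password_value BLOB)")
con.execute("INSERT INTO logins VALUES (?, ?, ?)",
            ("https://example.com", "alice", b"s3cret"))
con.commit()
con.close()

# A separate "malicious script" with only user-level file access reads it:
rows = sqlite3.connect(db_path).execute(
    "SELECT origin_url, username_value, password_value FROM logins").fetchall()
assert rows == [("https://example.com", "alice", b"s3cret")]
```

Whether putting the value behind a Keychain-style API actually helps against
same-user code is exactly what the replies below address.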

~~~
sleepychu
If the malicious script can execute arbitrary code as you then you're owned;
essentially you have already allowed unsupervised use of your computer, which
as discussed is not part of the threat model.

~~~
md_
To expand on this (correct answer) very slightly, what you say
(libertymcateer) is true but misses the other vector:

A malicious script that runs as user X can typically (on desktop OSes) inject
code into any other process running as user X at the same security level. The
details vary by OS--in Windows there's a system call called NTCreateThread
that lets you inject code from a loaded DLL in your process into any other
process at the same or lower security level; in OSX, at a quick Google, it
looks to me like
[http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/thread_c...](http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/thread_create_running.html)
may do the same.

So the attack that this opens up is to basically wait until Safari is running
and loads the credentials into its memory--which it has to do to prefill the
password field in a page--and then just read that memory from your code
running in the same process in a different thread (which shares the address
space). And if you don't want to wait, you can simply request the credentials
directly from the Keychain API; Keychain doesn't know you're not "Safari"
(since you're running in the same process) and will happily give you the
credentials!

Now, there's still a small advantage to the DPAPI/Keychain approach, namely
that it allows the OS to show approvals to the user ("Unlock the keychain?"
dialogs or whatever), ensuring that the malware can only steal credentials
while the user is present. There are some circumstances where a credential API
is nonetheless useful. Offhand:

1. Cases where there's a test-of-presence ("Do you want to unlock the
Keychain?") conducted by the higher-privilege process, and where approval is
not routine (so that the user is not going to just click "OK"). Browser
password autofill is not such a case, however.

2. Cases where there's a test-of-presence and the user assent is
transactional (i.e. they see what they're approving and the approval is only
good for that one action--as with Windows UAC).

3. Cases where the credential granted is a signature and not a bearer-token,
and we find some advantage in the token itself being bound to the device.
(IOW, the down-level process can steal a signature, but it can only use the
signature for some limited use, and cannot steal the signing secrets, which
never leave the privileged domain.)

So to get this right requires a lot of thought about things like broker
processes, transactional approval, etc. I'm far from an expert in this, but
hopefully the above makes some sense.

~~~
libertymcateer
Right - you are describing a very well written worm up at the top of your
comment.

However, in my experience (disclaimer: the plural of anecdote is not data, I
am very well aware of this), the frequency of worms and viruses that are
released by script kiddies using commercially available malware is on the
rise, and these are malicious and effective but not terribly sophisticated.
Check my other thread for more on this.

In other words, what I am saying is that you are describing a very nasty
theoretical worm - I am, however, describing to you a family of worms that is
currently out in the wild and causing a hell of a lot of damage, and, as far
as I know, actually does function in the way I describe. Filecryptor viruses
can be made / purchased by any script kiddy jerk these days, and it seems to
me that they do not function in this very sophisticated way you describe, but
instead _may actually be stymied by local encryption_ of files with passwords
in them. (Or, rather, the distribution of your passwords to the virus owner
would be stymied.)

I would very much like to know if this is accurate or not. I understand that the
devil is in the details, but if it is true, then I stand by my point that it
seems unwise (borderline indefensible) not to encrypt local password stores -
as there is a known valid threat. If it is not true, then I stand corrected -
which happens all the time.

Either way, I am deeply interested to know.

~~~
md_
I don't have a huge amount of exposure to current malware trends, to be honest
--it's not the area I work in at the moment. So tl;dr I can only guess.

You're right that unsophisticated malware may be thwarted by per-app disk
encryption or credential stores like Keychain, but it doesn't represent a
security boundary. That's why I would describe the Chrome team's approach as
being "principled"--they're refusing to implement an ambiguously useful
security feature because its bypass would not represent a bug.

Whether such a feature is nonetheless valuable for the user is unanswered by
that discussion, however; as you say, it may have value in some circumstances.

However, remember that by volume most exploitation is (as best as I can tell)
economic--people who do it for business. And people doing it for business can
buy whatever malware is on the market. If stealing in-memory secrets is
reliably accomplished (which it is), malware vendors have a strong incentive
to implement this and sell it as well.

So I think you have the right idea, but answering the question is nontrivial.
If Chrome implemented file encryption (or, more likely, used the platform APIs
where available), would the engineering cost (and complexity--e.g. different
behavior on different platforms) be counterbalanced by the increased cost
imposed on malware authors? Or would one or two malware authors quickly adapt
and malware prices/effectiveness would remain fairly static?

You get the point.

~~~
libertymcateer
Found it: the worm is called DynA-Crypt.

Check it out and let me know what you think.

Edit: from the top google result on Dynacrypt:

>While the ransomware portion of DynA-Crypt, as described in the next section,
is a pain, the real problem is the amount of data and information this program
steals from a computer. While running, DynA-Crypt will take screenshots of
your active desktop, record system sounds from your computer, log commands you
type on the keyboard, and steal data from numerous installed programs.

>The programs and data that DynACrypt steals includes:

>Screenshots

>Skype

>Steam

> _Chrome_

>Thunderbird

>Minecraft

>TeamSpeak

>Firefox

>Recordings of system audio

------
bitmapbrother
CIA Android Exploits

[https://wikileaks.org/ciav7p1/cms/page_11629096.html](https://wikileaks.org/ciav7p1/cms/page_11629096.html)

As you can see they pretty much all reference very old versions of Android
(v4) and Chrome.

------
throwaway31763
I thought they were already compromised since both these services use SMS
authentication; since the defaults AFAIK aren't particularly concerned about a
change in the public key, it's broken for anything secure anyway.

Tox on the other hand seems much more secure... though I guess if your phone
is compromised you're pretty much screwed to start with (which is not too hard
with all the bloatware one needs these days).

~~~
r3bl
See this:
[https://github.com/TokTok/c-toxcore/issues/426](https://github.com/TokTok/c-toxcore/issues/426)

Long story short: if someone obtains your Tox private key, they are able to
impersonate you in the conversations with other people without you realizing
it.

Tox developers admitted this was an issue. Fixing this means changing the
protocol itself (which will affect everyone).

Tox is still experimental (which they admit here:
[https://github.com/TokTok/c-toxcore/issues/426](https://github.com/TokTok/c-toxcore/issues/426))
and it is not advisable to use it.

------
icodestuff
Given the other revelations of the last few weeks, I have to wonder if these
exploits are getting installed on every phone that the CBP demands people
unlock. Seems like the obvious thing to do. Best not to trust your phone or
any software on it at least without a factory reset, and preferably a software
update, after it's been in CBP custody for any time.

------
uncoder0
Besides the initial titlegore, these tools really aren't that surprising. I've
always operated under the assumption that if the NSA, CIA, etc are in your
threat model you've already lost.

------
james_niro
Lol at NYT, it says that when they jack into an Android phone they are able to
route the messages to a third party before they get encrypted

------
evjim
This is why we should not rely on encrypted apps running on top of some other
platform.

disclosure: working on an open source alternative for messaging

~~~
tripplethrendo
Wouldn't you also need an encrypted os for your phone?

~~~
evjim
A fully open source RTOS that is trusted and only running this single
application. The only external communication is the encrypted messages.

------
mattcoles
This just in: man looking over your shoulder bypasses strongest Signal
encryption!

------
palavsen
I found none of these revelations surprising. In this era, you have to assume
that someone is monitoring you. You're naive if you think otherwise.

