NSO Group's iPhone Zero-Days used against a UAE Human Rights Defender (citizenlab.org)
1055 points by dropalltables on Aug 25, 2016 | 242 comments



Amazing work by Lookout and Citizen Lab.

Until this point I was not aware that Lookout provided any value-add for mobile devices. I was under the impression it was the McAfee of mobile.

It sounds mean, but this is the first reference on their blog to vulnerability discovery they did themselves; the blog usually reports on security updates that Google's Android security team discovered. Previous entries include such gems as "Now available: The Practical Guide to Enterprise Mobile Security" and "Insights from Gartner: When and How to Go Beyond EMM to Ensure Secure Enterprise Mobility."

I can't wait to see more great work. Lookout is now on my radar.


Direct links to other resources:

Technical analysis: https://info.lookout.com/rs/051-ESQ-475/images/lookout-pegas...

CitizenLab analysis of the nation-state side of things: https://citizenlab.org/2016/08/million-dollar-dissident-ipho...

Apple update: https://support.apple.com/en-us/HT207107


Love that Technical Analysis.

If Apple, Google, MS, or Linux distributions did the following:

* Create SHA-1/SHA-256 checksums of every system and app file and store them in a secure database somewhere.

* Check and audit the system files from time to time and notify the user when changes happen.

Would it prevent these types of attacks, or at least notify the user that system security has been compromised?
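A minimal sketch of that idea in Python (the paths and the "secure database" file here are placeholders for illustration, not any vendor's actual mechanism):

  import hashlib
  import json
  import os

  def hash_file(path, algo="sha256"):
      # Hash a file in chunks so large binaries don't need to fit in memory.
      h = hashlib.new(algo)
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              h.update(chunk)
      return h.hexdigest()

  def build_baseline(root, db_path):
      # Record a checksum for every file under root -- the "secure database".
      baseline = {}
      for dirpath, _, filenames in os.walk(root):
          for name in filenames:
              path = os.path.join(dirpath, name)
              baseline[path] = hash_file(path)
      with open(db_path, "w") as f:
          json.dump(baseline, f, indent=2)

  def audit(db_path):
      # Return files that are missing or whose checksum no longer matches the baseline.
      with open(db_path) as f:
          baseline = json.load(f)
      return [path for path, expected in baseline.items()
              if not os.path.isfile(path) or hash_file(path) != expected]

  # Example: build_baseline("/usr/bin", "baseline.json") once,
  # then run audit("baseline.json") periodically and alert on any result.

The hard parts, as the replies point out, are keeping the baseline itself out of the attacker's reach and telling legitimate updates apart from tampering.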


Tripwire is a Linux util for doing just that. However, you need some read-only media to store the hashes, and I think rootkits can still just intercept the read calls.

http://linux.die.net/man/8/tripwire


Some years ago, I had Tripwire installed for a few days but quickly removed it again. Whenever I upgraded installed packages, I'd get a storm of messages about files that had changed, which was just annoying since I was the one who had initiated the action that caused the files to change. At the same time, so many files changed that I had no way of distinguishing legitimate changes (as they all were) from any potential illegitimate ones.


That's precisely what it's supposed to do. If you update the Tripwire db every time you "initiate an action that causes monitored files to change" - then it does a _magnificent_ job of telling you when someone _else_ changes those files.

You need to run 'tripwire --update' every time you run 'apt-get update' or 'pip install foo' or 'npm install bah' or whatever - then you won't get that storm of false positives.


Honest question from an interested party: have you actually had it alert you about someone else changing the files?

I've been in the previous commenter's boat: having it running on a system I inherited, and giving up because it seemed like too much hassle.


Yes - a few times a year when normal and authorised things or people have unexpectedly changed files in tripwire-protected places, and in ~20 years I think three times when I'd had an intrusion.

Those three timely notifications of real breaches have made 20+ years worth of occasional false positives 100% worth it.


On Debian you can also use debsums, which has the advantage that the checksums come from the distribution directly.


iowatch is a nice simple alternative.


I'm on mobile so links are annoying to get, but Apple has a great PDF on iOS security in general, which includes details on their protections against kernel patching. Windows has KPP, and Mac OS has SIP. Not familiar with anything for Linux, but I'd be shocked if there weren't multiple incompatible implementations of similar features.

Realistically, this is also something virtualization can help guard against. If your OS is initialized from a known good version external to the VM every time the VM starts, you greatly increase the difficulty for an attacker to get persistent root.

KPP: https://en.m.wikipedia.org/wiki/Kernel_Patch_Protection

SIP: https://en.m.wikipedia.org/wiki/System_Integrity_Protection


Doubt it. Once you have control of the system, why would you not be able to just disable the check? What they should do is enforce code signing at the processor level.


Too bad that has its own issues. Android/Google definitely has the clout to be able to call the shots (i.e., force the processor manufacturer to issue them a CA cert for their own use), but what would happen if ARM then became a dominant server arch?

Also, putting code signing in the processor wouldn't fix the problem: code signing already happens at higher levels (i.e., higher than userland), so moving the verification a level up would likely take the exploitable bugs with it. The problem remains.


If the checks are enforced by the bootloader which is signed, then you cannot disable them. I think this is how the system partition is protected on recent Android versions. As this attack demonstrates, simply requiring all code to be signed isn't enough.
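A rough Python sketch of why that works (a simplification for illustration, not Android's actual verified-boot implementation): each already-verified stage carries the expected hash of the next stage, so an attacker can't swap out a later stage - including the one that performs the integrity checks - without breaking a signature they can't forge.

  import hashlib
  from typing import NamedTuple

  class Stage(NamedTuple):
      name: str
      image: bytes      # the code for this boot stage
      next_hash: str    # expected SHA-256 of the next stage, baked into this stage's signed image

  def verify_chain(stages):
      # stages[0] is the root of trust (e.g. boot ROM), assumed immutable and trusted.
      # Every later stage only runs if the already-verified stage before it vouches for its hash.
      for current, nxt in zip(stages, stages[1:]):
          if hashlib.sha256(nxt.image).hexdigest() != current.next_hash:
              raise RuntimeError(nxt.name + " has been modified; refusing to boot")
      return True

Disabling the check would mean modifying one of the later stages, and the stage before it would then refuse to hand over control.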


Doesn't "rpm -Va" do this (minus the "secure database" part)?


And quite the heads-up move by Ahmed Mansoor to recognize the suspicious text for what it was and send it to the research team instead of clicking the link. If this thing really has been going on since iOS 7, that means he is the outlier in taking precautions.


FTA: He had been targeted previously by FinFisher AND Hacking Team's malware. Avoiding malware is nothing new to this guy, something this NSO Group should have taken into account when they came up with their spear-phishing attack.


Sure but how is that responsive to parent's point about Mansoor being an "outlier in taking precautions"? The reason he found out about the previous attacks was likely because he took similar precautions:

"When Ahmed Mansoor opened the document, his suspicions were aroused due to garbled text displayed. His email account was later accessed from the following suspicious IPs.."

https://citizenlab.org/2012/10/backdoors-are-forever-hacking...


I'm not sure if you call it a "precaution" when you notice someone's pwnd you. Still good job noticing it after the fact.


Then again, he's not some comfortable first-world programmer who makes $100K a year and enjoys talking about infosec and opsec as a fun diversion, he's a guy living in a repressive third-world dictatorship who has put his entire life on the line for the human rights of others and probably has little to no computer science or infosec education, so, maybe cut the guy some fucking slack.


> third-world

Agree strongly, just a small note: UAE is quite wealthy. Higher PPP-adjusted per capita income than Sweden.


NSO Group claims not to launch attacks itself; rather it only sells tools to do so. So it might even be that the same government has targeted him with all three hacking tools.


Yep -- This is legit security research. Excellent research.


When I wrote the above comment, this thread was linked to the Lookout blog page, not the more appropriate citizenlab page.


> Amazing work by Lookout and Citizen Lab.

Hopefully Apple "made it rain" on these guys (with cash).


There is a frustration, as a user, that as the value of iOS exploits increases, they become more and more 'underground'. The time between OS release and public jailbreak is continually growing - and it doesn't seem to only be due to the hardening of the OS. People are selling their exploits rather than releasing them publicly. And the further underground they go, the more likely they will be utilized for nefarious purposes rather than allowing me to edit my own HOSTS file. The most recent iOS jailbreak (to be able to gain root access to my iPhone) lasted less than a month before Apple stopped signing the old OS. Yet it's clear this (new) quick action on Apple's part does not (yet?) stop persistent state-sponsored adversaries.

It is more and more clear that to accept Apple's security (which seems to be getting better, but obviously still insufficient) I must also accept Apple's commercial limitations to the use of a device I own. And I suppose that the dividing line between the ability to exploit a vulnerability and to 'have control' is a sliding scale for every user: one man's 'obvious' kernel exploit is another man's 'obvious' phishing scam.

It is not a new tension, but the stakes on both sides seem to be getting higher and higher - total submission to an onerous EULA vs total exploitable knowledge about me and my device. Both sides seem to have forced each other to introduce the concept of 'total' to those stakes, and that is frustrating. More so when it's not yet clear which threat is greater.


This is what happens when the concepts of security, DRM and commercial restriction get entangled.

This reminds me of the PlayStation 3. It remained an un-hacked console for so long and the theory goes that the people who wanted to tinker with it could do so without being forced to fight on the same side as the bad guys, because Sony allowed 'Other OS'. When Sony closed off 'Other OS', this gave incentive for people to actually try and jailbreak the system [1].

Yet now the people that just wanted to tinker had to take the same route that people who just wanted to pirate would have to take. By locking the platform down further, Sony only succeeded in merging the two camps (benign tinkerers and pirates). I think there's a lot of validity in this theory.

It's a tough choice. As an iOS user I've long since come to the same acceptance as you - that the added security is worth the extra restrictions. Yet it doesn't have to be this way. Protecting your platform from hackers shouldn't be the same as protecting your platform from SNES emulators or games with adult themes.

[1] I believe it was this talk where this view was put forward, but I might be wrong (at work, can't really double-check): https://www.youtube.com/watch?v=PR9tFXz4Quc


You say Apple's security isn't sufficient. It certainly appears that as time goes on Apple's security is pretty sufficient for most users. We're talking about exploits worth 1+ million dollars being used in a targeted attack against a single individual (or, more likely, a relatively small number of targeted individuals over time). This isn't something that the overwhelming majority of users need to be concerned about. Obviously it would be great if Apple's security was so good that not even nation states could get past it, but that's an incredibly high bar.


I don't think the bar is just nation-states. The bar also includes those with any of the following:

- $1M Cash

- skilled working knowledge of Apple's software and hardware

- fast reflexes to quickly react and apply a newly-public exploit derived from any of the above

Together, the number of world-wide actors who fall into one of those categories is actually fairly large. Those all have the capability to have total 'access' to my device. Given the value (to me) and the amount of data on that device, that's a huge hole, even for the majority of people. Further, with these networked exploits, the distinction between having one individual targeted, and all individuals running that OS is actually fairly slight. It wouldn't take that much more work to spam a well-designed exploit against an entire class of (normal) users.


They also require a few other things:

- a single person or committee with the authority to sign off on $1 million for this sort of thing.

- a willingness to risk the legal and PR consequences of being discovered.

Which cuts out a lot of potential corporate espionage.


This presumes that the exploits can only be found by companies with deep pockets, which are probably deep only because they are willing to sell them. What if there are equally good teams who are not in it for the money?


Why would a corporation maintain an espionage team for any other reason?


I don’t know about you, but there are probably zero persons who would pay $1000000 to gain access to the contents of my phone.


I think it's a bit naive to believe that an exploit worth $1+ million will remain a secret until the patch is shipped, at which point it is worth exactly nothing. There are so many ways that the exploit will trickle down to more people until it reaches mass use.

To mention a few ways: the buyer might want to recover the purchase price by reselling. If it's a government agency, they might want to establish credibility with future sellers. Reverse engineers have a $1+ million incentive, and they have a much smaller window to sell it before it becomes worthless.

It's like piracy. First it's the group with so-called FTP access. During that period of time, maybe a release is worth X amount of dollars, but what's the chance that it remains there and never reaches a mass audience?


I'm not sure what you mean. In this case the exploit did remain a secret until such time as a targeted individual was paranoid enough to send the link to an investigative team instead of clicking on it (and this is what resulted in the patch), or in other words, nobody else besides the attackers, the victim, the investigative team, and Apple knew about the exploit until the patch shipped.


Just as an aside: the 1M USD exploit was a bug bounty/publicity stunt. While that also was a chain exploit, that doesn't automatically mean this is a "1M USD" exploit, like everyone is claiming. Or am I missing something?


It's very clear that Apple doesn't want you to have full control over your device. So jailbreaking will only become harder and harder. Either accept it, or move to a more open platform.


As consumers we don't face very good choices right now.

When you buy an iPhone, you don't own it. You are a sharecropper on Apple's OS license.

If you buy an Android with an unlockable bootloader, you own it. But if attacked, the adversary owns the device.

It's a shitty situation but it's hard not to recommend iOS to most users.


My Android has an unlockable bootloader, but you need to actually request the key from the manufacturer. Malware can't unlock it against my will without a jailbreak. Seems like a decent arrangement to me: safe by default, but if I want to root my phone I can.


What I would love to see is a bootloader where I can load my own signatures. Preferably done via USB only, and by putting the device into a mode that requires certain button inputs during power-up.

Signed boot has uses, but we need to be sure that the user does the signing.


I was tripped up trying to unlock an LG G5 yesterday by the secure boot validation.

Turns out, in Android 6, there's a Developer Option called "Allow OEM Unlock" which does enable the ability to unlock the bootloader through fastboot.

While I can't sign my own bootloader, having a developer option to enable the unlock that can only be triggered from inside the OS is an interesting trade off.


"safe by default" except for the huge amount of userland vulns that Android has.


Yeah, but that's life with a complex bundle of software. It's not as though Apple's attempt at making a walled garden makes them magically better at not writing exploitable bugs in the userland. Mostly where they're better is at making it harder for normal users to intentionally install something that turns out to be malware, which isn't nothing, but with exploits like Trident, that makes absolutely no difference.


> Malware can't unlock it against my will without a jailbreak.

By jailbreak, you really mean "vulnerability" and unfortunately those are quite common in the Android hardware/software/bootloader realms.


Even the Nexus unlock where you don't need a key but need to boot into fastboot is okay. I don't think many pieces of software will be able to automatically perform the steps required for that, including a confirmation on the phone and one on the computer.


It also wipes the device IIRC, so you can't fastboot unlock a device you found on the footpath and gain access to its data.


Sidenote, it's valuable for devices to come to you locked, because it means the device you have received does not have 3rd party malware. A common problem in some markets for Android phones.


I guess ambiguity / assumed apple fanboyism is why you're downvoted, but if I'm understanding you correctly, then I feel largely the same way.

Apple's walled garden and "moral" approach to guarding their garden is incredibly frustrating, but the sheer number of vulnerabilities affecting different levels of the Android stack is so disheartening. This is true of my PC/Mac as well, all it takes is someone to plug in a malicious USB stick and it'll infect the firmware on the USB host, and that's it, game over. Can spread from there to disk firmware, also growing increasingly complicated and opaque, and who knows where else.

It's reaching a point where I trust my iOS device more than any other, depressingly because of the walled garden. I can't build suitable fences around my kit myself to protect myself against every new vuln (now even monitors can have their firmware exploited and screenloggers installed), giving up trying and sacrificing some freedom for that sense of security is terrible, but feels like the only logical course of action at this point.


I fully admit this line of thinking hasn't completely solidified for me.

But empirically the iOS ecosystem is demonstrating that there is a closed-source ecosystem with better security properties than the open-source one. Security advances the same virtues of user self-determination as open source does.


What do you mean? In this attack, the attackers leveraged a root privilege escalation exploit. So iPhones are just as owned.


And if an iPhone is attacked the attacker...doesn't own the device? I don't follow your reasoning.


On iPhone, user-mode exploits remain sandboxed. On Android, on rooted devices, user mode code tends to be able to get root.


Apples and oranges.

On iPhone, user-mode exploits may remain sandboxed, unless they break from sandbox too. On jailbroken iPhones, that last step may be pre-made for them.

On Android, user-mode exploits may remain sandboxed too, unless they break from the sandbox - same as iPhone. On rooted Android devices, the last step may be pre-made for them.

You cannot compare a stock iPhone and rooted Androids - just like you cannot compare a jailbroken iPhone and stock Android.


I look at it the other way: as exploits become more and more underground, I feel safer: I know those exploits are more likely to be used by state actors against activists and other people who are doing illegal stuff, and less likely to be used against me and millions of other users to install malware on our phones (to make them send spam, to make them send expensive texts...)

So yes I feel safer now.


Perhaps you feel that way because you have the luxury of living in a place where human rights are respected. That these exploits are being used by regimes to shut down opposition is terrible in its own right.

Edit: As for net effect on global society, I think having my phone be part of a botnet that sends spam is less impactful than disrupting democratic progress.


The point still stands: these bad regimes will find it harder to do their dirty work as security increases.


"I know those exploits are more likely to be used by state actors against activists and other people who are doing illegal stuff"

State actors usually have their own opinions about what's legal and what's not, and they tend to give themselves the benefit of the doubt because... the mission must succeed! So no, this "undergrounding" should not make you feel safer. You never know when you're going to cross paths with the next Snowden or any whistleblower or human rights activist.


I know it's cliché at this point, but I'm going to once again point out that to some state actors, simply being gay is the illegal stuff they are looking for, and the penalty may be death. They do not feel safer now.


You're not an activist yet. It's probably unwise to dismiss this possibility out of hand: governments change, and you might find yourself in strong enough disagreement that you'll need to speak up or act. Or your employer might do something so nefarious you'll feel the need to blow the whistle.

If there is ever a point when you feel the need to rise up, but can't because you gave in to government and corporate surveillance and lock down in the first place, that would be a pity.


"activists and other people who are doing illegal stuff"

Does this actually read how you intended it to, or was there an invisible comma/parenthesis there?


NSO sells tools that, when used, violate the CFAA. It is an Israeli company, but a majority share was bought by a San Francisco-based VC [0]. It doesn't seem like it should be legally allowed to exist as an American-owned company. Maybe Ahmed Mansoor could sue the VC in American courts.

[0] http://jewishbusinessnews.com/2014/03/19/francisco-partners-...


a) Selling tools itself doesn't violate the CFAA. A separate entity uses the tools and assumes that liability, which as we see is mitigated by sovereign immunity.

b) And even if selling tools began to violate the CFAA, then NSO itself would be sued. As it is a separate entity from the investors, which is the whole point of limited liability....


If you can tie the tool to any circumvention of copyright protections -- pretty broad argument (DMCA), you can be sued or arrested.


and then you lean on USC Title 17 Chapter 12 § 1201 (f) : the interoperability with other software defense, broad argument.


That makes more sense than my idea. But it would have to be Apple that brought the suit.


A little off topic, but that link has such an obnoxious share-tab-nubbin-thingy on the side of the page.


An untethered stealth jailbreak that installs without user interaction from a webview, that's almost as bad as it gets. And for iOS 7.0.0 - 9.3.4 inclusive. And with exfiltration of audio, video, whatsapp, viber, etc etc. So thorough and so bad :-/


> An untethered stealth jailbreak that installs without user interaction from a webview, that's almost as bad as it gets. And for iOS 7.0.0 - 9.3.4 inclusive. And with exfiltration of audio, video, whatsapp, viber, etc etc. So thorough and so bad :-/

Short of being triggered completely in the background by a UDP packet, what's worse than this?


Chaining this with some form of SMS/MMS bug (a la Stagefright) would make this unbelievably powerful. That's essentially the worst case scenario I can imagine for mobile security.


Or this, from the detailed writeup linked elsewhere on this page:

> To use NSO Group’s zero-click vector, an operator instead sends the same link via a special type of SMS message, like a WAP Push Service Loading (SL) message. A WAP Push SL message causes a phone to automatically open a link in a web browser instance, eliminating the need for a user to click on the link to become infected.

It goes on to say that messages of this type are increasingly restricted by service providers and newer phone OSes, but that's still pretty horrifying to read.


Wow this WAP Push SL thing seems egregious. It's understandable that somebody thought it would be useful, for like five minutes. But how could a standards body or any of the several different OS companies who have implemented it not have realized how monumentally unwise it is to just automatically run shit that randomly gets sent to a phone?


Not everything that supports WAP has to be a phone. It could also be a standalone device, or sensor, or whatever. And with WAP push you can control it.

For me the strange thing is that it is on by default on user phones.


This stuff was all designed by European mobile operators (many of them monopolies) in the mid-90's. Not exactly a peak period for security thinking.


I wonder if it can be triggered from the webview that automatically pops up when a captive wifi portal is accessed. Needs proximity to the user, but still straightforward.


If the attacker controls the wifi, he doesn't need to put it in the captive portal page; he can intercept any non-secure http page and put his exploit there.

You really shouldn't connect to untrusted networks at all if you want to be safe from this kind of attack.


Likely. Then again, good opsec would imply that you don't join untrusted networks, period.


When your service provider is owned by the state, all you can rely on is the OS provider.

Maybe we should all just go back to carrying dumbphones.


That's presuming you have more faith in your desktop/other computing systems to be safe in the long run.

Personally, I'd take iOS over any alternative, if security was my biggest concern.


I can recommend trying Qubes OS.


Anybody know if and what limitations iOS and/or Android put on WAP Push SLs?


It likely doesn't have persistence, due to the secure boot chain, so it could get worse.

Or there could be attacks against the Secure Enclave.


It does have reboot persistence. That's what untethered usually means.


yeah, I'm wondering if it's re-exploit on boot or actual subversion of the OS though


What's the difference? :)

It's explained in detail here: https://info.lookout.com/rs/051-ESQ-475/images/lookout-pegas...

Apparently it overwrites a system binary that's launched on boot with another apple-signed binary "jsc" (a console javascript interpreter), which will evaluate some sort of .js that re-exploits everything. Pretty clever to re-use apple-signed binaries for nefarious purposes. (The binary must be apple-signed because when booting the kernel isn't exploited yet and so it enforces code signing, obviously).



I remember going into the Apple store and every iPhone on the display tables being jailbroken due to that site.


>We recognized the links as belonging to an exploit infrastructure connected to NSO Group

So they were re-using $3 domains to send out a million dollar exploit? Am I reading this right?!


Not really without user interaction. The target in this case would have had to visit the exploit site.


Would something like proofpoint help?


I wonder if you can hit this via 4G/LTE networks? I also wonder if it works over VPNs? Or is hardware L2 adjacency (WiFi) required?


It would affect anything capable of rendering html.


The UAE really hates on activists, and appears to be hiring a bunch of people specifically to suppress activists/dissidents within the country. [1] Unfortunately, due to the amount of wealth the country has, it won't stop almost anybody from dealing with them unless Western sanctions are placed on the country, which are unlikely given the current geopolitical situation.

https://www.evilsocket.net/2016/07/27/How-The-United-Arab-Em...


Don't forget the time they pushed an "update" for blackberries: http://news.bbc.co.uk/2/hi/8161190.stm


Don't forget that Etisalat is now the majority shareholder and pretty much runs PTCL, the incumbent/largest telephone and telecom company in Pakistan, either... PTCL is to Pakistan as Verizon, Frontier or Centurylink are to various regions of the US. It's the ILEC.

Etisalat is not your friend. Etisalat has great marketing and is building GSM-based (LTE, etc) networks in many developing nations but it is no friend of an open internet or democratic institutions.

Etisalat is the reason why in some places in the world if you try to run a VoIP to Phone system gateway, armed men with carbines will show up and ransack your offices and home. They will use their influence with whatever local government exists to "deal with" threats to their revenue and/or tax base. This has happened in Pakistan and the UAE.


> armed men with carbines will show up and ransack your offices and home

This is a solid reminder that in the end, your ability to use defensive technology does not actually decide who calls the shots. Power is still ultimately controlled by violence.


This is the problem with surveillance technologies: they frequently end up being used not just against enemies, but anyone who disagrees with the government or threatens the status quo. Sadly, this happens even in democratic "free" countries.


> not just against enemies, but anyone who disagrees with the government or threatens the status quo.

Those are enemies of the state. What you consider enemies are not who everybody regards as enemies. That is why there is no such thing as allowing 'good guys' using these tools for good and preventing 'bad guys' using them for bad.


Should exploits like this be treated as munitions, with sale to foreign governments restricted? Or any sale at all restricted? Some thoughts:

* The only uses for the exploits are either illegal or by government security organizations

* I don't think you can just make an explosive and sell it to a foreign government; I think there are strict export controls (though I know very few details, I only read about companies applying, getting approval, etc.).

* In the 1990s, strong encryption was called a 'munition' and export was restricted. That turned out to be impractical (it was available in many countries and the Internet has no borders), morally questionable (restricting private citizen's privacy), and it fell apart.

While I believe in liberty and freedom-to-tinker, as I said, this stuff has no legitimate use.


No, exploits are more widely used in industry (for testing and red-teaming) than they are by governments, simply because there are more red teams than there are government-sponsored intelligence and police agencies.

It's hard to imagine a scheme under which exploits could be regulated in the US that wouldn't set precedents for whether code was protected speech. I think very few people on HN would be comfortable with those precedents.


EDIT: As kbenson points out below, it's not just red teams but people wanting to tinker with their own equipment: Get data out of a proprietary app, install a 3rd party OS, unlock their phone, etc. That seems like a very difficult problem.

> It's hard to imagine a scheme under which exploits could be regulated in the US that wouldn't set precedents for whether code was protected speech

Yeah, I was thinking about that too ... and fully support freedom-to-tinker, etc. ...

First, no right is absolute. We can't slander people despite free speech rights, or commit human sacrifice despite freedom of religion, or own a fully automatic machine gun despite a right to bear arms (in the U.S.).

We'd want to create exceptions for research, etc (see below) but I don't think the line is prohibitively hard to draw. The big problem I see is open source security bug reporting: There should be a way to openly notify the vendor and public without releasing the exploit into the wild, but it is a little tricky.

> exploits are more widely used in industry (for testing and red-teaming) than they are by governments

Good point, but I don't think that's a big challenge. Exceptions could be made as they are for other 'munitions' and other illegal products (e.g., drugs used for research).


I'm not saying it's impossible to generate an intellectually coherent set of regulations for exploits, just that the process of doing so is going to damage the 1A protections of a lot of other things over the long run.

Is it worth it? I don't think so. Unless you also regulate research, which is a non-starter, you're just driving exploit development out of the US. Substantial amounts of exploit dev are already done by foreign nationals. If virtually all of it leaves the country, what public policy problem have you solved?


> If virtually all of it leaves the country, what public policy problem have you solved?

A good point. A couple ideas, though neither is sufficient:

* International agreements control distribution of other dangerous goods; that's doable. However, look at how well that works with drugs, and even nukes get around.

* At least stop sophisticated organizations (defense contractors, SV firms, etc.) from making them for foreign governments. Their skills are harder, though not impossible, to replace. Perhaps ban the sale of exploits - taking away the profit motive - but permit distribution for personal, research, etc. purposes.


I don't think you fully follow. The skills can't be regulated: they're pure research. The US research community will continue doing the fundamental enabling work relied on by exploit developers; it's just the people who do the testing and integration work who'll have to have their paychecks sent to Southeast Asia.

It's a very difficult problem.

There's also some bigtime cognitive availability bias happening here. We read lurid stories centering on "zero-day exploits" and say "something must be done". But no matter what these articles say, it seems cosmically unlikely that an exploit dealer is worth a billion dollars; the entire exploit trade is a rounding error compared to the switching and filtering equipment companies knowingly sell China and Iran for use in putting dissidents to death.


Under that logic tape recorders should be regulated because they could be sold to people who would record your conversations illegally, then used to create a fake conversation using your own words in order to achieve some illegal end. Treating spyware as a munition is a dangerously slippery slope.

And slander is not a criminal offense but a civil one -- one has to prove actual damages to win a slander suit (at least in the US.) So spyware could fall under the slander concept, where the victims could sue based on actual damages incurred.

So one would need to prove that a piece of spyware caused them actual damages. Then you get into some other interesting unintended consequences: could a browser extension or even a cookie be construed as being spyware? They kind of are -- except (generally) you consent to those things. However, where would the line be drawn? Could a company like Mixpanel find themselves inadvertently having their product being considered a munition?

I take it to a slightly absurd extreme to illustrate how good intentions can have ridiculous consequences. Governments don't have the best track record when it comes to anticipating unintended consequences.


Isn't cryptography a controlled export?


Yes in some contexts.

Most people don't know about it though. I think everyone thinks we won that "war" completely. Even talking to someone like Phil Zimmermann, he wasn't aware of it.

Granted, it is more about exporting to "rogue states" and more of a registration requirement. But it is something companies (especially startups) probably forget to do. And I don't know of anyone personally who got in trouble over it.


And to extend your thought further, should US-based VCs be backing this? NSO is backed by San Francisco-based Francisco Partners [1].

[1] http://www.reuters.com/article/us-nsogroup-m-a-idUSKCN0SR2JF...


> * In the 1990s, strong encryption was called a 'munition' and export was restricted. That turned out to be impractical (it was available in many countries and the Internet has no borders), morally questionable (restricting private citizen's privacy), and it fell apart.

IIRC, that's still on the books. It's just one of those sleeping paragraphs since the PGP release.


Debian documents mention that "BXA revised the provisions of the EAR governing cryptographic software" in October 2000. Debian no longer has separate non-us repositories for crypto because of that.

https://www.debian.org/legal/cryptoinmain


Open source software is now basically exempt from the crypto export restrictions, which is why Debian doesn't need separate non-US repositories for it anymore. As far as I know closed-source software is still restricted.


Only "Military or intelligence cryptographic (including key management) systems" are included in the current US munitions list [1]. Everything else is handled by the Department of Commerce as part of the export administration regulations (EAR) [2].

[1]https://www.pmddtc.state.gov/regulations_laws/documents/offi...

[2] https://www.bis.doc.gov/index.php/policy-guidance/encryption



"it was available in many countries and the Internet has no borders"

Ummm... tell that to the Chinese or anyone, anywhere in the world trying to watch the complete international Netflix catalog. And as for VPNs, they're like the tunnels under the actual physical borders which are also not impenetrable.


Which foreign governments though? Not all security researchers are from your country (whichever one that may be).


I had the same thought as hackuser when reading the article, and then it was quickly followed by your point. I think an important first step would be to get certain things classified as arms. Once that's done, normal options may be able to handle them appropriately, such as not allowing the purchase or sale of certain types of arms within or over borders, etc.

This would of course open up a whole new can of worms in the US, as we are constitutionally guaranteed the right to bear arms, but that just makes it hard, not impossible (and could possibly even serve to provide some much needed nuance to that discussion in the US).

That said, I haven't put a lot of thought into this, so a well reasoned criticism could completely change my stance.


Would it then be illegal for Google Project Zero to publish a blog post about a vulnerability that a vendor refuses to fix?


I was thinking less of the knowledge being considered an armament, and more of an actual program that takes advantage of it being one. I don't consider the scientific knowledge required to create a gun to be an armament, nor even specific schematics, but governments may view it differently (indeed, they weren't happy about the 3D-printable gun).

Also, I don't think this concept is limited specifically to exploiting bugs. I think a program that was meant to access and catalog social media accounts for a person while hiding its accesses as much as possible, but run from a third party's location, might be considered an armament. Same with something designed to DoS a service. If the purpose is to cause harm, it might be an armament. I am aware there's probably a fine line here, and one that would inevitably be abused. I'm not sure how to deal with that, and whether the negatives there outweigh the possible positives overall.


Separating code from knowledge was part of the fun of the DeCSS debacle. "That's not a haiku; that's an illegal Perl script!"


And fundamentally it's the knowledge that matters. Programmers are "expensive" but not that expensive. Give any decent off-the-shelf code monkey the specifics of a vulnerability and he can give you exploit code.

Which means restricting the exploit code is quite useless. But restricting the knowledge itself doesn't work because the same knowledge is necessary to mitigate the vulnerability and to test that the mitigation is effective.


These days, exploiting vulnerabilities in most interesting code actually seems to be quite fiddly thanks to all the mitigation techniques and requires a bunch of specialist knowledge and tools that isn't exactly trivial to come by. The knowledge is already restricted, just for commercial rather than legal reasons.


Which is still knowledge. If you have the information you can make the tools.

I mean obviously in reality the line between "information" and "software" is non-existent because software is just a type of information, but if you insist on trying to draw a line anyway then it still fails because it's still possible to convey everything of significance using natural language, and the skillset required to convert plain language instructions into software is not rare enough to be prohibitive.


>thanks to all the mitigation techniques and requires a bunch of specialist knowledge and tools that isn't exactly trivial to come by

Eh, specialist knowledge yes. Restricted, no. Getting documentation on how chips and software work has always been somewhat restricted; just be a Linux person and try to get documentation from Broadcom on how their wifi/LAN chips work, for example.


I don't recall seeing that spirit and creativity in a long time. Or am I just getting old and cranky?

The battle for end-user control seems surrendered, at least by all but a few.


> we are constitutionally guaranteed the right to bear arms

It doesn't extend to all arms; e.g., you don't have a right to own anti-aircraft guns, weaponized anthrax, or even fully automatic rifles. What side of the line the exploits fall on is of course a question, but if I'm right that their only civilian use is illegal harm to others (e.g., you don't use them to protect your home or hunt deer) then it's simpler.


Yes, and that's what I meant about making it hard, not impossible. That said, there are uses of exploits which can be said are for the purpose of protecting property. I might conceivably want to use an Android or iOS exploit to liberate some of my data from my phone if some apps are less forthcoming with that data than I would like.


> I might conceivably want to use an Android or iOS exploit to liberate some of my data from my phone if some apps are less forthcoming with that data than I would like.

A great point that I should have thought of. I wish I could edit my original post and add that consideration.

I can draw a conceptual line: Ban using exploits on other people's equipment. But practically, I don't see how to stop that without criminalizing distribution, in which case I can't get my data from my phone (or install a 3rd party OS) without the vendor's permission.


> I can draw a conceptual line: Ban using exploits on other people's equipment. But practically, I don't see how to stop that without criminalizing distribution, in which case I can't get my data from my phone (or install a 3rd party OS) without the vendor's permission.

I don't understand what the problem is supposed to be. You don't need laws against knives because there are already laws against assault and murder and there is no harm in having a knife you use to cut carrots. Then you prosecute people for the bad things they actually do.

The justifiable laws against specific weapons are for the exceedingly dangerous ones like plutonium and smallpox. That isn't this.


> You don't need laws against knives because there are already laws against assault and murder

A good point. In this case it's so hard to catch perpetrators that to stop the crimes, it could be necessary to ban the weapons or their distribution (if that even is a practical option).

Are there other similar situations, where perpetrators are so hard to catch that you have to ban the means? Counterfeiting is all I can think of, and they don't ban color printers, they just put tracking tech in them. Also, color printers are dual-use: they have many legitimate uses, exploits have very few.

> The justifiable laws against specific weapons are for the exceedingly dangerous ones like plutonium and smallpox. That isn't this.

Weapons that help foreign governments oppress large parts of their population might qualify, though clearly not all exploits fit that description.


> Are there other similar situations, where perpetrators are so hard to catch that you have to ban the means?

The nearest thing is clearly DMCA 1201. The problem of course being that DMCA 1201 is an epic failure. DRM circumvention tools are widely available to pirates, meanwhile it regularly subjects honest people to a choice between breaking the law and having it interfere with their legitimate activities.

> Also, color printers are dual-use: They have many legitimate uses, exploits have very few.

Exploits seem to have more legitimate uses than illegitimate ones. The only illegitimate use that comes to mind is wrongfully breaking into systems, which is the mirror image of the legitimate use of rightfully breaking into systems, in case you somehow get locked out (or some malicious third party locks you out).

Then on top of that, sysadmins require exploits to verify that a patch actually prevents the exploit. And proof of concept exploits are sometimes the only way to convince a vendor to fix a vulnerability. And academics need to study the newest actual exploits in order to keep up with what currently exists in the wild.

> Weapons that help foreign governments oppress large parts of their population might qualify, though clearly not all exploits fit that description.

Smallpox is inherently dangerous. Some exploits could be specifically dangerous in the sense that some very sensitive systems could be vulnerable to them, but only in the same sense that a Fire Axe could be used to break down some doors leading to very sensitive areas. The problem then is not that the public has access to axes, it's that there aren't enough independent security layers protecting sensitive systems.

And you can't fix that problem by banning tools because a high value target with bad security will fall to a state-level attacker regardless. The only answer is to improve the security of sensitive targets.


> Smallpox is inherently dangerous.

I think the equivalent (or much worse, actually) for exploits is something that is self replicating and disruptive. For example, a bug in the BGP routing protocol (or a certain percentage of the common implementations) that propagates bogus routes and disrupts some or all traffic for affected systems and spreads. Something that disrupted a large enough chunk of global traffic would not only be horrendous in its own right, but would also make dissemination of any fix quite problematic.

Then again, I assume it's probably good practice to somewhat lock down how BGP functions in your routers (if that makes sense. I'm not that familiar with it), but a certain incident from last year[1] leads me to believe that's either not possible, hard to do, or people just don't do it.

1: http://www.bgpmon.net/massive-route-leak-cause-internet-slow...


The analogy still fails.

If you have the tooling to keep smallpox around and not kill yourself, you can also keep Ebola around, if you go to the effort to find it. Really dangerous stuff, and it is going to be costly.

The problem here is I can make and keep 'digital smallpox' on my home PC, and for many pieces of equipment it is surprisingly easy to find exploits for them. Are you planning on watching every computer? Every person in the world?

Take a lesson from the failed war on drugs: where there is significant profit motivation, people will do what is necessary to make massive amounts of money. There are massive amounts of money in blackhat work.


Which is exactly what I mean by "there aren't enough independent security layers protecting sensitive systems." We've known that BGP has terrible security for many years.

Fixing it is hard because it requires a lot of independent parties to agree on what to do and update their routers. In theory this is the sort of thing a government could help with by providing funding, to fund research into solutions and/or provide cash incentives to early adopters.

But the market also solves these things eventually, since successful attacks are bad for business. It just takes for the attacks to actually happen first in that case.


Haha that would be a well-protected home.


The only legitimate use of this is a jailbreak tool. Obviously 'this' being the root exploit and not the malware/data capturing portion. I agree malware like that should be treated as munitions.


Really then, a tape recorder should be considered a munition under that logic.

Malware shouldn't be considered a munition any more than encryption should have been.

Unless the malware actually makes your phone explode, it's a stretch to call it a munition.

Do we really want governments getting into the code review business?


Vice has a nice writeup on the exploits as well: https://motherboard.vice.com/read/government-hackers-iphone-...


FTA: It appears that the company that provided the spyware and the zero-day exploits to the hackers targeting Mansoor is a little-known Israeli surveillance vendor called NSO, which Lookout’s vice president of research Mike Murray labeled as “basically a cyber arms dealer.”

Phineas Fisher, we need you now.


So we have cyber arms dealers now. I continue to be amazed at the prophecies of William Gibson. Makes me wonder if there's anything to "remote viewing." Did he just look forward into the 21st century and write down what he saw? :)

BRB, gonna go slot me an icebreaker...


I think Gibson's explanation is that the future is here, it's just not evenly distributed yet.

Others like Doctorow and Stross have voiced similar views. In Stross' case, he apparently shelved the third part of a trilogy because the NSA was outpacing him.



> "So we have cyber arms dealers now."

Yep. And even middle men who will clear 7 figures taking a 15% fee. A dated article, but it has a "price list" of sorts, which is interesting: http://www.forbes.com/sites/andygreenberg/2012/03/23/shoppin...


> So we have cyber arms dealers now.

See https://www.zerodium.com/program.html

Someone who discovers/develops a remote jailbreak like this can apparently sell it for a cool half-million.


For iOS, $500k was quoted in HN-featured media recently. However, $750k was quoted on HN in response to a query perhaps two years ago.


The same company I linked above issued a $1 million bounty for an iOS remote jailbreak vuln late last year (for a maximum of 3 different winners).

By the time the bounty expired only 1 team had won.

https://www.zerodium.com/ios9.html

So pricing has some fluidity, but you're looking at at least 500k.


There are many private firms in the US doing the exact same thing, and have for years, except they sell exclusively to the US government/NSA. Not sure if it's more or less profitable than being a freelance "cyber arms trafficker".


This vulnerability sounds like this:

https://www.zerodium.com/ios9.html

It was claimed November of last year. I wouldn't be surprised if this "Trident" was sold by Zerodium. Glad it's patched.

Edit:

I just saw the Citizen Lab article on this:

https://citizenlab.org/2016/08/million-dollar-dissident-ipho...

They mention the Zerodium bounty as well.


Article mentions that there are indications this was in the wild as far back as iOS 7, suggesting this isn't directly linked to that Zerodium bounty.


The article mentions that the exploit has kernel mappings going back as far as iOS 7. This doesn't mean this predates the bounty at all; for all we know, the bug that received the bounty payout might have been simply functional on iOS 7-9 or even earlier (and whoever made the final commercial product just didn't bother). iOS 7/8 is most likely still used, since older iPhones stop receiving updates at some point, and older iPhones are the ones you might actually find in emerging markets and developing countries. While rare, you can still see people even in "developed" countries running iPhone 4s; if you go to the Middle East, Africa, or Asia you probably see considerably more of them, sold on the secondary markets.


Older iPhones become the "kids'" phone when daddy buys the new one. There are more of them out there than you think.


I guess so, but it's rare to see iPhone 4's at this point when the iPhone 7 is almost out of the door.

Also depending on how old the kids are it might actually work in reverse =)


You're right, missed that. Still possible the Zerodium exploit uses the same vulnerabilities.


Not having heard about NSO Group before, they've been claiming to have this ability since 2014:

http://blogs.wsj.com/digits/2014/08/01/can-this-israeli-star...

What other 0-days do they have in their pockets?


The article mentions how this may have been in use all the way back in iOS 7, which is crazy.

If you are being targeted for surveillance smartphones are a very bad idea depending on your adversary. A cheap phone that is refreshed regularly will probably be your best bet.


On the other hand, smartphones are invaluable to most activists because they allow you to provide documentation of abuses through its various sensors (audio, video, photos, etc).


A cheap phone that is refreshed regularly will probably be your best bet.

Don't buy it traceably or in the same place, use the same model, use the same SIM, turn it on in the same geographic location, or call the same people!


I'm not sure how much I believe in any counter-surveillance methods anymore that involve a phone. Then again, I'm happy that my life doesn't include the need for that level of secrecy.



Make sure to update to 9.3.5 on all of your iOS devices ASAP!


Sad face. Right now, on my iPhone:

"iOS 9.3.5 provides an important security update for your iPhone"

40.5 MB. Great! Tapped "Download and install". It's greyed out. Huh?

Oh, "this important security update requires a Wi-Fi network connection to download". Really? It's only 40.5 MB. Let me decide, please, how I use my data.

Am I missing a setting that allows me to install an important security update on a network of my choosing?


Yeah, Apple needs to fix this. They have been bumping the max app size for non-wifi downloads throughout the years from 10 MB to 100 MB, but they haven't kept up for the actual security updates.

For extra hilarity, if you have two iPhones available, you can use the personal hotspot feature between them and install the updates even though it's all 3G/4G anyway.


Go to https://bugreport.apple.com and request that. The more duplicates they get, the more likely something is to get fixed.


Good call. Done. Also cathartic.


You have to wonder how many iPhones never see a WiFi connection.


Put your SIM card in another (i)phone, make a personal hotspot, update. If you don't have another phone, use a friends' phone and/or data that shares an access point. If you don't have a SIM card, move to a country where they protect consumer rights, so you can change phone without asking your carrier.

(yes, I realize how silly all of this sounds :-))


"iOS 9.3.5 provides an important security update for your iPhone and is recommended for all users"

I can't help but think at this point we've totally lost control of our devices..


I don't get the point you're trying to make here. We've lost control because there's a serious vulnerability? We've lost control because Apple can patch the OS?


Well, it's sort of a general thing. We can't even control what runs on our devices, and they run so fast you might not even notice something new running. Also, stopping hackers from getting in remotely is hard for 24/7-connected devices.

Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing. The OS is actually very complex, and you could insert another process and it could go and send stuff out and it would be hard to notice. I was also thinking in the context of Windows 10 sending out who knows what all the time (I don't use Windows, but I think they call it telemetry).

In the past, when everything wasn't connected together and the connections were slower, this wasn't as much of an issue. Although that does allow us to patch quickly and easily. Apple sees to it you'll be hounded till you update.

It doesn't seem easy to fix. Maybe safer languages will lead to less hackable code.


> Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing.

That's been the case pretty much since Windows 2000 (or even 98).

> In the past when everything wasn't connected together and the connections were slower this wasn't as much of an issue

Viruses were really bad even when everything was pretty much airgapped. They were not vectors for state-level attacks only because of cultural elements (you weren't walking with an exploitable beacon in your pocket; there was little value in exploiting what were basically glorified typewriters; and established interests weren't taking this sort of thing particularly seriously outside of the US).

> Maybe safer languages will lead to less hackable code.

JavaScript is fairly safe: it runs in a VM, right? Guess what was used to persist this exploit across reboots...

I don't think this is something that we can "fix" at all. Door locks are ridiculously ineffective and exploitable, but very few people feel the need to use anything different. Similarly, computing devices will always be exploitable one way or the other, but people will keep using them; what we can do is limit the attack surface as much as possible, and avoid placing everything online (hello, IoT!) just for the hell of it.


Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing.

I don't run Linux so I can't comment on that one, but surely there are "simple" Linux distributions that don't start countless unrecognizable processes?

Mac is a hopeless case; a veritable plethora of inexplicable processes.

In contrast, I just logged in to my OpenBSD firewall. I was able to easily recognize everything that was running. The OpenBSD startup procedure is very simple to understand. It's easy to know exactly what processes are started and why.

tl;dr: horses for courses


This happened the moment you bought an iPhone. Not that Android is much better: Apple (and to a certain extent, previous feature phone manufacturers) set the stage for treating consumers as too dumb to use their phones as they like, and the rest of the smartphone arena happily followed suit. There's never been a point in time where I was satisfied with the heavy constraints placed on users by smartphone OS makers. And I'm not approaching this from a Stallmanesque, philosophical perspective, but a plain old ease-of-use one.


One upside of this is that a large percentage of devices are up to date. It's quite a contrast to other platforms (mobile or otherwise). Just how benign is big brother though?


You can be fairly sure this vulnerability was discovered by some researcher, then sold to grey markets like https://www.zerodium.com or https://www.exodusintel.com/ (they pay up to $1 million for a high-profile iOS exploit), who then resold it to some government that is now trying to exploit this dude's phone...


To people who work for companies that sell / invest in products that are used in unethical ways (Francisco Partners, NSO, Cisco, etc), how do you justify it to yourself?


How do the people working for Citizen Lab / Lookout justify to themselves blowing active operations by countless police forces around the world?

Ops like Mexico vs Cartels?

Also, since when is selling weapons to governments unethical?


When the governments are oppressing their people, that's a good clue.


Does anyone know if the iOS 10 developer beta 7 (public beta 6) got this patch, or are we vulnerable?


According to Ars the bugs have already been fixed in iOS10: http://arstechnica.com/apple/2016/08/apple-releases-ios-9-3-...


We currently believe it's not exploitable due to increased hardening, but there's still more research to be done here...


Apple told Ars:

"Apple also tells us that these bugs were fixed in the latest versions of the iOS 10 public and developer betas, which were released last week."


Apple made its bug bounty program public a few weeks ago and the past few iOS updates have all been patching security vulns. It could be a coincidence, but from an outsider's point of view, it looks like the program is working.


Will 9.3.5 disable/remove the spyware on infected phones? Or does it just prevent one from becoming infected?


From the article:

  "The kit appears to persist even when the device
   software is updated and can update itself to easily
   replace exploits if they become obsolete."


And even if your phone appears to update, the malware may be performing a fake update and then showing you that you're on whatever version Apple says is "safe" for this exploit, while in fact Pegasus was in control the entire time. Get a new phone ASAP.


... or you could just hook it up to iTunes and let iTunes flash the whole phone with the latest iOS from scratch (9.3.5 fixes these exploits) instead of letting the on-device updater do it.

No need for a whole new phone.


I guess I've never dug deep into how an iPhone restore to defaults from iTunes works, but does it actually zero out the whole disk, or is it possible for this exploit to survive that?


There are 2 ways. It can do a quick reinstall or it can do a full flash that wipes out everything (you can force it to do that by holding shift or the Apple key or something when clicking the restore button).


A Lookout page describes a post-update process involving opening the Lookout app and using it to check for an existing compromise, so it seems unlikely that the software update alone will suffice to uninfect a phone.


This is a REALLY, REALLY good reason why "activists" of any variety should be trained in how to acquire an old ThinkPad and install Debian on it (plus a reasonable Xorg/XFCE4 desktop environment). If you're dealing with authoritarian regimes you can do a lot to reduce your attack surface. However, in the end it all comes down to rubber-hose cryptography. If your government, for example Bahrain's, decides to detain and torture you, you're pretty much fucked.


He would look pretty stupid putting a Thinkpad up to his ear when he makes phone calls, though, wouldn't he?


Debian? If it's anyone that's even 1/10 as targeted as Mansoor was, then they shouldn't use anything less than Qubes, Subgraph, or TAILS.


You realize TAILS is just Debian with Tor and non-persistent storage?

I'm sure you can find a way to spear phish somebody and send them a Linux ELF binary that they will then execute, but accomplishing that is considerably harder than on Windows/OSX/Android/iOS.


> I'm sure you can find a way to spear phish somebody and send them a Linux ELF binary that they will then execute, but accomplishing that is considerably harder than on Windows/OSX/Android/iOS.

I'm afraid people are just as foolable and code just as executable on Debian as on any other platform. Additionally, vulnerabilities on Android are likely exploitable on Debian.

You will not survive an attack from a state adversary because you used Qubes, or OpenBSD, and certainly not TAILS (which is not particularly secure, just well integrated with Tor). You will survive because you are familiar with your tools of choice and you know how to secure them.

As a final note, if you're being targeted by a nation state, getting a pre-owned ThinkPad will probably result in getting a pre-0wn3d ThinkPad.


If you're being targeted by a nation state you will face all sorts of things that can't be handled by buying a ThinkPad with cash from a randomly chosen used computer store: bugging of your residence and office, bugging of your car, advanced GPS tracking devices placed on your car, rubber-hose cryptography, hardware keystroke loggers inserted into your equipment while you're known to be away from your home or office, full disk copies of your laptop/desktop being taken (Clonezilla-style) by breaking into your office while you're away, all sorts of shit.


If you don't keep your airgapped laptop on your person or in a tamper evident container at all times, it isn't an airgapped laptop. And if it isn't an airgapped laptop, it shouldn't know any secrets.


At which point if you're a UAE dissident and trying to deal with all this while living in the territory of the UAE, you might say "fuck it" and find a way to move to Toronto.

replace "UAE" with "Ethiopia" or any other authoritarian regime.


They took his passport.


Could somebody, or a group that needed privacy, implement a ground-floor system like Menuet OS [1] or KolibriOS [2] running some sort of EC cryptography and custom communication protocols off a live CD or USB stick?

Would this even be practical? I realize TAILS is an attempt at bringing these tools to as many people as possible who may not be technical, but for a smaller, more tech-savvy group, would this work?

[1] http://www.menuetos.net/

[2] http://kolibrios.org/en/


I'm a beginner when it comes to software development (mostly web development), but it seems to me that the majority of complex exploits like this involve some type of memory overflow and subsequent code execution.

Shouldn't there be methods for detecting these kinds of things in source code, or more priority given to preventing them in the C/low-level community?


There are. "(Kernel) address space layout randomization" is one of them. It was circumvented here; that's part of why this is impressive.


How is it possible to put the malicious code in the correct memory spaces? Unless the attacker had a full image of the memory, I don't see how this can be accomplished.


The second bit of the exploit chain, CVE-2016-4655, leads to disclosure of kernel memory addresses. Once a single memory address is known, you can calculate the random offset of the kernel, and then exploit the third part to overwrite the return address and return into specific chunks of kernel code ("Return Oriented Programming"), whose addresses you computed from the offset + a fixed code location. These can let you e.g. install your payload.
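
To make the ASLR-defeat step concrete, here's a toy sketch in C (all addresses below are made up, not real iOS ones) of how a single leaked kernel pointer yields the slide and lets an attacker rebase gadget addresses:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Known from static analysis of the firmware: where a symbol sits
           when the kernel is loaded with no slide (example value only). */
        uint64_t unslid_symbol  = 0xffffff8000402000ULL;

        /* What the info-leak bug (CVE-2016-4655) discloses at runtime. */
        uint64_t leaked_symbol  = 0xffffff8012e02000ULL;

        /* The random KASLR offset applied at boot. */
        uint64_t slide = leaked_symbol - unslid_symbol;

        /* Any gadget found in the static image can now be rebased. */
        uint64_t gadget_unslid  = 0xffffff8000513370ULL;
        uint64_t gadget_runtime = gadget_unslid + slide;

        printf("slide = 0x%llx, gadget now at 0x%llx\n",
               (unsigned long long)slide, (unsigned long long)gadget_runtime);
        return 0;
    }

The memory-corruption bug then only has to redirect control flow to those rebased addresses.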


Aside, but does anybody else find the switch from right-to-left to left-to-right really jarring in this screenshot?

https://citizenlab.org/wp-content/uploads/2016/08/image13-76...

It has the effect of introducing a line-break into the middle of a line, rather than at either end. I've never encountered this before and it took my brain a few seconds to catch on.

I'd be really curious how native bilingual readers of both a right-to-left and left-to-right language would read that. Does it look natural? Where do your eyes go first?


BiDi sucks, and as an RTL language speaker you learn to live with it.

My native language is Hebrew, and we don't bother translating most technical terms to Hebrew. You end up with technical documents looking something like this:

".yadot patch a desaeler Apple .iOS 9.3 ni ytilibarenluv privilege-escalation a dnuof srehcraeser ehT"

In newspapers, where lines are typically short, you get the effect in the screenshot in question. E.g.:

   "Everyone gets a day .ni tnew eH
            .dias eh ",off tomorrow
You can't really get used to that. You actually have to read the second line sideways from the middle!

By the way, typing mixed text is even worse than reading it. You have to press alt+shift every 2-3 words to switch layout. If that's not bad enough, Office 2007 (if I'm not mistaken) introduced a 0.5-1sec lag after each keyboard layout switch. Imagine typing out an entire document like that. I lost my nerve a couple of times.

In many cases we avoid this issue by simply writing technical documents in English, but sometimes that's not an option.


Thanks for this – I really appreciate the insight. That changing-input lag sounds like an absolute nightmare.

Since I left my previous comment, I came across some Apple presentations on new work they've been doing in iOS 9 and iOS 10 on internationalisation including RTL and mixed-content support. It sounds like there's a lot of work still to do, but I was pleased to see they've at least started multilingual input sources now (in iOS 10, autocorrect can work with multiple languages without having to switch keyboards, though I'm guessing this only works with Latin alphabet languages for now?).


I thought it was interesting that they're using Cydia Substrate to hook into specific third-party apps for monitoring.

I wonder if we'll ever see privacy-conscious apps using some sort of obfuscation, so that every time you update your app the attacker has to reverse-engineer the symbol names again.

It seems like a compile or link time tool could find method call & selector references. As long as your app isn't calling methods using strings, or doing something else tricky, I think it could work.

Or you could just write the app in Swift. It's the Objective-C runtime that makes it so easy to intercept method calls.
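
For what it's worth, here's a minimal sketch of that ease of interception, using the public Objective-C runtime C API; the class and selector names are hypothetical, and this is roughly what a Substrate-style hook boils down to:

    /* Link against the Objective-C runtime (e.g. compile as Objective-C). */
    #include <objc/runtime.h>
    #include <stdio.h>

    static IMP original_imp;  /* saved original implementation */

    /* Replacement: log the call, then forward to the original method. */
    static void hooked_sendMessage(id self, SEL _cmd, id message) {
        printf("intercepted -[%s %s]\n",
               class_getName(object_getClass(self)), sel_getName(_cmd));
        ((void (*)(id, SEL, id))original_imp)(self, _cmd, message);
    }

    /* Swap the method's implementation pointer at runtime. */
    static void install_hook(void) {
        Class cls = objc_getClass("ChatClient");       /* hypothetical class    */
        SEL   sel = sel_registerName("sendMessage:");  /* hypothetical selector */
        Method m  = class_getInstanceMethod(cls, sel);
        if (m)
            original_imp = method_setImplementation(m, (IMP)hooked_sendMessage);
    }

Pure Swift methods that don't go through the Objective-C runtime can't be swapped like this, which is the advantage being described.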


The trouble is that nearly every app does "something tricky" because it's so baked into Apple's frameworks. Every UI control calls methods using strings when you interact with it. Key-value coding and observing work by extracting method names from strings. Core Data uses method names to look stuff up in the underlying storage. And these things are so easy to do that it's pretty common for third-party code to do similar stuff. Reliably figuring out which methods were safe to change would be really tough.
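
As a hedged illustration (hypothetical class and key): this is the kind of call that silently breaks if an obfuscator renames the -name method, because KVC resolves the string into a method lookup only at runtime:

    #include <objc/runtime.h>
    #include <objc/message.h>

    /* Key-Value Coding: the framework turns the string key into a method
       lookup at runtime, so renaming -name at compile time breaks this. */
    id get_name_via_kvc(id person, id nameKey /* an NSString such as @"name" */) {
        SEL valueForKey = sel_registerName("valueForKey:");
        /* objc_msgSend has to be cast to the method's real signature. */
        return ((id (*)(id, SEL, id))objc_msgSend)(person, valueForKey, nameKey);
    }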


The attacker could find a way to be in the kernel, or insert a shim between the app and OS if everything was sufficiently obfuscated.


  > I wonder if we'll ever see privacy-conscious apps using some sort of obfuscation.

Actually Apple can do that already, since they have bitcode for many applications. For now it's only required for watchOS and tvOS apps, but it might become a requirement for every app in the future.

I suppose it's close to LLVM bitcode, so it's perfect for obfuscation.


Unless you are a high-value target, Apple's security seems fairly sufficient for normal use (I have Android ;)). Companies like NSO Group that state that they play both sides without any moral compass seem like a great target for Anonymous or others. Imagine the client list, and banking information as a trail to blaze!


This guy seems to be quite the high-value target to warrant 3 zero-days, on iOS no less.

Edit: What platform would be recommended if you happen to be a high-value target, though? Using iOS at least seems to raise the cost of infiltration significantly, judging by this: http://www.forbes.com/sites/andygreenberg/2012/03/23/shoppin....


The other dimension to keep in mind is time. Governments can just pay whatever {Vupen|Zerodium|rogue researcher} asks for, and get a 0-day for any major platform that's sitting there waiting to be sold. Any obscure, probably less secure platform has surely gone through less research, so time becomes key: there will probably be no time to develop an exploit within your operational window.


I agree with you on iOS being the better choice, which is why there's a wink to my owning Android. I am obviously not a high-value target.

FWIW - I only journal with ink and paper. I never trust digital files, and I sometimes forget how many backups I have of the same photo or other file.


How does one monitor the infection of an iOS device and how do you capture and store all the stages of an infection?

I've never done any reverse engineering so I'm not sure how you'd go about recording what an infection like this does to your device...


He wasn't hacked, he was being "lawfully intercepted"!

Just kidding. The difference here is that a government doesn't want to do things such as provide reasonable suspicion or go publicly in front of a judge.


So, basically three things to notice:

1. Never click on links in e-mails (or SMS messages).

2. If you're targeted by a nation state, you're screwed.

3. Everybody is vulnerable to rubber-hose cryptography.


It's curious that Signal was missing in their list of apps that can be intercepted. Are the targets not using it? Or was it just not mentioned?


Is there any way to check if an iOS device has Pegasus installed, without installing and registering for the Lookout app?


Sounds like an exploited device would effectively be jailbroken, so you could try running an unsigned binary (if it's easy to find and install one; I'm not sure).
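
A rough sketch of the usual heuristics, for illustration only; the paths below are generic jailbreak artifacts rather than Pegasus-specific indicators, and a careful implant can hide them:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* Files a stock, sandboxed iPhone should never expose to an app. */
        const char *artifacts[] = {
            "/Applications/Cydia.app",
            "/bin/bash",
            "/usr/sbin/sshd",
            "/Library/MobileSubstrate/MobileSubstrate.dylib",
        };
        struct stat st;
        int suspicious = 0;

        for (unsigned i = 0; i < sizeof artifacts / sizeof artifacts[0]; i++) {
            if (stat(artifacts[i], &st) == 0) {      /* exists and is visible */
                printf("found: %s\n", artifacts[i]);
                suspicious = 1;
            }
        }

        /* Another common check: can we write outside the app sandbox? */
        FILE *f = fopen("/private/jb_probe.txt", "w");
        if (f) { fclose(f); remove("/private/jb_probe.txt"); suspicious = 1; }

        printf(suspicious ? "device looks jailbroken/compromised\n"
                          : "no obvious artifacts\n");
        return 0;
    }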

I believe the article also says it disables the auto-update mechanism. So if you've seen an auto-update prompt recently, your odds are better.

The background audio recording must be terrible for battery life.


(From the Lookout paper): "In order to maintain its ability to run, communicate, and monitor its own status, the software disables the phone's 'Deep Sleep' functionality."


I'm curious about this too. Does anyone know if there are steps one could use with Xcode to see if any of these files exist?


I have an iPad 1 which long ago was left behind by upgrades. It'd be nice to know when the vulnerabilities were introduced too. Should I stop doing anything networked with it?


I would definitely not trust it, at the very least. Even if this particular vulnerability didn't exist for it, there are bound to be many others that did.


"we did not have an iPhone 6 available for testing"

Big budget operation!


It's a human rights lab at an academic institution. Small budget is hardly a shock.

Somewhat hilariously, they appear to be funded in part by donations from Palantir.


I think NSA is trying to acquire them.


https://citizenlab.org/2016/08/million-dollar-dissident-ipho...

  > Alarmingly, some of the names suggested a willingness on
  > the part of the operators to impersonate governments and
  > international organizations. For example, we found two
  > domain names that appear intended to masquerade as an
  > official site of the International Committee of the Red
  > Cross (ICRC):  icrcworld.com and redcrossworld.com.


This is a much more informative source. Moderators may want to merge everything into this story: https://news.ycombinator.com/item?id=12360714

Edit: that story is now flagged as dupe, can we at least get the URL changed to this much more in-depth article? https://citizenlab.org/2016/08/million-dollar-dissident-ipho...


That is a MUCH more detailed article. Thanks for sharing.


Yes. Done.


Thanks!


Related:

  > That a country would expend millions of dollars, and
  > contract with one of the world’s most sophisticated cyber
  > warfare units, to get inside the device of a single human
  > rights defender is a shocking illustration of the serious
  > nature of the problems affecting civil society in
  > cyberspace.  This report should serve as a wake-up call
  > that the silent epidemic of targeted digital attacks
  > against civil society is a very real and escalating
  > crisis of democracy and human rights.
https://deibert.citizenlab.org/2016/08/disarming-a-cyber-mer...


Money quote for me from that link:

    That the companies whose spyware was used to 
    target Mansoor are all owned and operated from 
    democracies speaks volumes about the lack of 
    accountability and effective regulation in the 
    cross-border commercial spyware trade.
    
    While these spyware tools are developed in 
    democracies, they continue to be sold to 
    countries with notorious records of abusive
    targeting of human rights defenders. Such 
    sales occur despite the existence of 
    applicable export controls.


Be very very glad citizenlab exists in the world. They're doing good work against very strong, very well funded adversaries.


Ok, since most people seem to agree that that URL is the best source, we've changed to it from https://blog.lookout.com/blog/2016/08/25/trident-pegasus/. Thanks.


This is off-topic but at first I thought I was on a Spotify blog page. Lookout has very similar branding.


lol downvotes, ok hn. My initial reaction was "this is crazy Spotify found something like this", which was why I commented.


It's ok. I thought the same thing. You're not the only one.


<< Instead of clicking, Mansoor sent the messages to Citizen Lab researchers.

The story is great but I really doubt this. I'm wondering what made him suspect the link? Does he send all the links he receives to Citizen Lab?


Sounds like he has been targeted by state-level actors before. He's probably suspicious of any unsolicited information sent to him.


Yes, who doesn't click on random links received from unknown numbers over (get this) SMS?

Some people.


Yeah that's almost dumb enough to indicate that this whole thing has been a cat's paw. Burn an old vuln, get everybody riled up about it, but distract them from looking for the sophisticated things you're doing when you actually want to spy on a troublesome subject.


Given his job I assumed he gets a lot of requests/messages from unknown people.



