NSO Group iMessage Zero-Click Exploit Captured in the Wild (citizenlab.ca)
940 points by jbegley 14 days ago | 326 comments

I always wonder what it takes to find this kind of exploit. Are the programmers at NSO group just the best in the world? Or are they incredibly lucky? Both? I’d love to know what a normal day at work is like for their engineers. Clock in, sit down at a…crazy expensive hardware and software testing station? Crack open a brand new iPhone and start probing away while referencing internet sourced chip documentation and software manuals? What does it even look like?

The NSO group is staffed by ex-Mossad people who decided that working for the government does not pay as well as selling exploits, probably obtained at the highest levels of top-secret work.

So far, they have been tolerated by the Israeli government, as they all went to the same schools, did their armed-forces service together, and all know each other. This has given them a free pass so far. Privately, many of their ex-colleagues are very critical of their lack of ethics.

All this will change the day some NSO exploit is used against Israel, the same way some of the leaked NSA tools are now used in the wild.

NSO group is ex-Unit 8200, which is military signals intelligence. So in American terms, it's the NSA, not the CIA. The distinction is important in a country with mandatory military service: you get a large number of people who go through, get trained, and then leave because it was never a career. A number of them take their skills to the private sector.

Mossad, on the other hand, is a civilian intelligence service and I'm told there's a strong tradition that its members don't freelance their services after leaving.

Not sure the distinction is relevant in a country with such a small intelligence community:

"The Israeli Unit 8200 An OSINT-based study" https://css.ethz.ch/content/dam/ethz/special-interest/gess/c...

"Most of this data is shared internally across the IDF (as well as sometimes externally, cf. 3.3 below) to the Unit’s relevant stakeholders, whether combat troops, decision-makers or other intelligence agencies such as Mossad. Or as Yair Cohen, who served 33 years in Unit 8200, the last five (2001–05) as its commander, put it, "90% of the intelligence material in Israel is coming from 8200 […] there isn't a major operation, from the Mossad or any intelligence security agency, that 8200 is not involved in"

>"...Mossad, on the other hand, is a civilian intelligence service and I'm told there's a strong tradition that its members don't freelance their services after leaving..."

Tradition is not what it used to be:

"Black Cube: The Bumbling Spies of the ‘Private Mossad’"


"...Despite some missteps, Black Cube “has to turn clients away because it cannot service all the demands,” said Mr. Halevy, a former head of the Mossad, an Israeli government intelligence agency. He said Black Cube has worked on 300 cases since being founded in 2010 by two former Israeli military intelligence officers, Dan Zorella and Avi Yanus..."

"Harvey Weinstein hired ex-Mossad agents to suppress allegations, report claims"


It's an important distinction. The fact that huge numbers of people rotate through the hacking side of 8200 (like the NSA, vast majority of 8200 members don't work on that) is what drives the supply.

Intelligence services typically have less turnover. Though that is changing, particularly for NSA, where people leave to go to contractors.

Also, frankly, describing NSO as ex-Mossad just makes phone malware sound much more complicated than it is and much harder to stop. At the end of the day, it's software, written by people in much the same way any software is written. It just exploits mistakes other software devs made so that it can run.

"by two former Israeli military intelligence officers, Dan Zorella and Avi Yanus."

Emphasis on "military intelligence officers", i.e. not Mossad. This is like mixing up the CIA and FBI: to an outsider they might appear the same, but that's not really the case.

"Ilan Mizrachi, a former deputy head of the Mossad, Israel’s intelligence agency, said that he sees nothing inherently wrong with former intelligence operatives working for civilian enterprises. “Some people I know went into journalism, some are consultants,” he said. “Among many other professions, some work for companies like Black Cube.”


Quote from the article: "Despite some missteps, Black Cube “has to turn clients away because it cannot service all the demands,” said Mr. Halevy, a former head of the Mossad, an Israeli government intelligence agency..."

Which determines that he is qualified to speak about Black Cube, not that he works for Black Cube. There's a difference.

Please read the article first...

"Efraim Halevy, former director of Mossad, an Israeli intelligence service, is a member of Black Cube’s advisory board."

OK, those are better pull quotes than the original :) Just noting that Mossad and Aman (military intelligence) are two different things.

> All this will change, the day some of the NSO exploits will be used against Israel […]

There's a reason why Russian malware does not attack systems that have an RU locale for the keyboard: don't sh_t where you eat.

It’s the system language, not the keyboard settings.


Is this true? I've never heard that before. (But makes sense)

This is a myth. Russian systems suffer from malware just like any others, and probably more, because it's easier for local criminals to target local companies. It might be true for a very tiny fraction of malware, but that's definitely an exception rather than the rule.

Of course if there are state-sponsored hackers (I'm not really aware if those exist, but I allow this possibility), they will target whatever their management points at. And with corruption it's pretty possible that some local business could be targeted as a part of some financial wars.

But the majority of hackers are just some guys with some IT knowledge and zero morals. They'll buy some exploits and tools on black markets, duct-tape them into something, and release it in the wild, waiting for profits (or the police). They'll rob banks or babushkas; they don't care.

It is not a myth for ransomware. Many documented cases. It's essential to the survival of these groups; local cops are more likely to leave them alone if they leave local businesses alone.

> It's essential to the survival of these groups; local cops are more likely to leave them alone if they leave local businesses alone.

Which is a huge misconception outsiders have about this scene. They are Russian-speaking, not Russian, just like English-speaking gangs are not necessarily English. These groups may (and often do) consist of nationals of different ex-USSR countries, sometimes without even knowing each other personally. They might not even be a single group, just some individuals doing different parts of the scheme (including the "press releases" and "interviews" they sometimes do).

It has been the case since long before this ransomware fad. Russia, Ukraine, Kazakhstan, Belarus, and partially Lithuania have had the world's top CC-theft gangs for a couple of decades, and they have always been of mixed origin. They mostly steal EU and US cards because that offers a better reward/risk ratio compared to the home countries, which are poor. But nothing stopped them from stealing CCs in Russia or Ukraine either, certainly not some mythical cops (who couldn't care less in reality); in fact, skimmers are widespread in those countries as well.

Ransomware groups are the same as CC thieves; it's just a different scheme, and they probably avoid home countries for the same reason (same risk, less reward). The state can't possibly have much influence on them; the idea just triggers the bullshit detector for anyone who lives in any former Soviet republic and knows about this stuff at least superficially.

It's specifically because Russian prosecutors couldn't care less if there are no Russian victims. By doing this they know there is next to zero chance of criminal proceedings.

Possibly, but even if so, it's just a few examples that probably won't be repeated in the future now that it's known.

> So far, they have been tolerated by the Israeli government

Why wouldn't the Israeli government tolerate them? If anything, doesn't their government benefit from groups like this?

They get access to spy tools that they didn't have to use taxpayer money to fund, and because it's former members of their own intelligence working on it, they have some semblance of influence over how it's used.

Am I missing something?

That's my understanding too. Funding is not really an issue; 8200 has one of the biggest budgets in the army, but they are bound by law and regulations. NSO, on the other hand, can cross the lines and keep Israel uninvolved.

Not really. Israel likely openly shares secrets with the Five Eyes countries and so gets a sort of free pass from geopolitical pressures. It's a mutually beneficial exchange. In addition to the Mossad comment: the Israeli students who work for these groups take an entrance exam at 17 that recommends them for what's known as Unit 8200, which is a feeder network/NSA clone.

Israel isn't part of five eyes.

I think GP was referring to geopolitical alignment and intelligence sharing rather than membership per se.

Which Israel is also not part of.

Israel is only peripherally and reluctantly involved in the confrontations with Russia and China at the heart of 5E interests, and it neither trusts nor is trusted by 5E countries to the level of sharing intelligence sources or tools except in specific, transactional interactions.

American and Israeli politicians like to talk about Israel being America's "closest ally", but those are just pretty words. Israel's real selling point to the US is that it's a low-maintenance ally.

> Israel's real selling point to the US is that it's a low-maintenance ally.

Hm, that's interesting. Israel seems to be the highest-maintenance ally the US has. Other than, perhaps, Pakistan.

I would say that Israel is politically necessary in the US, but they are expensive and prickly.

And I don't think I've ever seen the "closest ally" quote.

We surely inhabit different media worlds, but FWIW that's the perspective from this side. No arguments intended.

The United States has thousands of troops deployed across the Gulf to defend its allies there. It has another several thousand as a "tripwire" in South Korea.

US troops have died in combat defending Saudi Arabia and Kuwait. They've been killed by militants directly supported by Pakistani intelligence services.

How exactly is Israel "high maintenance" by those standards?

That's a reasonable argument, but I'd counter that the US has never defended Kuwait or Saudi Arabia, only her own interests in the region.

I think the US support of Israel comes from a different place, and I think Israel is a cantankerous partner. This may be by design, of course.

If you want to define away sending hundreds of thousands of troops to defend Saudi Arabia, using those troops to free Kuwait from foreign invasion, and then keeping those troops in both countries (where they've taken everything from car bombings to shooting attacks) as defending her own interests rather than those states, then you can define away any action taken on behalf of an ally that way. To take this to an extreme: by that definition, US defense of South Korea isn't "aid to an ally".

There is a legitimate argument that US aid to Israel isn't well thought out rationally, but the only reason that's plausible is that a few billion a year and low-cost diplomatic statements/votes aren't a big enough deal for the Serious National Security Considerations to come into play.

I agree with your last paragraph.

I think the hostility encountered by the US in the Middle East is entirely a function of protecting her own interests in a complicated and contested region. Maybe necessary, definitely inevitable.

The human suffering on all sides is a cost of doing business. This is deemed acceptable by the US govt and not contested by the hosting countries for various bad reasons. It is nothing more special than that. There is no grand righteous moral justification, but that is a useful fiction.

I apologize if this offends you, and I don't share it to be disrespectful -- just to explain my perspective.

I mean, sure. The moral question is important! But I was starting from a thread of people who didn't understand the real-life character of the Israeli-American relationship.

If you're trying to describe the actual actions of the parties involved, morality is not a useful analytical or predictive tool; that comes into play when you yourself try to act.

Doesn't the USA literally send billions of dollars of hardware to Israel as "military aid"?

It gives Israel military aid on the order of $3-4B per year. On US budget orders of magnitude that's peanuts, and comes with none of the US troop or naval commitment of e.g. the Saudi or Korean alliances.

6 eyes

> All this will change, the day some of the NSO exploits will be used against Israel, the same way some of the NSA leaked tools are now used in the wild.

Has the leak of NSA tools changed anything?

> Has the leak of NSA tools changed anything?

Yes. The bipartisan USA Freedom Act limited several aspects of the NSA's dragnet [1]. Amendments weakening the bill were defeated [2]. Less materially, a documentation requirement for § 702 searches of U.S. persons was added in 2018 [3].

[1] https://www.eff.org/deeplinks/2014/11/usa-freedom-act-week-w...

[2] https://www.eff.org/deeplinks/2015/05/usa-freedom-act-passes...

[3] https://www.lawfareblog.com/summary-fisa-amendments-reauthor...

I’m skeptical the NSA doesn’t just ignore or creatively interpret laws it doesn’t like, given their past history and the consequences for their misbehavior.

I mean when the CIA got busted not only spying on Congress a few years ago, but also lying about spying on Congress, they were told “don’t do that again please.”

"Not wittingly."


Statute of limitations has expired, IIRC.

It's mind-boggling that Clapper wasn't crucified for this. This sort of thing keeps happening, and then some sketchy outsider may get elected with catchphrases like "Drain the swamp". Oh wait...

I can't believe nobody went after the org with algorithmic dossiers on everybody on Earth.

Google or Facebook?

There is only one org that has access to all of this data and more.

It’s also the Mossad/Israeli government realizing that their capabilities and interests can be advanced by having hacker-mercenary services for sale.

The high-tech industry in Israel is not that big. If you look at the companies that make COTS microwave and millimeter-wave telecommunications equipment, they're not too different from the other .IL companies which make advanced radar systems, jammers, and avionics for aircraft.

I imagine it's similar for black/grey-hat software development.

The tech industry in Israel is RELATIVELY huge, not in absolute numbers of course.

I didn't get the connection between microwave and spying tools

RF/microwave/millimeter wave engineering, SIGINT, cryptographic stuff and unit 8200 + spying tools are linked.


> The NSO group are ex-Mossad

There's no such thing as ex-Mossad or ex-CIA or ex-KGB etc.

Apparently it's not Mossad but Unit 8200, but I'd bet anything that nothing happens without their blessing.

It wouldn't be too far-fetched to imagine that NSO is running malware campaigns against Apple and Google employees.

Look at the exploits Google's Project Zero finds, for a less clandestine example. No doubt they employ clever people, but you don't have to be superhuman to find vulnerabilities in code. Part of it is paying people to sit down and work on it full-time.

An interesting quote:


"This has been the longest solo exploitation project I've ever worked on, taking around half a year. But it's important to emphasize up front that the teams and companies supplying the global trade in cyberweapons like this one aren't typically just individuals working alone. They're well-resourced and focused teams of collaborating experts, each with their own specialization. They aren't starting with absolutely no clue how bluetooth or wifi work. They also potentially have access to information and hardware I simply don't have, like development devices, special cables, leaked source code, symbols files and so on."

Yep, Apple themselves will find exploits, white-hat hackers will find exploits, Project Zero or Microsoft teams will find exploits, and so will NSO or other black hats. It is a mix of luck, skill, and putting in the time. NSO has successfully monetized their exploits, allowing them to then invest the money back into hiring more people, which increases the luck/time put into it.

There's an entire "gray market" of exploit brokers. NSO group is one of the many players. There's a good chance this is an off-the-shelf exploit.

The podcast Darknet Diaries had an episode about the topic recently: https://darknetdiaries.com/episode/98/

(that episode is tied to this book: https://www.amazon.com/gp/product/1635576059/ about the topic)

Also, I like that podcast in general - highly recommend it if you're into infosec stuff!

Episode 100 is specifically about NSO and dives deeper into Pegasus. Highly recommended listening after episodes 98 and 99.


Saw the thread title & clicked through to post exactly the same :)

It's a great set of episodes. This is without a doubt my favourite podcast. 2nd favourite being Knowledge Fight, which debunks Alex Jones and the nonsense that he spews on a daily basis.

That goes very well with this prior episode as background info: https://darknetdiaries.com/episode/28/

Just read that book after listening to the DND episode with the author and it is really great.

They probably hunt exploits like that, but it is quite likely that they have access to stolen Apple source code and scour it for type overruns like the one in CoreGraphics that is the cause of this exploit. I would estimate that the majority of exploits are the result of source code theft, leaks of potential vulnerabilities from people who have access to the source code, and social engineering. There isn't anything particularly special about a "Mossad"-trained or "NSA"-trained hacker. They are engineers like many of us and prefer the path of least resistance. Trying to brute-force buffer overruns without source code access is tedious. Why go to all the effort of black-box exploitation when you can take advantage of source code analysis?

I mentioned in another post why people would leak to the press when you will most likely get caught and fired. Leakers of a different caliber will leak source code to governments and companies like NSO, with much less likelihood of being caught and much higher remuneration.

You estimate wrong. I've been in infosec for over a decade. We look at binaries. It's not that hard. In fact, it's often easier, since type conversion errors are often a lot more apparent in a disassembly, where you can see exactly what operations are being performed without having to know exactly what the language rules around signedness and integer promotion are, and without having to follow through complicated type hierarchies. Similarly, a good optimizer will strip away layers of software abstraction and make what's actually happening more evident.

There is value in source audits, but you're wrong that exploits come out of stolen source. That's exceedingly rare, and usually quickly publicly leaked when it happens.

> Similarly, a good optimizer will strip away layers of software abstraction and make what's actually happening more evident.

I can attest to this; I've found it's frequently far more satisfying to debug at -O3 than -O0. At -O3, the disassembly really lays bare the invalid assumptions that were relied upon.

I respect your expertise and agree that good tools can help find potential vulnerabilities.

You aren't the first person to say that exploits created as a result of source code theft are rare and that the theft is quickly publicly leaked when it happens. Why do you think this? I would think that unethical players like NSO Group would have even more motivation to ensure the use of stolen source code is never revealed.

Because I've been doing this for years and I know how we find exploits; we don't need source. Why would NSO need it?

NSO isn't an "unethical" player, they are "ethical" within their own twisted ethics (that most of us don't agree with). They aren't a spy organization outside the law, they're a company building tools for (supposedly) law enforcement. Being caught doing something blatantly illegal like using stolen source code would be the end of them. They can't afford that risk. They have absolutely no need to use source code. There are zillions of binary-only techniques for finding exploitable bugs (e.g. fuzzing). Source code just isn't nearly as useful as you think it is.

If you want a practical example: just a few weeks ago I got ahold of a peculiar, wholly undocumented embedded device (can't even find teardowns on the Internet, no public firmware downloads, etc) and within one day I had a remote root exploit working - this wasn't using an existing CVE in a library, this was a bespoke bug in this device's firmware, and the exploitation involved reverse engineering two authentication token algorithms and a custom binary communications protocol. No source code. Obviously this isn't iOS, which is quite bit more hardened, but that should give you an idea of just how easy it is to find exploitable bugs with just something like Ghidra, if you know what you're doing (I was: I was looking specifically for a kind of bug likely to exist, to narrow down the possibilities of where it might be present, and eventually found a suspicious point of attack surface that indeed turned out to be vulnerable; then it was just a matter of reverse engineering enough of the protocol and token requirements of that code to be able to actually trigger it remotely).

I was actually kind of annoyed it took as long as a couple hours to find it (once I had a decent understanding of the rest of the system); I was expecting even less, but it turned out they did a better job than I expected avoiding some of the classic mistakes - but not a good enough one :).

Thanks for the insight. It is super informative!

> I would estimate that the majority of exploits are the result of source code theft, leaks of potential vulnerabilities from people who have access to the source code and social engineering.

No. Some Apple source code has publicly leaked (iBoot) but stealing this kind of stuff is bound to leak. And reversing binaries for vulnerabilities is not that much harder.

They recruit people who were trained to find exploits, it’s less about having the best programmers and more about having people with a specific set of learned skills and dedicating them to this task.

I would be surprised if their core iOS research team is much more than 10 or so people at any given time.

They also probably use brokers and buy at least some of the exploits they use from freelancers; if they offer ~7 figures for a zero-click exploit, a lot of freelancers will be working on this too.

It’s just like any bug bounty program, internally you run a small and dedicated team and externally you pay enough to entice freelancers to spend their free time on your systems to scale it further.

It takes IDA Pro, some low level asm/C++/Python programming skills and a lot of hours.

Reverse engineering is not that complicated, however getting some results is difficult and time consuming.

In that example it's basically looking at how some libraries parse input, that's it. Since everything in those phones is C/C++, nothing is "safe".

It's the same skills you need to crack games, cheat in online games etc ...

It would be quite difficult if you can't get access to the binaries that you have to put into IDA (or, well, Ghidra, for that matter, but IDA Pro is probably better).

The binaries are available in OS restore images that Apple makes publicly available.

Ian Beer with Google's Project Zero gives an amazing walk-through of what it took for him to build a similar exploit.


"Are the programmers at NSO group just the best in the world?"

The parent comment seems to imply that someone who can find programmers' mistakes is a better programmer than one who actually writes software for the public. If that's true, then wouldn't it be reasonable to prefer messaging software written by NSO instead of Apple? Why don't "security researchers" write the software we use instead of "software engineers"?^1 Which group would be more likely to have "the best programmers in the world", the ones least likely to make mistakes? Honest question. I'm not trolling. I think about this question all the time.

1. Some of the programs I use and rely on every day, even more than something like "iMessage", were written by people who claim to work in "security" or "research" (or even teaching math to university students), not "engineering". I have no complaints about these programs. Yet I have plenty of complaints about the software foisted upon us by Big Tech.

It’s just a matter of the two groups having different skills. One group writes for the general case while the other specialises in corner cases.

The latter looks really impressive when it’s done well, but it’d be silly to expect someone with deep security knowledge to sit down and build a spreadsheet manager from scratch. The two skill sets are just different. There is no “best”.

The hard part is not necessarily finding the programming mistake so much as figuring out a way to reliably exploit it. Back in the day, before ASLR and other mitigations, it was really straightforward, but modern OSs have much more sophisticated countermeasures against turning buffer overflows and use-after-free bugs into RCE.

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

Engineers need to write 1k lines of perfect code, all the while being chased by their bosses to finish fast.

Hackers need to find just one mistake in those 1k lines of code.

Exploit development is a skill like any other. Instead of learning things like software design patterns, distributed systems, software reliability, etc you would have spent time learning about memory layouts, OS designs, mitigation techniques, decompilers, etc.

A lot of times it is just poring over code looking for bugs that have already been found in other locations in the code.

For example, this is a use-after-free bug. You can statically analyze disassembled code to find places where this might be happening, and then figure out how to exploit that instance of the bug.

If you have an organization that can legally hire people, pay them a stable salary, and legally sell exploits to all sorts of people around the world, you end up with NSO.

The NSA finds exploits for its own mission, and Google Project Zero researches vulnerabilities to [per their claim] ensure the internet stays a secure platform, but neither of them sells exploits for profit like NSO.

So, no, they're not the only "genius"es out there. They're just less ethical about it.

I can really recommend the book A Bug Hunter's Diary by Tobias Klein, which is a practical walk-through of finding these kinds of security bugs.


Here's a ranking of the top people for this kind of job


Members of those teams are often Security Engineers at e.g Google, Banks, computer emergency response team (CERT) and so on.

They are not as secretive as I expected, even running a Twitter account. Kind of uncanny.

>We have two Canadians, two Estonians, an Israeli and a Korean

These are security teams doing capture-the-flag competitions, you can literally walk up to them at in-person events and say hi if you'd like. There's nothing illegal going on here.

> I always wonder what it takes to find this kind of exploit. Are the programmers at NSO group just the best in the world?

Not much different from how software exploitation was done in the Win98/ME/XP era.

A lot of vulnerabilities are very obvious from disassembly, and often can even be found with automated tools.

Today, it's easy. Back in the early 2000s, everybody was not only hiding their sources well, but obfuscating binaries in every way possible.

People just forgot the scale of binary-only exploitation at its peak.

I think it's more that the possibility space for exploits is so large that a dedicated force of highly creative reverse-engineers is all you need to dig them up.

From what I've heard it can be almost trivial to find them if you know what to look for. But it seems that very few people know exactly where to look, and fewer still understand how to interpret the results.

https://www.youtube.com/watch?v=zyHI2Ht3OAI Jiska usually finds a couple of remote exploits a year just by looking at a new component/subsystem. It's all a dumpster fire underneath :(

They may have purchased it from an exploit broker.

Zerodium will pay up to $2,500,000 for no-click iPhone/Android exploits [1]. I'm sure they'd only pay that much if they were highly confident they have clients who'd pay enough to make the risk and investment worth it.

[1] https://zerodium.com/program.html

Someone still had to discover it though!

Come on, ever since jailbreaks were discovered (checkm8 being the king of them), you can run pretty much anything on the iPhone itself, including automated tests, fuzzing, debugging, and crash-dump analysis. Breaking is always easier than building. iMessage has been plagued with such bugs since 2010; the question is how it has not yet been rebuilt to a decent quality. Security measures like BlastDoor or ASLR are mostly security theatre that just requires an extra step to avoid.

It's not too esoteric, fortunately. The short explanation is that they are part of the Israeli government, as with all tech companies in that territory, so the government gives certain material advantages to its preferred companies, just as the USA does with offense contractors like Northrop.

Basically, they are propped up by their gov, and that is the major problem.

> I always wonder what it takes to find this kind of exploit.

A lot of knowledge about the target system's internals (comes with experience) and probably a lot of investment in fuzzing infrastructure or A LOT of time reverse engineering and reviewing manually. Finding bugs in closed source software by hand is incredibly slow and painful.

The most recent few episodes of the "Darknet Diaries" podcast are relevant, including interviews with Citizen Lab, descriptions of how NSO works, Black Cube, and the market for buying exploits from Argentina.


Most likely they buy exploits on the market, like basically everyone else. No reason to limit yourself only to the first party knowledge.


As someone who has some familiarity with the people and processes, this response seems extremely off to me.

> Selection starts from age of 4

Care to share your sources for that? As far as I know most are self-taught and get some further training in the military.

> Boring.

It might be boring to some and might be extremely interesting for others. People who like solving puzzles and facing hard challenges usually like it. Of course, if your passion is building you wouldn't like it as you don't "build" something new.

> Usually a group of introverted young kids that look at their own shoes while talking to you, led by an extroverted young kid, that looks at your shoes while talking to you.

Have you met these people at all? Because it definitely sounds like you haven't and you're just describing the typecast some movie would use.

> Care to share your sources for that?

I'm Israeli.

My children were attending/graduated/served kindergarten/school/army in Israel, and I saw the selection process as a parent.

My wife was a school teacher in Israel. She described to me some of the evaluation metrics she was supposed to submit every half year on each and every pupil she had.

> Have you met these people at all?

I cannot confirm nor deny I met these people.

I have lots of friends who are ex-8200 (the upper levels of high tech are surprisingly full of them, actually) and this is the first time I've heard of that. If you mean that the selection that happens at 17 is based on grades and teacher evaluations going back to kindergarten, that might be, but it sounds different from "selection starts at 4yo", which implies that 4-year-olds are selected and then followed all their lives.

> selection starts at 4yo" which implies that 4yo kids are selected and followed all their life.

I mean, by the time they arrive at the final selection process they have been followed all their lives; it is a track record, after all.

Yes, but they are followed then selected, not selected then followed. Which has a totally different dystopian flavor.

So when you said "Selection starts from age of 4", you mean that schoolchildren of this age receive standardised testing?

What does this have to do with the military? What does the "selection" actually entail?

What my daughters went through:

1) At the age of 4, all the parents were gathered to meet the kindergarten personnel. They explained that the kids would play games all year. Parents were separated into groups and given logical puzzles to solve. The results were noted.

For the next two years children were playing games with changing rules to negate natural ability for specific game and to select for ability to find the best strategy within current constraints.

At the same time each parent is given a day to present his/her profession. Results are noted.

Results were passed to school class selection committee.

2) According to the results in kindergarten, kids are grouped into schools. Some are given the opportunity to participate in electrical engineering or robotics activities (my daughter was top 5 in the Israeli competition for 6-9 year olds, with a reduced team).

3) By the end of the second year, some of the parents are notified that there will be an examination. The test is analogous to an IQ test (math, language, general knowledge), graded on a curve for the municipality. The top 8% are invited one day a week for additional activities. The top 2% are invited to special schools with a much more intensive program. My daughter made it into the top 8%. Activities include: decision making, finding solutions within constraints, leading groups of people to solve bigger problems.

4) By the end of elementary school, depending on previous results, kids get access to the full math program (as opposed to reduced arithmetic). Additional activities include software and electrical engineering, robotics, chemistry, physics, and so on. Parents and kids that didn't make it into the top 8% in previous years are not aware of these activities (invitations are sent personally).

5) At the age of 15, kids go through an initial evaluation by the IDF. Good grades in high school will ensure the initial evaluation is upheld; bad grades will negatively impact the chances.

6) By the end of high school, the whole history and psychological profile are passed to the IDF for final evaluation.

> What does this have to do with the military?

In Israel everything has everything to do with military.

You witnessed the hiring process of the NSO Group, which begins at 4 years old in kindergarten? For a company which has existed for 11 years?

I can't agree more with what the above commenter said. This is not infosec hiring, it's Spy Kids.

He didn't say it was NSO, but the Israeli military and specifically 8200

One person's boring is another's career culmination. Breaking system security often consists of dead end after dead end, and even if you get a lucky break, you may hit another dead end after that. Finding an exploit often isn't enough these days, they need to be chained together to actually get somewhere interesting. Personally, it's very unrewarding (aka boring, imho) work most of the time because you don't find anything a lot of the time. (The high off of finding something is something else tho, lemme tell you.) If you're interested in the sort of work involved, http://microcorruption.com is a good CTF to start out on.

You just leaked that the extrovert is a Finn! (the original joke is about a Finnish extrovert).

> Are the programmers at NSO group just the best in the world?

Most people who are good at this are working for national security orgs, blue team in the private sector, or cash focused criminals. This is the relatively small group of people who are comfortable selling tools to help dictators hack journalists up with saws.

I recently learned of this group through the Darknet Diaries podcast. The host does a pretty good job of covering NSO Group in episodes 99 and 100.


I highly recommend reading "This Is How They Tell Me the World Ends", written by one of the guests he had in episode 98, Nicole Perlroth (that episode also touched a little on NSO). She's the NY Times cybersecurity reporter. A lot of the book focuses on NSO, among others.

That book is fantastic and scary!

The noteworthy angle the podcast covers is that NSO is very likely indirectly trying to dig up dirt on Citizen Lab people (the same people the post above is from), as they regularly discover NSO's exploits and cost them money. As Jack discusses at the end, this puts NSO Group into a whole other category if that is indeed true.

Those episodes were great!

It sounded like NSO Group just considers losing zero-days like this a cost of doing business.

There seemed to be an implication that they have a war chest of these exploits and expect them to each get burnt after a certain amount of usage.

I wonder what the US response would have been if the NSO group was an Iranian business.

> cost of doing business

That's exactly what it is. These companies buy, research and stockpile exploits, and keep a few always at ready for when the currently deployed ones get burned. All exploits have a shelf life, and the more widely one is used, the more likely it is to get caught.

Because let's not forget: NSO and their ilk are not in the business of developing exploits. That's just their raw material. They are in the business of selling weapons-grade espionage and surveillance capabilities.

They were paid $55M for a single contract with Saudi Arabia; that money alone is enough to buy tons of 0-days.

Let's not be so journalistic. They also have to pay salaries and buy equipment.

The podcast also says that they have ~60 customers in 40 countries... do the math.

This episode just came out last week, and this is the second time NSO has made news since it aired (along with Germany being a confirmed client.) Surprisingly apropos, but I imagine Jack's disappointed the big news makes it just after his episode's release on the subject.

Someone remind me why Germany needs to be installing Israeli spyware onto citizens phones? We know this software's only purpose is to track down wrongthink and then murder dissidents.

Massive blow to the integrity of European telecoms.

Germany has comparably little domestic talent, partly because the Bundestag outlawed “hacking tools” in the 00s.

Because they learned nothing from Gestapo and STASI.

It might just be used with warrants for phones that are used by strongly suspected criminals.

The NSO Group is owned by Novalpina Capital, a British private equity firm. It's not really accurate to call it "Israeli spyware."

Can you stop posting this bait in every single thread about the NSO? It's really annoying that you repeatedly drag people into shallow semantic arguments for dumb (nationalistic?) reasons: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

What a disgusting accusation. This isn't "bait," it's a fact. And my reasons aren't "nationalistic" -- it's to be accurate.

Frankly, that is rich coming from you considering that you do this often enough that I have several of these specifically directed at you: https://news.ycombinator.com/item?id=25492587. Posting flamebait and then editing it to make the people who respond to you look stupid is against the guidelines. Posting "corrections" or "gotchas" every time a topic comes up is not striving for accuracy, it's being purposefully misleading to violate the guidelines. I am sick of you pretending each time that you aren't seeing the many people who tell you you're wrong or that you should stop. Until now I had held out hope that you were going to stop at some point, especially considering your productive contributions elsewhere, but I think I've given up now.

Who owns Novalpina Capital ?

GuardianUK says:

"The Guardian reported this year that hundreds of thousands of euros of Yana Peel’s legal bills were expensed to the NSO Group by her husband – another move that apparently angered his partners.

Stephen Peel’s lawyers said at that time that the “manner” in which the legal fees were paid had been approved by Kowski and Lueken, and he strongly disputed the suggestion that the payment of the expense claims was a source of disagreement between the partners.

Peel, Lueken and Kowski are all now involved in a legal dispute over the future ownership of the firm they created."


If you're interested in infosec/appsec, DND is a great place to get started. The host packages up stories in a well put-together way, has no qualms about breaking to explain a concept or term, and does it all within an hour.

It is increasingly bizarre, in my opinion, how this company (and others like Toka) can run what amount to active terrorist operations, when if anyone smaller were doing some of the same hacks they would be in prison for a very long time.

People have lost their lives due to these pariahs!

Israel already has a massive PR issue with other countries; it would do them well to rein in these offensive front arms of their government/'companies.'

Citizen Lab is really a great thing for civilization. There are not enough altruistic organizations.

Just finished reading https://www.amazon.com/This-They-Tell-World-Ends/dp/16355760... which is a great book about the zero-day market and how it evolved over the years.

The basic issue is that every nation is actively buying and using zero-days and doesn't want to stop. And companies like NSO aren't really (so they say, at least) hacking anybody. They just develop and license hacking tools to governments to use for "lawful" law enforcement purposes. So nobody wants to ban the zero-day market, because every country is a huge buyer of zero-days itself, and it is hard to ban selling zero-days to sovereign governments who are using them in accordance with their own laws (even if the regimes in question are terrible and using them to violate their citizens' basic human rights). After all, it would be a bit awkward for the US to demand that the NSO Group stop selling its hacking tools to Saudi Arabia while we have a multi-billion dollar defense industry selling the Saudis all sorts of advanced weaponry.

This is the world we're living in: kill a man, you're a murderer; kill 100, you're a hero.

This is the international arms trade. As long as you have state endorsement and aren't violating UN sanctions, there are zero consequences.

And if you think this is in any way unusual, take a look at the places the US and China and France sell weapons to.

Israel's specialties happen to be software exploits and EW equipment, but this isn't a deeply different interaction.

> Israel already has a massive PR issue with other countries,

But for these middle eastern countries Israel selling them exploits which allow them to spy on dissidents may actually improve relations by helping out regimes which would otherwise be sworn enemies of theirs…

It just makes me so uncomfortable that these things keep happening. We always find out about these things eventually but what percentage of the time are our devices vulnerable? Isn’t it close to 100% of the time that our desktops and mobile devices have significant security vulnerabilities?

The way I describe it to friends and family is that there are basically two levels of protection:

- Protecting yourself from run-of-the-mill malware that is looking to make money off of you. You can do this pretty effectively by always updating your software as soon as you can and avoiding sketchy and unnecessary apps and websites

- Protecting yourself from an attack by a nation-state level agency. I don't think there is any way to be safe from this, and people who are targeted like this need to use protections that go well beyond the choice of cell phone or chat app

Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@ virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. In summary, https:// and two dollars will get you a bus ticket to nowhere. Also, SANTA CLAUS ISN’T REAL. When it rains, it pours.

[PDF] https://www.usenix.org/system/files/1401_08-12_mickens.pdf

I think this understates the threat of privatized hacking tools. Governments that can barely tie their shoelaces now have access to capabilities that only a few heavy hitters used to have. One example: In Mexico NSO software was used to target anti-obesity activists who were pushing for less soda pop consumption.

The funny thing is that despite all of this high end, super secret, extremely sophisticated technology used against them, those activists won in the end.

It's a well-known piece, but it's from 2014 (or earlier), and the world was different back then.

> Protecting yourself from an attack by a nation state level agency.

My personal data was hacked by a nation-state level agency. The only way I could’ve prevented that is by not working in a national security position for that country’s geopolitical rival.

Now the only thing I can reasonably do is avoid ever stepping foot in that country lest they detain me for “extra questioning.”

Sorry… sounds really rough.

Eh, thanks, but don't feel bad for me. There are hundreds of other countries I can visit. I feel bad for the dissidents who are targeted within their own country and have no hope of leaving.

And worse, targeted abroad. Russia, China, Saudi Arabia: they all target, sometimes kill, sometimes abduct abroad. Even in the US... Scary.

Which country?

The OPM breach was attributed to China. My personal data was also disclosed in the breach and I’ve since traveled to China multiple times.


Until run of the mill malware learns of a vuln only thought to be known by nation states, and then all hell breaks loose.

Don't know why you're getting downvoted; that's literally what happened with WannaCry.


This is sort of in the middle. NSO Group's exploits are surely expensive, but they are also not pinpointed. The states buying these exploits aren't spending the unlimited resources at their disposal to do the exploitation; it just costs them cash. This is one of the things that likely promotes the proliferation of this stuff, since it is so easy to pick another target.

So I do think there is a level between these two where you can be defended against nation states that will use COTS-equivalent exploits against you even if you won't resist an active attempt by a full team targeting you very specifically.

But doing this is hard as hell in the modern world, because so much of our devices' attack surface is riddled with memory errors.

Time to be “that guy”…

“Nation state” is a well-defined term in the political sciences, and we misuse it here on HN all the time. To quote Wikipedia:

“A nation state is a political unit where the state and nation are congruent. It is a more precise concept than "country", since a country does not need to have a predominant ethnic group.”


Nation-state is often used in a different sense to distinguish the participants in the Westphalian system of sovereignty from other entities that might be labelled nations and/or states; this use derives in part from the fact that the Westphalian system is itself considered the turning point to nation-states (in the sense the parent describes) as a general norm, and that the participants in that system are generally also nation-states in that primary sense. (While "state" alone is often used for this where context makes it clear that this sense of "state" is intended, there are lots of other uses of "state" - particularly for subordinate units of certain Westphalian sovereigns - which can create ambiguity, and "Westphalian sovereign" is a lot more cumbersome than "nation-state".)

But the Westphalian system explicitly emphasizes the importance of the boundaries of the state vs the size of those boundaries. The HN usage tends to imply that “nation state” is something particularly impressive. But “an attack by a San Marino-level agency” doesn’t convey that same level of impressiveness.

Yeah, in security, “nation-state level actor” is used to mean “the most capable category of attackers, most (all?) of whom are particularly powerful nation-states [0]”, not “attacker at the level of at least the least-capable nation-state”.

[0] In the “Westphalian sovereign” sense.

Russia is 81% ethnic Russian, per Wikipedia. I think that's close enough to qualify for "nation and state are congruent".

Sure, it might make more sense to define this as "state-level agency", but that would confuse things for Americans. My internet security threat model ignores the state agencies of Montana just as much as yours ignores those of San Marino.

That wasn’t a misuse though - was it?

Well, perhaps the original poster was using it accurately.

In my experience, the common HN usage really translates to “country with a big military budget”, which is not at all what the term means.

Neither the US nor Russia are nation states. China and San Marino are both nation states. I’m guessing the poster meant “countries like the US, Russia and China”, and not “countries like China and San Marino.”

Honestly I think they just mean "state". Yes, some states have more resources than others, but the ones without a lot of resources generally aren't engaging in cyber attacks, and "state" as a general category is good enough summary.

I think people say "nation state" in part just because it flows better rhythmically, and in part because of that whole "westphalian" thing; and because the word "state" has other confusing meanings (including in CS, state as in 'state machine'; and the 50 USA states).

But really, on HN, when talking about "threat actors", they mostly just mean "state-level". (See, I had to add "-level" to make it rhythmically like "nation state" again; the one-syllable "state" is just too short, it plops into your sentence and ruins it.)

[Hey, why is it called the United Nations instead of the United States anyway? Oops, cause there already is a United States. But the UN is clearly an organization of States not Nations. But the things are conflated and confused generally in European nationalist ideologies of the 18th-20th centuries, that have affected our vocabulary and concepts for these things, it's not just HN. "Nation" is often used as a synonym for "State", so "nation state" ends up just kind of doubling down]

I say "state-level actor".

Almost any contemporary liberal democracy (and not only those) at least formally defines itself as a state of its citizens, not as belonging to any particular "nation" (i.e., ethnicity, basically). I don't see the point in distinguishing between states that are "nation" states or not in the 21st century, nor do I think it's a clear distinction.

>Hey, why is it called the United Nations instead of the United States anyway? Oops, cause there already is a United States. But the UN is clearly an organization of States not Nations.

States are sovereign political entities; of course modern countries tend to have a federal state made of several constituent states (see: USA, Germany, etc) where each claims certain jurisdiction. In ancient times there were city-states like Athens, Sparta... and even in 18th century Europe cities like Venice were states (Republic of Venice).

Nations are people united by something they have in common. That could be shared history, language, culture, the geographic area they live in, or something more abstract like fandom of certain sports teams or other hobbies.

There is considerable overlap between nations and states, and given state is already overloaded, extra words are added for clarity.

I like "state-level" because these sorts of exploits and attacks are really about resources, not sovereignty, territory, etc. The fact is a rich person or company could fund a team that does vulnerability research and get results on par with the top tier folks already doing it.

And, the UN should be called the "United Countries" since it is really about territorial areas. They admit members based on geographical claims; I don't see any ethnic, cultural, or fandom group (that isn't in control of some territory and thus also country/nation) as a member.

It's to distinguish the hypothetical attacker and their resources from an individual or group of individuals. The threat to my personal health is the same whether Mossad is after me, or a particularly violent jilted ex-lover, or the local gang/cartel/drug dealer I took down (i.e., they all want to kill me), but the level (and possibility) of defense against each of those threats is vastly different.

> I don't think there is any way to be safe from this

Apple could certainly do a lot more to protect their customers, and we generally let Apple off far too lightly here. For starters: using their enormous revenues to bid up the prices for these cracks. Writing better software, eg using well-known techniques to harden imessage. etc.

Also they could treat their employees better so there’s less churn. Every newly-hired kernel engineer is bound to repeat the same technical mistakes that their predecessor made a decade ago.

Might be a business model for a Kernel engineer:

* Go work for Apple

* Learn vulnerabilities

* Resign

* Sell vulnerabilities for cash on the dark market

Edit: formatting

But is this because computers fundamentally cannot be made secure, or due to backdoors and sloppy coding? I've heard BSD is pretty secure, right? Couldn't we make phones that secure if we didn't bloat them with flashy new features every six months?

Invulnerability for your devices is a chimera. You can only do what's possible in your capacity to secure yourself.

I am at peace with the fact that I'm doing the best I can and keeping those I love protected.

The problem is that we're moving into a more and more digital world where it's not possible to even opt out. Estonia had their ID card photo database hacked.[0]

>A hacker was able to obtain over 280,000 personal identity photos following an attack on the state information system last Friday. The suspect is reportedly a resident of Tallinn.

>The culprit had already obtained personal names and ID codes and was able to obtain a third component, the photos, by making individual requests from thousands of IP addresses.

How do you protect yourself against that when the government requires you to have an ID card and puts you into the database? What happens when financial transaction logs get hacked or medical histories?

[0] https://news.err.ee/1608291072/hacker-downloads-close-to-300...

Yeah I find it worrying how society only cares about what is technically possible and not what is realistically safe and secure. We could build taller and cheaper buildings if we ignored standards and just accepted that sometimes they fall over. But we don't because that is insanely dangerous.

But now with tech the risk is invisible unlike a collapsed bridge. In Australia it is basically impossible to live a normal life without bringing your phone everywhere because they mandate that you scan QR codes before entering stores and the manual written forms are usually hidden behind a counter and on request only.

Security has always been relative. I feel much safer knowing that an exploit like this is worth hundreds of thousands or even millions of dollars.

It keeps them closely guarded and selective about use. All of that makes me an unlikely target and reduces individual risk.

> I feel much safer knowing that an exploit like this is worth hundreds of thousands or even millions of dollars.

I don't. Look at how much companies like Apple pay out for responsible disclosure if they pay out at all, and then compare it to what exploits go for on the grey/black market. Typically the buyers have deep pockets and burning millions of dollars wouldn't make them blink.

Why does it matter if it’s the “good guys” or “bad guys” paying?

If a vulnerability only cost ~$100 then a malicious person could compromise an ex lover’s phone, for example. The fact that they are expensive means that their use is limited to targeted, strategic attacks. You don’t have to agree that those attacks are good, but surely pricing the average person out of 0-days is better than the alternative.

> The fact that they are expensive means that their use is limited to targeted, strategic attacks.

There are organized crime networks that pull in billions of dollars of revenue a year. If they wanted to pull off dragnet fraud, for example, they have the funds to do so.

>Why does it matter if it’s the “good guys” or “bad guys” paying?

Who do you think are more likely to use the vuln/exploit on regular everyday users? The nation state people are going to use it on targeted persons/groups (typically) while the "bad guys" are going to use it so they get the greatest bang for their buck.

Or the nation state uses it against everyone in a dragnet operation? Also, specifically targeted people by nation states often are "regular everyday users". They just happened to draw the ire of the wrong person.

But still, I feel relatively safe knowing/thinking that the Saudi government doesn’t want to hack my iPhone.

Organized crime might, as they orchestrate fraud, blackmail etc networks all over the world.

It makes me wonder what people like Bill Gates or Jeff Bezos use for their phone security.

For sure they are much more interesting targets than I am, therefore burning a few 0-days might be worth the effort.

Wasn't Bezos' phone hacked by the Saudis?

oh didn't know that

Yes, but it can be somewhat mitigated by not using SMS or iMessage.

Don't share your sim's phone number with anyone for any reason whatsoever (or don't put a sim in the phone at all and use an external wifi router (this is what I do), or use a data-only sim), and ensure that iMessage and iCloud are disabled.

This doesn't make your phone invulnerable, it just makes it less vulnerable.

That's exactly why I started scratching my head as to why the entire web security model assumes a trusted execution environment. That no longer makes sense in today's world.

Naively, it looks to me like an artifact of the 90s OS security model. The modern web, and the threats of the modern world, require more stringent security facilities at the OS level: isolation of security contexts even from superusers, with per program-origin, per identity, and per-process context isolation. Superusers having the ability to read and write in any security context is no longer appropriate; at most, superusers should only be able to deny and delete. That's the only way to protect end-user privacy.

Sandbox escapes are part of most serious exploit chains nowadays. They make things harder for exploit authors but absolutely do not fix the problem at a fundamental level. iMessage runs in a sandboxed environment. Doesn't stop the exploit in the article from getting root.

Qubes OS [0] is based on a different security model: security through compartmentalization.

[0] https://www.qubes-os.org/

I can't find a link to it now, but there was a blogpost on how all other non-compartmentalization approaches to security had failed.

That’s the one! :)

This is largely how iOS works.

You would expect quality from a commercial product because of all the investment being put into it, but these exploits say otherwise. Open source projects may have contributors who care on a different level. We might have to figure out a way to go in that direction eventually, considering how dangerous this is getting; many people depend on the quality of a product to ensure safer communication, and for some it is a life-and-death situation. So yeah, it's sad that this keeps happening; it seems like we could think of a better way to keep it from happening as often.

> Isn’t it close to 100% of the time that our desktops and mobile devices have significant security vulnerabilities?

It is 100%. The sadder reality is that the most likely weak link when it comes to exploiting your device is you.

It might be the outrage goggles, but OpenBSD is sure looking good lately. https://www.openbsd.org/security.html

Until it becomes more mainstream; every OS can be exploited if it's lucrative enough.

One company, which likely has a retention problem, is writing all of the code for your system and setting things up so that you can’t easily use anything else.

Do you think this is a recipe for secure computing?

Why do people keep writing file format parsers in unsafe languages?

I think it's mostly that people are continuing to use file format parsers that were written in unsafe languages in 1998.

I do sometimes wonder what a "Manhattan Project" of software security would look like. I do think rewriting all common file parsers in <X> would be a very achievable project with a budget of a few tens of millions of dollars - nothing compared to the potential savings. The issue is then getting people to actually switch over. I think that a PR push by NIST et al. could help convince the slowpokes that the "industry standard" has changed and they need to do something to avoid liability.

> nothing compared to the potential savings.

How do you estimate the financial damages here though? It's not like anybody's really going to stop buying iPhones over this. Not to any real degree. There's some brand damage to Apple but that calculation's highly debatable and swings around wildly. Which is the problem. Digital security is impossible to put a price on, because until someone is actively exploiting it, it costs WAY less to do nothing about the situation.

Yes; in fact, if the NSA, China's MSS, Mossad, and other nation-states are betting on these kinds of exploits existing in order to do their really dirty work (even if they contract it out to NSO Group), then fixing them would be detrimental to those agencies.

With the kind of resources Apple has, you could write a PDF parser from scratch in Rust or Swift (it is 100% memory-safe, right?) or whatever else, in the background, maybe as an experimental project, and then replace the existing one with it when it's mature enough.

Microsoft at least started rewriting some components of Windows in Rust. Though they aren't saying which ones.

1. There are publicly visible Swift rewrites and sandbox enhancements.

2. With the number of projects Apple has, "trivial" enhancements like this add up very quickly.

It is starting. I've seen big companies start shifting towards this future over the last couple of years. In discussions with other security professionals across various companies, it is appearing more like an inevitability that a shift to memory safety is coming, in one way or another. It is moving slower than I'd want, but the discussion feels very different than it did just three or four years ago.

Sure, tech companies and even just random people are already working on it piecemeal. I just think that if someone with resources put a concerted effort into it, we could replace all the parsers of untrusted data in e.g. Chrome within 2 years. If a government did it, then it could be justified as benefiting all of society, rather than one individual product team having to justify the effort for their own use.

In short, legacy code.

For those that are uncomfortable with this state of affairs, I recommend this presentation: "Quantifying Memory Unsafety and Reactions to It" https://www.youtube.com/watch?v=drfXNB6p6nI

It's the same as asking, what percentage of the time is science wrong? 100% of the time, yes. We're trying to approximate correctness and the plan is to get a bit closer every day as new information becomes available.

At least in Android the level of security is comparable with Windows for Workgroups 3.11. There is no access control except all or nothing. There is an OS which actively spies on you.

When have you last used a modern Android OS? This is just not true, it offers exactly the same kind of controls as iOS does.

Their high-confidence attribution to NSO Group is described as being based on two factors:

1. Incomplete deletion of evidence from a SQLite database, in the exact same manner observed in a previous Pegasus sample;

2. The presence of a new process with the same name as a process observed in a previous Pegasus sample.

But isn't it likely that someone with the skills needed to discover and weaponize a chain of 0-day exploits is incentivized and able to detect these quirks in Pegasus samples and imitate them, with the goal of misattribution?

Of course, there may be more factors involved in the attribution that aren't being shared publicly.

It seems like incomplete deletion of data is an error. If you are an exploit developer looking to throw investigators off your trail, it is one thing to name your processes with Pegasus names. It is another to deliberately introduce errors in your exploit to appear like Pegasus.

Your proposal is possible. It is just less likely than that this exploit was developed by NSO Group.

Since when do we assume misattribution in fingerprinting APTs?

Crowdstrike will find out it's clearly Russia behind this and Mandiant will blame China.

It usually is the US, China or Russia though; the three have a large number of experts for this. And unless you find an error in the attribution process, it is most of the time backed up by data that appears plausible, like a server or code fragment.

>It usually is the US, China or Russia though

Interesting that you're leaving out Israel from your listing while the very subject of this article is Israeli offensive cyberwar and espionage capabilities and a profound lack of ethics.

What I was trying to convey originally is that attribution is politically expedient. If you want to saber-rattle towards China you task Mandiant to find proof of Chinese hacking, if you want to blame Russia Crowdstrike gets the job. It's like employing McKinsey consulting to give a veneer of credibility to a predetermined outcome.

Buried lede: Apple has patched that particular exploit [1], and everyone who wants to be protected should update to iOS 14.8 now (no doubt NSO has other tricks up their sleeve).

Edit: Just realized it impacts macOS and watchOS as well, which were also patched. Patch Monday!

[1] https://support.apple.com/en-ca/HT212807

Sounds like the buried lede here is that the biggest company in the world is having its products actively interfered with by a small shed in Israel run by war criminals. Presumably in this world in 2021 we have mechanisms other than finding their digital fingerprints to stop that.

Imagine if it was a Russian shed

Can you please elaborate why you think NSO group is run by war criminals?

And they don't work out of a small shed. But the metaphor is not that bad: those guys walk the edge of the law, probably crossing to the wrong side more than once but never getting caught.

A shame there's apparently no update coming out for Mojave.

So much for backwards compatibility. The last version of macOS to support 32-bit apps...

I wonder if running as a standard user would offer some form of protection.

Pretty soon the choice will be between:

- vulnerable to the latest published exploits


- vulnerable to clientside scanning of your media for wrongthink by Apple for the CCP

Smash that iOS update button and do your part for the party!

> Pretty soon the choice will be between

What about "Don't use Apple products"? I know that Android is just as bad in many ways...

And if all options in the modern tech industry basket of choice are terrible, well... humanity survived without them for an awfully long time.

I've gone back to a flip phone from an iPhone. I no longer use Windows if I can at all avoid it (there exist a few sysadmin tasks involving netbooting Mikrotik devices for major OS updates that are far less painful on Windows than other OSes), and have no plans to let Win11 in my life. And Apple is heading out the door too. Throw in my dislike of Intel, and... yeah, it's getting pretty thin pickings. I still have an iPad with no accounts on it as a PDF reader, but I'd like to replace that with something else (Remarkable or such).

"Agh, this is soooo terrible, but I'm going to keep using it!" just means, in practice, it's not that terrible.

> "Agh, this is soooo terrible, but I'm going to keep using it!" just means, in practice, it's not that terrible.

I don't think this is the only conclusion here.

I think we should acknowledge just how central personal computing devices are in society in 2021. Sure, it's true that humanity survived without them, but at that time, societal norms were drastically different. Removing tech from daily life today can be crippling, and that's part of what makes some of these issues so terrible. They directly threaten our daily lives.

I'd argue that it's possible for the thing to be "very terrible", and to conclude that it's still your only option to continue using the Apple/Google ecosystem.

- Not all users have the financial means to switch. The iPhone they own is the one phone they'll buy for the next 3-4 years.

- A growing number of users have only an iDevice and no standalone PC. Couple this with #1, and things get even more difficult.

- The utility afforded by the Apple ecosystem is high enough (or virtually required depending on one's job) that it outweighs the current set of downsides.

If a corner store owner pays a weekly fee to the local gang "for protection", it doesn't necessarily follow that because the owner chooses to pay the fee, the extortion must not be soooo terrible.

Good for you, not good for 99.99% of the population. For nation states, that is mission accomplished! You never get 100% compliance with anything with large numbers.

Both Apple and Google scan your cloud synced files. Neither of them claim to scan your local only files. So the choice is rather pointless as they both hold the same position.

Android lets you use your own service with the same features as Google's or Apple's. iOS does not.

What do you mean by this? I use Google Photos on my iPhone and it seems to work perfectly fine. I'm assuming you are talking about background sync, but I just checked via the web version and my photos from yesterday are all there, so it seems that background activity is allowed while plugged in, since I have not opened the Google Photos app in a while.

> What about "Don't use Apple products"? I know that Android is just as bad in many ways...

Time to consider GNU/Linux phones, Librem 5 and Pinephone.

The irony is that the further behind you are on iOS updates, the easier (cheaper?) it is for the CCP to run surveillance exploits on your device, a la the Uighurs.

You can either trust Apple, or lose all security updates.

Anybody know why there was such a delay here?

> In March 2021, we examined the phone of a Saudi activist who has chosen to remain anonymous, and determined that they had been hacked with NSO Group’s Pegasus spyware. During the course of the analysis we obtained an iTunes backup of the device.


> Citizen Lab forwarded the artifacts to Apple on Tuesday, September 7. On Monday, September 13, Apple confirmed that the files included a zero-day exploit against iOS and MacOS.

This has been asked and discussed in this thread already: https://news.ycombinator.com/item?id=28516795

In short: Just because they got access to the phone in March doesn't mean that they were already aware of the zero-day exploit back then. Finding this kind of stuff takes a while.

We (the public) have known about FORCEDENTRY for 6 months. That time was spent analyzing and understanding the exploit. It does seem like a long time for such a public zero-click affecting 100s of millions of users.

I once worked in a 'dissident' org (supported by the US Agency for International Development) - these orgs were fighting for human rights in their countries. In one extreme case/country, no one knew my prospective project teammate's real name (I came to know this later), though she was our colleague and quite social and pleasant. In her country's expatriate circles in DC, she was worried about foreign spies. Family back home is at risk, and so is she, even if she lives in DC. These are brave people.

She wanted to build a database of something, and we were like, "keep your phone in another room" if you want to come discuss. Something that I am not sure she practices but more people need to practice.

CitizenLab is doing yeoman's service for people's rights to privacy and human rights. They're heroes.

I'm glad you put "dissident" in quotes. USAID is notoriously rife with CIA plants and many CIA operatives use the organization as cover, which implies that a nation targeting its members would have a lot more justification than a homegrown activist. That USAID might be targeted by hackers is mostly a consequence of the US government's decision to use it as a front for clandestine operations overseas.

Regardless, they do help real dissidents. People, who are at risk in their home country as they are perceived to be a threat to their authoritarian government.

I can't find any 'dissident USAID' outfit concerned about the fate of Julian Assange or Edward Snowden, however. Seems like 'human rights concerns' are highly conditional on the amount of money a repressive government invests in Wall Street.

> supported by the US Agency for International Development

Isn’t it more usual for the NED to do such things? I remark upon this because it occurs to me that using USAID to do politics might make recipients suspicious of aid even when it’s both necessary from a humanitarian perspective and unlikely to threaten the ruling dispensation in the recipient country. (This is a separate question from whether the NED/US government as a whole should even involve itself in such matters, to which my answer is ‘maybe’, since the dubious stuff probably happens anyway and lots of these civil society organisations &c. actually do good work [e.g. the The Assistance Association for Political Prisoners in Burma.])

True... I was slightly inaccurate. This org had various USG funders, with a large slice of funding from USAID projects. Washington is full of these 'USAID contractors', some tiny, others mega-sized. But this project may have been funded by a division of the US Dept of State that is focused on human rights - DRL. Not sure where the lines are about which one USAID gets and which one is State. For example, development of journalism in an emerging country would be USAID. But OTOH, a project promoting free elections in the same country could be State. Not sure.

In any case, they span the range from benign to hostile nations, with varying risks attached. The "About" page for many such sensitive orgs would be silent on who the team was, except for Americans (like me) who didn't mind having their name out there (or nervously okayed the name being public).

It seems like the NSO group is some kind of Hydra where every time their exploits are thwarted they find 2 new ones. The difference is that Hydras go for demigods while NSO products target civil servants and minorities.

> Despite the [gif] extension, the file was actually a 748-byte Adobe PSD file.

I wish programmers would stop "helpfully parsing" files which are named with an "incorrect" extension. If a random unknown person sends me a file with .gif extension that is actually a PSD file, I most definitely do not want my machine parsing whatever that thing is.

Discourse avatars point to a page with a .png extension regardless of what the actual file is (jpg, gif, or svg). Parsing file headers should not be a dangerous operation and in my opinion is the right thing to do.
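For illustration, here is a hypothetical content sniffer that keys off the leading "magic bytes" instead of the filename extension; the signature values for GIF ("GIF8"), PNG (0x89 "PNG"), and PSD ("8BPS") come from those formats' public specifications, while the `sniff_type` function itself is made up for the example:

```rust
// Hypothetical sketch: deciding a file's type from its leading bytes
// ("magic numbers") rather than trusting the filename extension.

fn sniff_type(bytes: &[u8]) -> &'static str {
    match bytes {
        [b'G', b'I', b'F', b'8', ..] => "gif", // "GIF87a" / "GIF89a"
        [0x89, b'P', b'N', b'G', ..] => "png", // PNG signature
        [b'8', b'B', b'P', b'S', ..] => "psd", // Photoshop "8BPS"
        _ => "unknown",
    }
}

fn main() {
    // A file named "photo.gif" that actually begins with the PSD signature
    // is identified by its content, as in the attack described above.
    assert_eq!(sniff_type(b"8BPS rest of file..."), "psd");
    assert_eq!(sniff_type(b"GIF89a..."), "gif");
    assert_eq!(sniff_type(b"\x89PNG\r\n\x1a\n"), "png");
    println!("ok");
}
```

Whether the sniffed type is then handed to a full decoder is the dangerous part; reading a few signature bytes like this is cheap and safe on its own.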

The easiest way for Apple to find Zero-Day exploits is presumably just to register an iPhone to some Saudi activist and regularly take memory dumps.

You joke, but maybe one way to fight this proactively is with fake activist honeypots. Apple, a company with the size and budget of a nation state could certainly pull off such an operation for the security of the devices they sell, but obviously, and maybe unfortunately, this would never happen.

That's actually where this particular knowledge came from. Citizen Lab dumped a Saudi activist's phone and told Apple what they found.

Morbid, but hilarious. Well done.

I miss the days when iOS exploits were merely used for jailbreaks and allowing alternative app stores, instead of being weaponized/monetized as they are now.

Were they? maybe we simply haven't heard about those

Image and video decoders seem like exactly the right target for formally verified (i.e. proven) implementations. There are just so many moving parts, and libraries get re-used in many projects, rarely forming the 'special sauce' in any given app.

I have been keeping an eye on the work done by what is now called Project Everest[1] over the years in the communication and cryptographic space.

Is there similar work in the image and video decode space? My search-fu is not yielding anything beyond some hardware decoding proofs.

[1] https://project-everest.github.io/

Does anyone have context on why it was withdrawn?

It's a feature, not a bug.

Is there a reason why quarantining image attachments from unknown senders hasn’t been standard industry practice ever since Stagefright?

Apple specifically introduced BlastDoor framework to combat this, so NSO shifted their attacks around decoding, avoiding BlastDoor.

Android 10 also introduced similar mitigations: https://android-developers.googleblog.com/2019/05/queue-hard...

Though it's worth noting that the cost of Stagefright was surprisingly low - it took a long time for a good ASLR bypass to come out for it and by that time most devices were updated or replaced. Additionally, the sheer variance between Android devices means developing worm-level exploits becomes extremely difficult compared to something where everyone's running the exact same binary like Windows, so it likely only saw targeted use.

Project managers like the pretty inline previews! Security? Pssh that's just for nerds.

This report says they discovered this in March.

The NY Times [1] just reported that "Apple’s security team has been working around the clock to develop a fix since Tuesday, after researchers at Citizen Lab, a cybersecurity watchdog organization at the University of Toronto, discovered that a Saudi activist’s iPhone had been infected with spyware from NSO Group."

What took so long? Did Apple not know about this in March or was someone sitting on it for 6 months?

[1] https://www.nytimes.com/2021/09/13/technology/apple-software...

“Citizen Lab forwarded the artifacts to Apple on Tuesday September 7.” — from article, no need to jump to unwarranted conclusions about Apple. “In March 2021, we examined the phone of a Saudi activist” - it would be interesting to know the reason why Citizen Lab delayed so long. Hopefully they just wanted time to discover who else was being targeted?

> In March 2021, we examined the phone of a Saudi activist who has chosen to remain anonymous, and determined that they had been hacked with NSO Group’s Pegasus spyware. During the course of the analysis we obtained an iTunes backup of the device.

> Recent re-analysis of the backup yielded several files with the “.gif” extension in Library/SMS/Attachments that we determined were sent to the phone immediately before it was hacked with NSO Group’s Pegasus spyware.

Seems like they originally examined the phone in March, but recently did another analysis, during the course of which they discovered the exploit and reported it to Apple.

I assume it takes time to go from "this person could have potentially been targeted with Pegasus" to "this person's iPhone was exploited by Pegasus, and here is how they did it."
