'Karma': A hack used by the UAE to break into iPhones of foes (reuters.com)
423 points by MrMetlHed 82 days ago | 224 comments



Whether or not this hack was developed with the help of Apple (a “backdoor”) or by a third-party exploit, this is exactly what a “golden key” looks like after it gets in the wild.

An espionage tool developed by a major world power proliferates to totalitarian regimes, aided and operated by ex-NSA agents on the payroll, to compromise human rights activists and the political opposition.

If ever there was proof that our devices need to be striving — constantly striving — for absolute security, and can never allow any “trusted party” an authentication or encryption bypass, this article is it.

An exploit like this is incalculably valuable to intelligence agencies. That the exploit would proliferate is undeniable. And the ends to which it would be (has been) used are atrocious.

Probably the only thing different about how intelligence agencies exploited this, and how they would exploit a golden key, is that with the golden key they would be sweeping up every photo on every device, and not just some photos on some devices.

“It was like, ‘We have this great new exploit that we just bought. Get us a huge list of targets that have iPhones now,’” she said. “It was like Christmas.”


Nobody credible believes for a second that Apple was involved, for whatever it’s worth.


I was not trying to suggest that, I was trying to convey that once a spy agency has remote access (sanctioned or otherwise) we can see by this example how it is proliferated and abused.


yeah, but the Australian Government is busy passing laws to require companies like Apple to do something pretty much exactly like this.

(And in my mind at least, those laws are without doubt part of a coordinated five eyes security/law-enforcement campaign to push those kinds of laws through everywhere: "Look, it works in Australia!" the Canadians/UK/NZ/US will say...)


That's precisely the GP's point.


Yeah. Right now, nobody (credible) is accusing Apple of enabling this kind of exploit.

If Apple are still selling hardware in Australia in 12 months' time, the suspicion will _have_ to be that they have enabled something similar enough to this to be considered untrustworthy... (And not just Apple, any manufacturer or software company doing business in Australia...)


> I was not trying to suggest that

You were, simply by stating it. It's the same type of saying-but-not-saying line that the media uses, like "if this allegation proves to be true" or similar phrases. The fact that you state it suggests to the reader that people think it. If it was an honest mistake in wording on your part, that's one thing, but you should probably avoid such phrasing.

Look at all the people responding to the post you responded to saying that Apple did it. There's no reason to give a forum for such ideas.


No, they were not. They said "whether or not..". That does not imply Apple did anything.

In this case, it doesn't matter whether or not Apple even had a hand in it. The issue is that security exploits are security exploits, regardless if intentionally designed or not.


> "Nobody credible..."

Your implication that doubting Apple is not credible is itself ludicrous! Why is Apple so damn special?

At this point, there isn't any evidence that Apple is involved, and yes, they go on a PR blitz to focus on privacy and security, and get hit pieces published on how "privacy is a feature on the iPhone". But the history of backdoors suggests that no one voluntarily reveals them (Intel, Cisco, Juniper,...). In many cases, how the backdoors made their way in is a closely guarded secret, specifically to enable plausible deniability.

It's best not to put a company on a pedestal, like some religious cult.


Can't tell if you mean credible as in credible or credible as in "credible"


Could you please provide some sources?


Wouldn’t it be far more useful for you to provide someone credible making such a claim?

If he was wrong, this claim would be so trivial to refute that sources seem entirely pointless.


To clarify my earlier comment, I wasn't taking a position one way or the other.

lawnchair_larry made a claim about "nobody credible" believing something. As a lay person in the field, I don't know who these credible people are. Therefore I asked whether lawnchair_larry could tell us who one or more of these credible people are, and where we can read more about what they believe to be true about the situation.


Asking lawnchair_larry to give you a complete list of everyone he considers credible is disingenuous.

As the other commenter stated, if someone thought that someone credible made that claim they could simply provide that source, which both (1) involves asymptotically less effort for everyone involved and (2) under reasonable assumptions is at least as effective.

“Sir, I’m going to need for you to list for me all the crimes you didn’t commit last Saturday.”


> Asking lawnchair_larry to give you a complete list of everyone he considers credible is disingenuous.

Implying that was asked seems disingenuous to me, but I assume that wasn't your intent.

What I see is a request for clarification, specifically asking for the sources from which a conclusion was drawn.

It's not a police raid. Relax.


How is he supposed to refer you to people not saying something?

This is most definitely a rather silly request.

Can you prove that magic doesn’t exist?


> How is he supposed to refer you to people not saying something?

That's not what I was asking for. I was asking him to refer me to credible people saying that they don't believe Apple was involved. Here is what he said:

> Nobody credible believes for a second that Apple was involved, for whatever it’s worth.

I thought perhaps he had read about credible people making this claim, or had some other way of knowing what these people believe. I was simply asking for more information, so as to further educate myself.

I believe that in the absence of any sources, this claim may suffer from the Argument to the People [1] and Argument from Authority [2] logical fallacies.

Full disclosure: I have a MacBook and an iPhone. I enjoy Apple products and respect their business model. And in the absence of any evidence to the contrary, I'm inclined to believe that Apple wasn't involved. I just want to learn more about the subject from people who know more than me.

[1] https://en.wikipedia.org/wiki/Argumentum_ad_populum

[2] https://en.wikipedia.org/wiki/Argument_from_authority


You seem to assume that “apple planted this backdoor” is a realistic enough claim that someone credible would be willing to waste their time on it.

Well, it isn’t.


Could be some rogue employee, could be a legitimate employee under constraints / influence of an organization, etc. The NSA tried to do it with Linux quite a few times.



Is that Tim Cook and Saudi Arabia's MBS?


Isn't the UAE muslim? Why Christmas? or is it because UAE dollars bought Christian mercenaries? We are still in the middle ages and the crusades are still raging. Time to get rid of money - or at least build walls that prevent money from crossing borders as well as criminals. We are not worthy of the technological advances that a few geniuses have bestowed on us.


That quote is attributed to a former NSA employee. Something being "like Christmas" is a popular American phrase that doesn't indicate one's religion.


Christmas has very little to do with religion, it's a cultural holiday across the world.


"Christmas is an annual festival, commemorating the birth of Jesus Christ, observed primarily on December 25 as a religious and cultural celebration among billions of people around the world." https://en.wikipedia.org/wiki/Christmas


This may be a disturbing thought to some, but a great many people don't give a damn about any supposed virgin birth. To them, it's about getting time off from work to visit family, various pagan-derived decorations around the home (https://en.wikipedia.org/wiki/Yule), seasonal desserts, and of course consumerism/consumption (particularly the giving of gifts, which is what "like Christmas" in this instance refers to.)


There's an awful lot of stuff going on in that comment :p Didn't really understand what the focus was.


"Totalitarian regime" I stop reading whenever I hear doublespeak. You need to find a new way to address countries that aren't western democracies. Especially since the rest of the world is catching up, you won't hold this monopoly on labels for much longer.


Connotations aside, totalitarian is an accurate description of the majority of the UAE. Most of its citizens live in constituent Emirates which are absolute monarchies. Dubai is a notable exception as a constitutional monarchy, although the relative strength of the monarch is not exactly settled even there.


Those countries being relatively more powerful doesn't directly make their governments less unethical. Giving lip service to their propaganda doesn't help either.

Also, if you're talking about the mid east and not, say, China, the future doesn't really look bright.


I disagree with the conclusion of absolute security — it won’t happen, and only encourages subversion by people who both need and have a right to access the content.

Instead of pontificating, the tech industry should innovate.

There’s no reason that hashchains can’t be used to timelock the key, with the enclave exporting it in response to a signed request. Then we can at least force the compromises through the legal system and require effort to reverse the hashchain. That kind of court-authorized targeted access removes the incentive (and justification) for other actors to more deeply compromise the system. In turn, this lets us provide more security, in practice.

What’s not going to sell, and what the tech industry needs to get over is “lulz, it’ll be impossible to intercept military or terrorist information because I need absolute privacy for my saucy emails”. I think it’s been empirically demonstrated that won’t happen.

Be part of the solution.


This sounds like a first order objection to a second order concern.

In particular:

> What’s not going to sell, and what the tech industry needs to get over is “lulz, it’ll be impossible to intercept military or terrorist information because I need absolute privacy for my saucy emails”

Seems to be an ironic mischaracterisation of the parent’s point, which was precisely that one country’s terrorist is another’s gay rights activist or high-ranking foreign official.

From the article:

In 2017, for instance, the operatives used Karma to hack an iPhone used by Qatar’s Emir Sheikh Tamim bin Hamad al-Thani, as well as the devices of Turkey’s former Deputy Prime Minister Mehmet Şimşek, and Oman’s head of foreign affairs, Yusuf bin Alawi bin Abdullah. It isn’t clear what material was taken from their devices.

“Saucy e-mails” is a bit tone deaf :(


> Seems to be an ironic mischaracterisation of the parent’s point, which was precisely that one country’s terrorist is another’s gay rights activist or high-ranking foreign official.

My point was that issues like this should be mediated by courts and existing legal systems, not the unilateral decision of technologists.

And that society is going to insist that be the case, hence the most effective way to protect those persecuted minorities is via cooperation and steering how that process happens — not fighting a losing battle.

Finally, that the way to increase the effective security is stop fighting ideological battles on the issue, and find a politically workable compromise which still prevents remote exploitation — the main danger of encryption bypasses.


Which courts? Which legal systems? Legal systems and courts of nations who believe political speech is a crime and that alternative lifestyles are capital offenses?


The ones who are capable of compelling the phone company to obey over political pressure from other sources.

This is a strict improvement over the current situation, where the answer is “anyone who has money”.


Are you forgetting that time that the spy agency collected call records on millions of Americans through a secret court? [1]

As the other commenter points out, this only adds an attack vector and does not do anything to eliminate any.

The same incentives exist on all sides to find exploits regardless of an additional “legal” channel to crack the encryption. Particularly because your political enemies use the same devices and you can’t get a court order to tap their phones (usually).

[1] - https://www.google.com/amp/s/amp.theguardian.com/world/2013/...



Providing deliberate backdoors to the legal system does not preclude the discovery of other exploits, and the sale of those exploits to whoever has money.


It does not, but it changes the set of people looking to buy them and the way in which they’re used, both of which impact the general market for vulnerability sales — and additionally, re-aligns some present attackers to defenders.

Security researchers, strangely enough, seem to care who they sell to. If the NSA stopped buying and only the UAE was interested, I expect we’d see some firms move to other business models or targets for “research”.


> "It does not, but it changes the set of people looking to buy them and the way in which they’re used, both of which impact the general market for vulnerability sales"

There is no shortage of oppressive regimes with incredible amounts of money at their disposal. People who are in it for the money don't honestly give a shit who pays them. You cannot eliminate the market for this stuff. The only option is to create better software.


> not the unilateral decision of technologists.

It should be, and should the judge disagree, show the honourable justice who is the boss in the internets.


Wouldn't the police have a "right" to know if a person has any weapons? Detaining everyone and performing a full cavity search for any and all infractions is just the police exercising their "right" to such information. Will you be the first to bend over and spread for the cops "right" to peace of mind?

"That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved."


> "That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved."

I wonder if that maxim is still generally approved. It seems like some authoritarians would prefer that 100 innocents would suffer than one guilty person should escape.

I suppose it depends how you define "innocent" and "suffer". Under modern law, everyone is guilty of something. And while we might not require suffering in prison, a little suffering of expensive legal fees, invasions of privacy of your digital data/at the border/in the airport, or searches and seizures of property by police in your car are commonplace.


I think, even in places that believed the maxim, 9/11 changed the calculation. Now the question is: How many innocent people should be jailed in order that 3000 innocent people not be killed?

Mind you, I'm not saying it's right. I'm just saying that this is how the authorities are thinking.


Or how many billions of life-hours should we steal, a minute or an hour at a time, to save X lives?


That maxim seems to ignore the possibility that those 100 free guilty people could do more damage to that one innocent person than a prison sentence can.


>Under modern law, everyone is guilty of something.

This is a really dangerous (and unfortunately oft-repeated) mindset that basically boils down to advocating the position of "current laws are so complex that if we give law enforcement better tools [to combat law breaking], we'd implicate ourselves also".

This ignores two huge problems:

1. No, not everyone is guilty of something. I doubt a majority are guilty of anything, and I'd be surprised to hear even like 10-15% are guilty of anything.

2. The people that _are_ guilty of some minor infraction (e.g. jaywalking, speeding, or pick your own obscure state law) are still guilty of much less than the major crimes these kinds of systems are targeted at (terrorism, violent crime, robbery, etc). Comparing the two is like saying "we shouldn't put cameras at bank entrances because what if it catches someone jaywalking outside?"

On a more hard-line note, the people that are guilty of even the smallest things are still guilty of those things. People speed because they think they won't get caught, but it's still against the law. People jaywalk because they think they'll be safe crossing the street, but it's still illegal. People litter because they think it's not a big deal, but it's still illegal. Who gets to decide what people should get away with? Why would anyone be able to get away with breaking any law? (To wit, the obvious response is that some laws are obscure and outdated and that people aren't even aware of them, but feel free to refer back to #2 above until we've fixed those laws. Perhaps actually enforcing them is the nudge we need to push lawmakers into saying, "You know those crazy laws that are still in the books that we've ignored for a hundred years? Yeah, maybe we should get rid of those.")

>That it is better 100 guilty Persons should escape than that one innocent Person should suffer

This maxim encourages a false dichotomy that is embraced by the "we're all guilty of something" mindset, and forgets that "suffering" is a spectrum.

The people who are guilty of gotcha laws _aren't_ innocent, and _should_ suffer proportionately to the law they broke (see: fines, warnings, etc that you'd expect for silly laws that people think shouldn't be enforced, yet are still laws). I'd much rather see that "innocent" person get their $50 fine so 100 people guilty of violent crime don't escape.

"It is better 100 guilty [murderers] should escape than that one innocent [jaywalker] should suffer." Sounds silly, doesn't it?


Totalitarianism and police states don't start by jailing the jaywalkers, but they do start by implementing the laws disproportionately against marginalized communities within the general population. This is generally overlooked by the remainder because they think, "Well, it's not affecting me."

The problem with this line of thinking, "they broke the law, they should be punished, whatever it was", is that laws and morality, while sometimes intended to be aligned, are not.

Take, for example, "disobeying a police officer". On the surface of it, no one would argue that that is a problem, thinking, "of course I'll follow a police officer's instructions". However, the system has evolved to the point where someone who initially committed no crime, stopped by the police under suspicion, can end up dead or incarcerated due to a sadly more and more common sequence of escalations.

Asserting that the law is somehow perfect to the extent that all illegal behavior should be punished is also poorly framed, because not everything is illegal everywhere, and what is legal and illegal is not universally known or clear. For example, in some municipalities the laws themselves are under private copyright: a fee must be paid, and they are accessible only in person, not remotely (this was news a while back, not sure if there's an HN story about it).

So, I feel this viewpoint is missing some fundamental realities that may drastically change the underlying assumptions.


>"That it is better 100 guilty Persons should escape than that one innocent Person should suffer" is a Maxim that has been long and generally approved.

It is neither fundamentally true nor universally accepted. Where does the scale tip? All criminal justice systems attempt to minimise the risk - but on some level it is simply the cost of doing business. For all that it is an admirable sentiment, it is limited.


Uh, my proposed solution required getting a warrant to order a company to produce a device-specific message, which in turn could be provided via cable to the physical device (which they must also have access to), followed by spending time reversing a hash chain.

So yes, the police already have the power to search you for weapons if they have a warrant, and this is bringing the ability to search phones into line with that.


C'mon everyone, be part of the solution like this person says. The intelligence agencies will never abuse their power. You can trust them. /s


That’s a deep straw man of what I said, to the point of being non-constructive mockery. You’re just being dishonest to claim I suggested trusting the spy agencies.

Rather, I pointed out that they have a real mission, and they’re going to spend effort accomplishing it. But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year. So, if we create a mechanism by which they can do that without owning every device, we can align our goal of protecting most devices with theirs of owning a few.

This in turn increases security for nearly everyone, because powerful agencies no longer have the same motivation to cause harm — and might be persuaded to help. After all, it’s in their interest to prevent large remote compromises — just not a higher priority than maintaining their own access.

Further, the best way to actually restrain them is through a change in government policy, which will only happen when the government believes there’s an alternative solution.

Perhaps you could try responding to the point?


> But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year.

Things like Room 641A show that the government doesn't even need to engage the courts to collect data on millions of people and further, that they are not limiting their collection efforts to a few hundreds or thousands of devices a year.


> But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year.

This is patently untrue. Snowden and many others have made it absolutely, unambiguously clear that national spy agencies do wish to gather and collect every bit of information possible about their own law-abiding citizens, as well as those who are not.


Could you cite anything supporting that?

I followed the Snowden leaks quite closely, but don’t remember anything close to unambiguously showing that.


Please review the following with explicit citations.

https://en.wikipedia.org/wiki/Global_surveillance_disclosure...


That’s a non-responsive answer, and you know it.


The PRISM program alone shows the NSA's intent to collect as much information as possible. This in conjunction with the Utah Data Center makes it pretty apparent what the goal was.


I think the underlying point is "which intelligence agencies would be allowed to compromise devices like this"?


The ones whose courts have jurisdiction over the relevant company and are willing to issue warrants — that is, those our regular legal and political systems decide on.

For a big multinational like Apple, that’s probably a fair number. But it’s also harder for them to hide that they’re doing that, and it lets us bring political pressure on them for their misdeeds. In the end, it’ll be major powers who can — US, Europe, China, etc.

My point is that it’s never going to be the case that technologists get to unilaterally decide that for all of society.

My proposal is just to bring phones into line with existing warrants: https://news.ycombinator.com/item?id=19036408

But by doing so, technologists have the political cover to push back on spy agency excesses and abuses.


So you support China being able to hack any phone globally with a warrant in a Chinese court? Isn't that literally what Huawei are accused of facilitating?


No, I support China being able to spend an appreciable amount of time to crack each phone they have physical possession of, following a court order. I never suggested the ability for remote compromise (what Huawei is accused of), and my exact point is that we can create a cost to cracking each phone — in hashing power and time spent — if we compromise on the topic. No such cost exists now, because they achieve access via other means.

The combination of requiring physical possession and appreciable hashing time per crack is a two-layered response to mass-surveillance. That’s the whole basis of the compromise I’m proposing: calling their bluff, and enabling warrant cracks as political cover to shut down mass surveillance and cracking as unnecessary.


The "hashing time per crack" reminds me of the old 48-bit encryption which was mandated to have short, crackable keys. Or even the previous attempt at doing this with the "LEAF". Are you proposing some sort of work function within the crypto enclave? ie you leave the device brute-forcing for a few days and it spits out the key?

The "appreciable time" is going to be subject to constant downward pressure, both politically and technologically.

> political cover to shut down mass surveillance and compromise as unnecessary

Mass surveillance is not going to go away without huge cultural reform of the security services. They don't take concessions.


I was proposing that each phone be loaded with a unique “export key” while being built. For about 1 dollar of GPU time, you can force an attacker to take a month of continuous hashing on similarly performing hardware. (Okay, people with fancy ASICs could get that down to a week... but that still seems like a big win.)
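
Back-of-the-envelope, that claim works out like this (a minimal sketch with illustrative numbers, using the parent's own assumption that the attacker hashes at roughly the rate of the creator's GPU):

    creator_hours   = 1     # one GPU-hour spent building chains, roughly a dollar
    parallel_chains = 1000  # chains computed simultaneously on that GPU
    attacker_hours  = creator_hours * parallel_chains  # replay is strictly serial
    print(attacker_hours / 24)  # ~41.7 days of continuous hashing, i.e. "a month"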

More details here: https://news.ycombinator.com/item?id=19040252

I agree that it would be a consistent political battle — but it’s already that, and it’s clear people with power are getting fed up with technologists attempting to impose their ideology without compromising with other social needs.

That’s what prompts laws about wiretapping or mandated backdoors.

I haven’t heard the same arguments about safes that I do about encryption — and the reason is that there’s an understood bypass if they gain access and have time and money.

By compromising and allowing targeted cracking, we split the faction pushing for backdoored phones, solve most of the issues, and give ourselves a viable path to accomplish something rather than being forced into backing down or completely compromising systems. Further, by being willing to compromise, we gain a voice on shaping how that discussion looks — rather than largely being excluded.


>Mass surveillance is not going to go away without huge cultural reform of the security services. They don't take concessions.

FakeComments was mostly talking about targeted surveillance, but I agree with him/her in spirit, since I believe that mass surveillance is not going to go away. Ever. So you can either yell futilely into the wind as it happens over your objections, up to, including, and perhaps going beyond a swarm of camera-bearing networked nanodrones coating the planet, or you can try to nudge it towards happening on slightly preferable terms.


It's not going to happen on "slightly preferable terms". Either those with political power in society want surveillance, or they don't. If they do, they'll take all that they can get, and the only thing cooperating with them will do is make it all happen faster - the moment you provide a "compromise" surveillance scheme to them on a silver platter is the moment when they'll start devising how to get around the remaining limits.


So you simultaneously think the other side is so powerful that you cannot even compromise with them without being pushed back further, but also so weak that you believe you can achieve a total victory without conceding any points? How do you reconcile that? Or do you just resign yourself to fighting for a purer goal, since you know you will lose without achieving it regardless?


Total victory, no. Total victory would be strong encryption free to use by everybody without fear of persecution. That looks increasingly unlikely.

However, I do not believe they can effectively enforce any encryption bans. Thus, people who need encryption will still have access to it. And as far as I am concerned, my duty (as a software engineer) is to ensure that it remains the case, even if using it becomes illegal.


That attitude sounds like a myopic focus on the software at the expense of everything else.

Say that in the year 2160 you have perfect, unbreakable encryption on your pocket computer. How will you use it? With a touchscreen or keyboard, allowing microscopic cameras to see you input it or read the thermal signatures off your input device afterwards? With your face or voice that are continuously being recorded from hundreds if not thousands of angles? Plugging in the future equivalent of a yubikey that someone can just steal from you? You're lucky if fMRIs don't become good enough to just pluck the information out of your brain as you think it. Of course, the master key is most important but all of these concerns apply to the data being protected as well.

The real thing that can never be effectively enforced is privacy. People who need encryption can have access to it or not. It matters not one whit. Our duty (as people) is to push society in a direction where this change feels less catastrophic, not to fight a Caligulan war against the sea.


> With a touchscreen or keyboard, allowing microscopic cameras to see you input it or read the thermal signatures off your input device afterwards? With your face or voice that are continuously being recorded from hundreds if not thousands of angles? Plugging in the future equivalent of a yubikey that someone can just steal from you? You're lucky if fMRIs don't become good enough to just pluck the information out of your brain as you think it.

You're basically describing a totalitarian Panopticon. A society like that should be fought by all means available, including physical force, so the question of legality of encryption is somewhat moot at that point.


>"It is the common fate of the indolent to see their rights become a prey to the active. The condition upon which God hath given liberty to man is eternal vigilance; which condition if he break, servitude is at once the consequence of his crime and the punishment of his guilt." – John Philpot Curran: Speech upon the Right of Election for Lord Mayor of Dublin, 1790. (Speeches. Dublin, 1808.) as quoted in Bartlett's Familiar Quotations

[0] https://en.wikipedia.org/wiki/John_Philpot_Curran#Quotations


Anyone can find a quote that says anything.

>"If you want a picture of the future, imagine a boot stamping on a human face — forever." - George Orwell: 1984, 1949.

Welp, guess that's it for freedom. A person from the past wrote something. No more for us to do here.


> my exact point is that we can create a cost to cracking each phone — in hashing power and time spent

This is not at all how encryption or security works


It’s exactly how it works:

You create a hash chain, use the final result as the encryption key for your secret (in this case, the key for the data), then store only the start of the chain and the encrypted secret.

The only way to retrieve the secret is to recompute all the hashes, from the start, to recreate the key and decrypt the data.

So it’s secure unless you believe there’s a weakness in the underlying encryption or hash function.

Further, you can parallelize creation by encrypting the start of each chain with the result of another — giving a significant advantage to the chain creator: they can compute 1000 chains in parallel, but unlocking requires decrypting them sequentially. At that ratio, if you want decryption to take a month of steady hashing, you need only do a little under 1 hour of hashing yourself. 1 hour of 1 GPU is about a dollar of expense, and has more than 1000 parallel tracks.

My suggestion would be that Apple create a chain for each phone and then load the phone with that phone-specific wrapping key, which the phone uses to return the actual encryption key in wrapped form. Decrypting that key requires getting the necessary chain information from Apple, plus a signed request, without which the SE won’t emit the encrypted key at all.
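
To make that concrete, here is a minimal sketch of the wrap/unwrap flow in Python. Everything in it is illustrative: the chain length, the choice of SHA-256, and the hash-derived keystream (a stand-in for a real cipher such as AES-GCM) are my assumptions, not a description of any actual enclave.

    import hashlib, secrets

    def walk_chain(seed: bytes, n: int) -> bytes:
        # n sequential SHA-256 applications; each step depends on the last,
        # so this work cannot be parallelized. That is the time lock.
        h = seed
        for _ in range(n):
            h = hashlib.sha256(h).digest()
        return h

    def xor_keystream(key: bytes, data: bytes) -> bytes:
        # Expand the key into a keystream and XOR it over the data.
        stream = bytearray()
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    # Factory: fresh seed per phone, walk the chain once, wrap the data key.
    N = 50_000_000                    # tune N to set the unlock time
    seed = secrets.token_bytes(32)
    wrapping_key = walk_chain(seed, N)
    wrapped = xor_keystream(wrapping_key, b"per-device data key")

    # A warranted unlock gets (seed, N) from the vendor plus the wrapped blob
    # the SE emits for a signed request; recovery then requires all N hashes.
    assert xor_keystream(walk_chain(seed, N), wrapped) == b"per-device data key"

The creator's advantage comes from doing the walk once at build time (or batching many chains in parallel, as described upthread), while every later unlock must replay the chain sequentially.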


Yeah uhh hey I appreciate your enthusiasm but this literally is not how modern public key encryption works, or will ever be changed to work. All ciphers in general use don't have any sort of a realistic time-bound to being cracked or computed


Yeah.

I appreciate you trying to correct me, but I never was saying that this was an instance of “public key encryption”, whichever version you mean.

This is a scheme by which you can intentionally create a key that can be re-generated in a fixed amount of time, and use it as part of normal symmetric encryption to protect a secret. One usage is creating intentionally crackable schemes, such as protecting other signing keys in a way you can later crack if you need to. This allows a device, such as a phone, to emit a masked secret with cryptographic guarantees that it still takes time to recover.

Hashchains for timelocking are a studied mechanism; though they predate cryptocurrencies, they’re deployed in several kinds of applications there. A second usage is storing paper copies of master signing keys in a safe, since the key cannot be exposed, in the event of a robbery, before a certain period of time — giving you time to rekey your system. (Generally, people use multipart keys instead, because they’re less cumbersome to recover; however, if you only have one secure location, multipart keys don’t help. Hash chains still do.)

So it’s literally how (part of) modern cryptography works.


My suggestion is to make phones secure by default, and if you need to track someone then use a network of cameras in public areas.


Better yet, scrap the network of cameras and employ a load of uniformed, unarmed police officers wearing body cams. (The cameras should default to deleting footage after a few hours, unless the officer presses a button to save it, and all arrests not recorded due to "missing" footage are deemed invalid).

This helps communities to feel the reassurance of a trusted police presence, creates local jobs, and provides a decentralised alternative to having a single network controlled by a few hidden, unaccountable individuals. Putting a human conscience behind every single camera seems like a good way to prevent tyranny and encourage whistle-blowers.


Yep, just shoulder surf the passcode the user enters on the train, or read their fingerprints/retinal patterns off their body and print it out onto your key material.


If I want to have a private conversation where the details of what is said are undiscoverable, I believe that's a right that people should have in a "free country", including over the internet, over the phone, and so on. The fact is, I can, using some combination of math and secrets, accomplish this.

I don't think anyone on earth has the right to collect/record/see the contents of my communications other than me and the other participants, until there's reasonable suspicion of a crime.

Covert dragnet snooping is an evil means to any end, and it damages the moral standing of the society that does it.


The trick here is that the authorities will (somewhat reasonably) advertise their need for these tools under the "reasonable suspicion" label: they want the backdoor or whatever it is they need to conduct their surveillance once they do have reasonable suspicion.


> What’s not going to sell, and what the tech industry needs to get over is “lulz, it’ll be impossible to intercept military or terrorist information because I need absolute privacy for my saucy emails”. I think it’s been empirically demonstrated that won’t happen.

It's very much the other way. Strong encryption algorithms have been available to the public for a long time now. You can ban using them, but the only way to effectively enforce that ban would be for the government to require that all devices capable of running code from external sources run only code that's signed by that government.

Without that, you can ban all you want, but terrorists and others who need that stuff will have it anyway. So the only effect would indeed be no privacy for saucy emails. Of course, intelligence agencies would love that, since it would allow them to have a society-wide dragnet.


I disagree with your analysis — the way most people receive encryption, including criminals and terrorists, is through a provider. Regulating their behavior does change the general trend in security. Further, forcing them to implement their own encryption increases the likelihood they make a mistake while also refocusing the NSA et al to those algorithms instead.

What we’ve seen is governments subverting encryption and systems repeatedly, in ways they wouldn’t if they had other methods.

I’m not trying to accomplish some absolute ideological position, I’m trying to shift the state of affairs to realign incentives for several players. If some people write their own encryption, or the technologists use GPG everywhere, whatever.

> allow them to have a society-wide dragnet

I don’t think you even read my proposal: the mechanism I proposed makes that impossible, which is in contrast to the current state of affairs, where they subvert the security of the entire system instead of targeted people. Allowing for targeted cracking at a certain level of expense and requiring physical possession of the device in no way enables mass dragnets, and in fact, removes their legal cover by providing alternative means.

I’m not saying people can’t invent their own security — just that factory-made safes need to not be “unbreakable”, because that just incentivizes bad behavior when someone discovers a flaw, and/or subverting the integrity of the factory.


> I’m not trying to accomplish some absolute ideological position

You did.

Any attempt at right of privacy must be mercilessly crushed with maximum force


That’s clearly not how the US (or anywhere) operates: the constitution itself negotiates terms between privacy and security — privacy is not and never has been an absolute right.

Further, I’m actually trying to increase privacy, by negotiating a compromise that’s workable for society as a way to remove the excuses bad actors are using, and shift the legal framework around the topic. That’s not an absolute ideological position, by any definition.

By contrast, you do adopt such an absolutist position — which isn’t grounded in law, and fails to provide for other societal needs. Such stances lead to failure, because of their absolutism. Your stance is why Australia passed an internet wiretapping law, not mine — because you refused to acknowledge a societal need until they employed force.

If your approach worked, we wouldn’t have the state of things we do now.


> Further, I’m actually trying to increase privacy, by negotiating a compromise that’s workable for society as a way to remove the excuses bad actors are using, and shift the legal framework around the topic. That’s not an absolute ideological position, by any definition.

I appreciate the motivation but it's very naive.

Sure, you give them access after 1 month. Next they'll say 1 month is too long, they need to be able to do it within hours to be able to catch criminals before they run to another city. Then it'll be minutes so they can stop crimes while they're happening. Then it'll be real-time on everyone so they can use machine learning to predict crimes minutes before they happen.

> Your stance is why Australia passed an internet wiretapping law, not mine — because you refused to acknowledge a societal need until they employed force.

> If your approach worked, we wouldn’t have the state of things we do now.

They would still have done it, and more. You think police and intelligence agencies will one day just say "yeah, that's enough, we're good"? No, they'll want anything that makes their job easier and gives them more power, always.


The objections to a golden key are irrespective of whatever system permits the golden key access. Your hashchains are completely irrelevant. The courts will use some key to sign their request, and that key will leak.

Cryptography reduces message security to key security, nothing more.


That’s why it’s a two stage system: leaking the key only permits forged requests, but an attacker still has to spend the time reversing the hash chain. For each and every phone they want to crack.

Apple has also done a reasonable job of holding onto their signing keys, to date.


> Apple has also done a reasonable job of holding onto their signing keys, to date.

How do you know that?


Because whoever has stolen them has been reasonably discreet, and we don’t see compromised Apple things all over the place — which means if they’ve been stolen at all, it’s been by high class attackers.

If your goal is to stop the NSA et al stealing a key or owning a device, you’ll be sad.

But if your goal is to change the law and redefine the parameters of them owning devices, you might make progress. The political process will insist on a means to access these devices, and they’ll accomplish it by one means or another. By engaging with instead of fighting that, we gain the ability to have a say on what those means are.


FakeComments, can you tell us who you are?


If the US government gives itself the right to install backdoors / exploit vulnerable software (as opposed to notifying companies about vulnerabilities) then I feel pretty uncomfortable about ex-government hackers just becoming freelance mercenaries using knowledge they may have gleaned from those ops once they move onto their next gig.

I can't think of a great solution to this problem.


> I can't think of a great solution to this problem.

There's really only one "final solution" to the problem in the purely technical realm: to make provable security (in the theorem-proving sense) a non-negotiable requirement for all digital logic (both hardware and software) running on networked devices. I don't know if there's even a workable definition that would rigorously describe the goal of such an effort.

... But I believe that if provable security was important enough to everyone (just like "winning the war" in the 1940s or "getting to the moon" in the 1960's), we might possibly achieve it -- at least below the OS syscall level in a few major OSs and in several important userland libraries.

However, that ignores the human element of security, which can't ever be completely solved via mere human effort. People will always be vulnerable to social engineering, for example.


I think your solution needs to extend to the hardware components on the board.

High-security MCUs go to great lengths to defeat side-channel attacks on the package (some really neat stuff too, like failing if exposed to die shaving).

There are secure bus initiatives but they don't extend to the BOM (bill of materials) for all the components.

On top of that, GUI techniques for obscuring physical input (keyboards, UI touches) are needed.

Given Apple's posturing and patch release cadence, I think/feel they are on the side of privacy. Android too. We're on the right track, I wonder if eventually tech will win the arms race for exploits like this? (The rubber hose exploit will always work...)


> I think your solution needs to extend to the hardware components on the board.

It does. I said all digital logic, which includes all the ICs, FPGAs, and silicon.


"final solution" is generally a poor phrase to use: https://en.wikipedia.org/wiki/Final_Solution


Oh crap, I'm sorry. No reference to that or anything like it was intended!


So, "provably secure" is a catch-22.

If something can be created to be provably secure, then it can be an argument for government legislating a back door.

"You said it's provably secure. Now you can give us provably secure access too without hurting your customer's privacy or security, because they're protected by the 4th amendment."

I don't think this can be solved by technology, I think this comes down to politics of freedom, if you get right down to it. And it looks like you're going to have to have that fight anyway.


The goal of provably secure computing would be merely to (hopefully) extend the mathematical certainties of cryptography to computers and software. The politics of cryptography wouldn’t change; they would only be broadened. Intentional back doors would still be possible, and the ramifications of building them would be just as dire.

So the best provable security could do would be to eliminate security holes like buffer overflow/etc. Trust issues (and even side-channel attacks) would still be present as always.


Then you're probably using the wrong language.


Well I did say "in the theorem-proving sense", meaning that the code undergoes formal verification. There are programming languages for which each function is a theorem that is proved at compile time. That's what I meant.

There are some low-level libraries that have already been partially converted to theorem-proved functions for the sake of security.


You're slicing the argument thinner than what a politician would see.


> I can't think of a great solution to this problem.

The mentioned government agencies have the "NOBUS" belief: that the concept of "NObody But US" (having access to the "keys to the secrets") works.

This article is just one of many good examples that it doesn't.

What could work are systems which are secure without any exceptions. That is hard to achieve when enough powerful influences (most often directly or indirectly tax funded, even if not explicitly government organizations) do all they can to keep it from happening. It's then easier than it appears to achieve the goal of nobody having access to a really secure system.

An example:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

"In September 2013, The New York Times reported that internal NSA memos leaked by Edward Snowden indicated that the NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard,[7] and concluded that the Dual_EC_DRBG standard did indeed contain a backdoor for the NSA.[8] As response, NIST stated that "NIST would not deliberately weaken a cryptographic standard."[9] According to the New York Times story, the NSA spends $250 million per year to insert backdoors in software and hardware as part of the Bullrun program.[10]"


You can make it illegal for ex-NSA employees to use their knowledge of exploits learned while on the NSA payroll. It may well already be the case for all I know.


I hope with all my heart this is treated as the treason it is and not a "plausibly deniable" part of this recent policy of sucking up to brutal Arabian dictatorships regardless of atrocity.


how about we make it illegal to hack into systems unauthorized? oh wait...


Quitting your job doesn't let you expose classified secrets, no.


And how do you enforce it?

Hmm. Wait. Was that sarcasm?


Perfect. Then, just hope people follow rules.


Sometimes you have to disincentivize behavior with prison time and things like that and then hope people don't do it. Trying to prevent some crimes ahead of time is a recipe for dystopia.


In this case, "trying to prevent some crimes [of government employees leaking the golden key]" possibly means "don't make a golden key that lets governments freely hack everyone", which is generally being regarded in this thread as the non-dystopian result.


I can't imagine that it is not.


No ethical one at least.


Sounds like maybe they should get a warrant and get legitimate access on an individual basis rather than being allowed to hack everything. You don’t need to hack me if I let you in, and it should be just as illegal as it is for them to poke around in my house without a warrant.


The US is Dr. Frankenstein, except they didn't learn their lesson from the first monster they unwittingly released into the world and continue to pump them out.


we could elect sane leaders...


And that's a novel idea when they are on the campaign trail... until they start getting daily national security briefings and learn about the attempted attacks supposedly foiled by good SIGINT. No one wants to be the president who turns that firehose off and "causes the next 9/11". I believe that is what happened with Obama.


It's more naive than novel, because it assumes that everyone who came before were acting in bad faith.


Given the bumbling numbskulls who still manage to set off bombs, are we really that sure they're stopping anyone?


Yeah, we're pretty sure.


Like Obama? I remember how the NSA was shut down entirely during his tenure, man that was great.


I also remember his campaign promise to rein in 'Bush era spying'; over the next 8 years the knob was turned to 11.

I don't expect much from a person who won a Nobel Peace Prize and then proceeded to drop 26,000 bombs in 2016: a bomb every 20 minutes.

https://www.theguardian.com/commentisfree/2017/jan/09/americ...


Or how he shut down the drone gaming unit, and his constant efforts towards peace in the Middle East by refusing to destabilize countries, etc. Great man.


As a matter of interest, do you believe that the sane option is to shut down the NSA entirely?


Where is this love for NSA coming from?


physicist for president! elect Lisa Randall, or Sean Carroll.

provided, of course, that they agree.


Merkel is a physicist. She's great.

But then again many physicists were also convinced Nazi officers.


> But then again many physicists were also convinced Nazi officers.

If the Germans had won the war, we'd probably celebrate those officers :/ All the torture and killing would be spun as "necessary evil" (if it even came to light), and further investigations would be blocked by the government. How we perceive the past is... complicated.


you mean Heisenberg?


The name that springs to mind is von Braun.



That's a pretty outrageous claim. Do you have any evidence this is possible?


leaders are well insulated from such knowledge for their (legal) safety.


The president is immune from legal liability.


I was including all leaders. The president is just one of them. None of them have nearly the level of knowledge we give them credit for.


I realize it's a really sexy headline, but I'd like for there to be more than 0 proof that this is a real thing. Especially if they claim a vulnerability that's exploitable by only sending a text.


This article doesn't cite sources, but the other one cites Lori Stroud, a former developer of the application.

https://www.reuters.com/investigates/special-report/usa-spyi...


Lori did not develop the program. She was solely an intelligence analyst.


A much more technical description of the program. Thank you.


There've been similar issues in both iOS and Android before - iOS had one recently where a text would cause repeated app crashes. Back in 2009 there was a full exploit via SMS on iOS, and just a few years back the Android stagefright exploit was barely spared from turning into a giant worm due to exploit mitigations and the diversity of devices. It's quite possible to see these attacks come to light in a much scarier way. That exploit is solid gold and they probably paid a small fortune for it.


I am 100% sure this is an exploit related to PDU mode SMS messages. Tons of phones of different brands are probably vulnerable to variations of this attack.


I think the assertion that it's on the baseband is correct, for sure


>Especially if they claim a vulnerability that's exploitable by only sending a text.

For some time, it was possible to crash some iPhones by texting them a Taiwanese flag emoji (which was censored by mainland China). https://www.cultofmac.com/561635/apples-taiwanese-flag-ban-l...

I don't know offhand if this was a buffer overflow or something else, but if you can crash the OS with a text, you could likely exploit it instead.


> I don't know offhand if this was a buffer overflow or something else

It was an issue where the device's locale was set incorrectly, causing a lookup to return NULL and leading to a crash in CFStringCompare.


Many thanks!


The description of the hack fits Stagefright[1] perfectly; that's exactly what it did. The sources may have just changed the affected platform to iOS to gain some traction.

[1] https://en.wikipedia.org/wiki/Stagefright_(bug)


A stagefright style iOS bug was reported around the time this took place

https://9to5mac.com/2016/07/22/stagefright-mac-iphone-ipad/


I didn't know that. Thanks for sharing!


I would imagine details of such an exploit are worth more than a million, so I doubt people would be eager to share.


There have been similar vulnerabilities in iOS before, such as the crashing bug that could be exploited by sending a single malformed ligature/combined character in some incredibly obscure Indian script.


There is no reason to doubt this. It wouldn’t be the first time such a vulnerability was found.


I think that the idea of out-of-the-box privacy/security against even a semi-competent adversary on any computer (especially a mobile device) is completely fictitious, and these hack stories play an important role in helping people realize that.

Consider the thousands of people around the world that are involved in making phones in design, hardware, software, manufacturing, signal providers, platform providers, app writers to name a few. Any of them could be malicious actors or accidentally introduce exploitable bugs. The idea that such a complex stack can shield you from very smart and resourceful people that are actively trying to peek though is not reasonable. Everyone, especially people that are "annoying" to powerful entities (corporate or government), should assume that everything they do with their mobile phone is accessible to the people they hope it isn't.


That's a good, cautious stance. That said, this may not be a good example of it.

We don't know which iMessage bug it was, but a big one was patched in iOS 9.3.3, released July 18, 2016. Meanwhile, the article says this exploit got a lot of people in 2016/2017.

So, presumably simply updating software would have protected a lot of the victims in this case.

The higher up in adversary skill level you go, the less this works. But up to a reasonably high level simply having up to date software thwarts most adversaries, no? And conversely, if you have very out of date software, even incompetent adversaries can break in.


>A team of former U.S. government intelligence operatives working for the United Arab Emirates

No non-competes? So when Snowden tells the public about the mere existence of NSA hacks, it is a crime, yet when an intelligence operative brings his detailed technical knowledge sourced from the NSA and the like to a foreign government, that is kosher.


Well yeah, our leaders don't care nearly as much if you go and do shady shit for other leaders as they do about you exposing their own shady shit to the general population. Don't assume the US government is ever going to act in good faith about this kind of stuff.


welcome to reality


Now imagine what's going to be possible in Australia with their compulsory surveillance laws...

Though, I wouldn't be super surprised if they banned people they forced to implement exploits from leaving the country =X


I take a more pessimistic view. We should think that all this is already being done, just not acknowledged.


What do you expect the non-competes to accomplish, exactly? Thanks, California.


No one read the other Reuters article on this - it may have been criminal for NSA employees to participate in this; at the very least it was highly discouraged. If anything this is a good argument to pay IC employees on the GS payscale better so they're less likely to take jobs with other countries.


Large govt. contractors and sub companies run this and all the other programs like this. There is tacit approval for these ops. I'd even imagine there's a bit of intel being fed back to them from the new employees.


"The FBI is now investigating whether Raven’s American staff leaked classified U.S. surveillance techniques and if they illegally targeted American computer networks" [1]

It's still illegal to use US classified information for a program like this and it's still illegal to target American citizens or networks.

[1] https://www.reuters.com/investigates/special-report/usa-spyi...


I didn't say it wasn't; I simply said all of it could be encompassed by the specific "TTPs" clause that is common in anyone's read-on/read-off of classified information. The difficulty is proving what was classified and what wasn't when it comes to techniques and procedures. Under a liberal reading of the term, many of the "security professionals" deploying their skills around the US right now would be in violation. It's why there isn't a simple non-compete / non-authorization of future work within that field; it's just not going to stand up in court.

It's all as clear as mud, and in this instance the government was more than aware of its former employees working there; many returned and went back to their prior careers... think about that one for a second.


Expecting citizens to revolt against their own country and force its "secret services" to stop endangering public security by buying and using such 0-days is one thing - that's unrealistic.

But seriously, I wonder why other governments and their citizens are not demanding drastic actions - trade suspensions, expulsion of diplomats, or other sanctions - when countries get caught spying like this or otherwise trampling all over human rights. This would be a perfect example to take a stand on: the UAE is far smaller in oil trading and political importance than, say, Saudi Arabia.

Or why there seems to be next to zero public funding for providing open source, auditable hardware and software that could prevent such spying in the first place? The European Union could easily fund the development of a truly FOSS Android-based phone, down to the processors. Instead everyone seems to rely on Chinese or American products, which are both subject to non-European influence (in the US via NSLs, in China due to the massive influence of the Party on any major company).


Internet Archive link [0] for people who get "Page Not Found" when trying to visit from Europe.

[0] https://web.archive.org/web/20190130135641/https://www.reute...


>Three former operatives said they understood Karma to rely, at least in part, on a flaw in Apple’s messaging system, iMessage. They said the flaw allowed for the implantation of malware on the phone through iMessage, even if the phone’s owner didn’t use the iMessage program, enabling the hackers to establish a connection with the device. To initiate the compromise, Karma needed only to send the target a text message — the hack then required no action on the part of the recipient.

Has anyone here heard about or is familiar with this malware?


I wonder if this is https://www.theguardian.com/technology/2016/jul/22/stagefrig... again, the article mentions 2016.


Seems like a likely candidate.


This is kind of old news. Flaws like this have been in use forever.

http://news.bbc.co.uk/2/hi/technology/8177755.stm


I can't help but giggle at the thought of Emirati hackers. For whatever reason, I can't wrap my mind around the fact that an extremely religious people can also be at the high end of tech (at the very least, high enough to figure out 0-days and such).

Does anyone have any info on since when this has actually been like this? I'd like to look up how their CS education works and that kind of stuff.


There are plenty of religious people on Hacker News. I have no idea why you would think being religious or spiritual would prevent anyone from developing technology.

My religious views do not stem from a lack of intelligence or education.


Not even talking about intelligence here. Sorry. That's an endless time waster I'm not touching.

As mentioned, for whatever reason I'm having a hard time picturing how people who deem apostasy punishable by death can also manage, research, and exploit modern equipment, and I'm looking for some indication as to when exactly they started getting good at it.


The Society of Jesus, aka the Jesuits, has also conducted technical research in parallel with religious pursuits. I don't see any problem.


It's not a... problem, I'm just having a tough time understanding how that works, and I would love any pointers toward roughly when they started getting good at it.


Overlooked: the apathy required to become techno-mercenaries started with the agents convincing themselves, while working at the NSA, that spying on their own nationals was somehow different from spying on foreign nationals. To resist this apathy, cyber-intelligence agents working for any government or corporation should deploy a moral compass that assumes they are working for the UAE.

This also calls for international conventions. New international conventions would provide a psychological backstop against the infosec industry's unchecked nationalism. When an agent asks themselves "is what I am doing okay?", international convention and law would give them something to compare against other than the militarist default of "yes".


Sorry for the slight meta, but does anyone know why this 323-point, 8-hour-old story is near the top of the front page, while the 6-hour-old, 639-point story about Apple blocking FB's certs is on the 2nd page?


> and former American intelligence operatives working as contractors for the UAE’s intelligence services

At what point does this come to be considered treason?


Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.


You have an agency like the NSA, with scores of genius mathematicians and hackers and a black budget. There's nothing beyond their means, spying-wise.

I suspect that one day our internal thoughts and feelings will be under constant mass surveillance, Minority Report style, but it won't look like sci-fi when it happens.


Am I the only one who feels like every time we get news of a government compromising an iPhone through some mystical exploit, the technology around it seems very fanciful?


What's so fanciful about it? We know that remote exploits exist. We know that cellphones have very complex baseband processors, which used to have arbitrary memory access; while they've reportedly been locked down, it's not like there hasn't been an arms race finding ways around every other security measure. We don't need any technological breakthroughs to posit that past remote exploits were not the only ones possible.

This is the problem with things like this or the Bloomberg server story: the capabilities are plausible, but there's not enough information to know whether or not they're actually true, so you're left having to guess about whether someone actually could implement that attack and whether they'd choose to spend that much money.


Like an exploit where all you need to do is enter the target's phone number to compromise their phone?


TFA says they need to send the target a text message.

The exploit must be something like a buffer overflow in iMessage, and we know bugs like this have existed and been fixed before. Remember the "text of death" that could crash any iPhone, from a couple of years ago?
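Purely as an illustration of the bug class being speculated about here (this is obviously not the actual exploit, which has never been published), the canonical shape of a message-parser buffer overflow looks something like this in C - the wire format, names, and sizes are all invented:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical wire format: [1-byte length][payload bytes]. */
  static void parse_message(const uint8_t *msg, size_t msg_len) {
      char body[64];
      uint8_t claimed_len = msg[0];   /* attacker-controlled */
      (void)msg_len;                  /* a correct parser would check this too */

      /* BUG: trusts claimed_len and never compares it to sizeof(body).
         A message claiming more than 64 bytes smashes the stack; with a
         carefully built payload, that crash becomes code execution. */
      memcpy(body, msg + 1, claimed_len);
      printf("parsed %u payload bytes\n", (unsigned)claimed_len);
  }

  int main(void) {
      /* Benign message: claims 5 bytes, well within the 64-byte buffer. */
      const uint8_t ok[] = { 5, 'h', 'e', 'l', 'l', 'o' };
      parse_message(ok, sizeof ok);

      /* The fix is a single bounds check before the copy:
         if (claimed_len >= sizeof(body) ||
             (size_t)claimed_len + 1 > msg_len) reject the message. */
      return 0;
  }

The whole class hinges on one missing comparison, which is why bugs like this keep reappearing in large codebases.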


Are you thinking of the "Stagefright" bug that did RCE via SMS on Android devices? Or maybe the Chinese censorship code that crashed iOS when people sent the Taiwanese flag emoji? 🇹🇼


Don't forget the Stagefright-like bug in iOS where a malformed TIFF file could lead to remote code execution!


"Bugs like this have been fixed" != "all bugs like this have been fixed". That is, some similar bugs having been fixed does not make such an exploit impossible.


Wasn't that in May? Time flies.


I think it's more that they manage to find these exploits and rapidly build infrastructure around them to make them useful.


They don't need to be very fast... the NSA is known to sit on vulnerabilities for years.


Bear in mind that, at one time, iOS devices could be jailbroken [to run arbitrary code] simply by opening a specially-crafted PDF: https://www.wired.com/2011/07/jailbreakme-3-0-unlock-your-ip...


Not just arbitrary code, but arbitrary code with kernel privileges!


Yes. It's likely that the reporters don't understand the technology involved, so their reporting on that topic is pretty vague. Humans tend to look where there's light, even if they know that's not the most likely place to find what they're after. That said, we have the words exploit, iMessage, full access, email/phone number. We can piece together that they use a buffer-overrun-style exploit to break out of iMessage, and possibly several other exploits, until they have remote code execution on the device, and then they install a bot/server to collect data. Really difficult in practice, but a familiar pattern. IIRC there was a Wired article recently reviewing similar tech from another company; the author says he set his phone down in their office, and minutes later they had access to all his data.
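To make the "buffer overrun, then remote code execution" step concrete, here's a deliberately contrived C sketch (nothing to do with the real Karma chain, which isn't public; the struct layout and names are invented) showing why bytes an attacker writes can end up steering control flow:

  #include <stdio.h>
  #include <string.h>

  /* Contrived illustration: bytes the attacker writes land right next
     to data the program later uses for control flow. */

  static void normal_handler(void) { puts("normal handler ran"); }

  struct session {
      char   name[16];
      void (*on_close)(void);  /* control-flow data just past the buffer */
  };

  int main(void) {
      struct session s;
      s.on_close = normal_handler;

      /* BUG: strcpy has no idea name[] holds 16 bytes. Input longer
         than that overwrites s.on_close with attacker-chosen bytes;
         in a real exploit those bytes are a carefully picked address. */
      strcpy(s.name, "short");   /* benign input, so nothing breaks here */

      s.on_close();              /* with hostile input, this call would
                                    jump wherever the attacker pointed */
      return 0;
  }

Real exploits additionally have to defeat ASLR, code signing, and the sandbox, which is why people speculate about chains of several bugs - but the seed is always the same: attacker data landing where the program keeps control data.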


It's a corollary of Clarke's third law. When technology is described by someone too stupid to understand it (like nearly all journalists), the explanation makes it sound like magic, because that's how the writer sees it.


I don't think there's anything fanciful in having a request with well-crafted data cause remote code execution. I might be missing something though.


I read this as: "class of netizens who use iPhones, targeted by scandal or bad actor" - poignantly titled 'Karma'.


While I find this absolutely disgusting and horrifying, at the same time I hope it becomes extremely widespread, and rapidly. This needs to happen to Americans. Only then will we, the collective we, wake up and actually do something, and maybe start to take privacy and security seriously.

I am rapidly becoming anti-tech, as I think I can clearly see where this is all going. That's hard for me to say, as my whole life has been tech-focused. I'm 47 and started coding when I was 10. My whole life centers around it, and always has.

Hitler, Stalin, and Mao would have absolutely loved to be alive today and have these kinds of tools. Maybe we need another 100M deaths to see what this kind of information and power leads to. We are recording everything we do digitally, all of it ready to be analyzed by whoever comes to power at some future point in time, when the rules might be different. Most of what is recorded about us we don't even know. It will also be easier to find all of your relatives, so they can be killed off too. Such regimes like to make examples and ensure no one steps out of line. They don't just kill you, they kill 1-2 generations of your family.

This data won't go away. Ever. They will know who likes what, who supports what, etc. Just a keyword search away from a list of names and addresses. We think we are so clever. We are building our future jail. For the first time in history, we have the ability to track every single minute detail of a person's life from birth till death, in extreme, high resolution that grows by the day. I don't just know you went from point A to point B. I know the exact route you took, how long it took to complete each segment, how long you stopped at each place along the way, what those places were, etc. That's just GPS data.

I saw the 60 Minutes piece on PlanetLabs' recent launch of 300 satellites. They're taking pictures of the globe in very high resolution, constantly - better than some of our spy sats. Oh, and anyone can access that data. It's free! They showed how analysts were able to go back in time to when the compound Osama bin Laden was killed in was built, and to create a very accurate model of the compound, which contributed to the raid that killed him. Obviously we think that's a good thing because it led to a mass murderer's death, but think about that technology: recording everything 24/7, globally, going back years in time to reconstruct something that happened in the past... https://www.cbsnews.com/news/private-company-launches-larges...


What about Dragonfly? Is it also assisted by the government? It's censored, but it sounds like a rat trap.


TL;DR: the US attitude ...

  *It’s fine to spy on human rights activists with all the powers of government as long as they’re not American* 
... really gets to the heart of how the US treats the rest of the world. The US is the biggest terror threat in the world today. Its pains are self-inflicted and its enemies created by its very own foreign policy.


> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

https://news.ycombinator.com/newsguidelines.html


With respect: as a non-US citizen, this whole article is deeply offensive to me. I/we are being targeted simply because we are second-class citizens on the web - good enough to have our data extracted by US corps, but our rights are trampled on. This is the essence of it, yet you're accusing me of flame-baiting? Please reflect for a minute on how just this is to anyone who has never (and will never) set foot in the US. Simply saying what I say here might make me persona non grata and get me harassed at your airports [1]. All of you should reflect on this the next time you go shopping in Europe or take your SO on a honeymoon to Paris, Berlin, or Milan.

[1] https://www.theregister.co.uk/2017/06/28/mozilla_dev_and_cur...


It's flamebait because it's not a new critique, and it's tangential to the article. So it's likely to attract people trying to rebut your critique, but neither your argument nor the rebuttal would add anything new.

If there were a thread discussing US spying methods and policies against non-US citizens, your comment would not be flamebait. See the difference?


Most comments are thoughts and opinions, and only a few provide new critique. The OP's comment here is new critique, because no one else has discussed the civil-liberty bias deployed by agents - though it's presented in an obnoxious, offensive way, which maybe breaks a different rule than "no new critique".


That's what happens when you are on someone else's web. So far, China is the only one deciding they are unhappy enough with the arrangement to build their own web, but the option exists for any country, or even a non-geographically-bound organization, if you are willing to tolerate shitty satellite connections.


Accusations of flamebait and similar on HN are a form of censorship, to prevent the community from having to confront difficult issues and “keep the peace” — not legitimate regulation of tone.


No, it's just to keep this site from becoming boring. Difficult issues are most welcome—just say something substantive that we can learn from.


that's the feeling I sometimes get. Let's all go back and discuss microservices and virtualization and keep pretending technology is neutral. Because if we were actually to confront these issues we might actually get somewhere.

I have said shit before that I regretted, and where I have rightly been put in my place. The above comment isn't one of them, though. Also, I'm not a robot, so maybe feeling something when I read this scoop is my own fault. idk


That’s called recognizing UAE sovereignty and not imposing our laws or values. It’s flawed, but probably the best we can do.

On the other hand, the NSA vets who went to work for the UAE knew who was paying their salaries, and knew they’d gone to the dark side the minute they crossed passport control.


You'd think we'd have some special regulations regarding what kinds of jobs ex-NSA staff can take after leaving.


Why do ex-NSA need special regulations? What even makes them special?


Their top security clearances, their access to the deepest darkest secrets of intelligence tools, networks, and systems. Would you trust an employee with your biggest secrets if you knew they were retiring soon and very likely to be hired at many times their current salary by some major competitor?


It’s already super illegal for them to leak any of that stuff.


I can imagine there's plenty of "between the lines" stuff you learn as a CIA agent that, while not specifically classified, wouldn't be something you want going to other nations.


I think maybe your opinions on this are largely influenced by movies and not materials the government actually releases.

Can you give me an example of the kind of unclassified information these people should be prevented from sharing?


Decision-making, for example. Someone internal to any organization will gain a sense of the way the organization will respond to certain situations. For something like national security, this means a contracted ex-NSAer could provide another country with a forecast of how the NSA will react internally to certain stimuli.


It falls under "Tactics, Techniques, and Procedures". It's the same reason anything published or shared is supposed to go through pre-publication review even well after you've left one of the three-letter agencies.


>> ... really gets to the heart of how the US treats the rest of the world.

Really gets to the heart of how the ruling 0.0001% of the US treats the rest of the world. Fixed that for you. Some of us just live here.


Yeah, I actually thought it would be crystal clear that when I speak of the crimes committed here, we're not talking about the people who pay their taxes and struggle to get access to health care... Most are themselves immigrants, so how on earth would anyone suggest it's the ordinary citizen here that's guilty [0]? I do acknowledge that my comment was "un-American" (not that I give a hoot).

[0] https://en.wikipedia.org/wiki/The_Untold_History_of_the_Unit...


I think the number of Americans who couldn't care less as long as it's not about spying on Americans is probably substantially higher than 0.0001%.


To be clear, it's not a moral problem if the UAE spies on Americans. It's perhaps a diplomatic problem. The problem here is Americans using American secrets to spy on Americans for a foreign country.


Please keep in mind that what you hear coming out of Washington does not necessarily reflect the thoughts and feelings of the people it governs. I'd guess that the vast majority of Americans would not support the statement you quoted.


Yeah, we usually take exception to that. With things like gerrymandering being common, the government doesn't really represent the people.


see my reply to module0000 from just now here. I thought that this would be a given, but clearly I was wrong.


I think we might just be extra-sensitive to being lumped together with the actions of our government (even if that's not what you were intending) because this is a forum with many international members.


Taking the article's sources at their word...

They said the tool's usefulness faded in late 2017 due to Apple patches, and that compromise required only sending a text message. Examining CVEs up until late 2017 may give more of an idea of how this tool worked. Judging from a cursory review, there are many remote code execution exploits, so it would be hard to narrow down. But this is what I chose to look at when considering CVEs between Jun 2017 and Dec 2017 that could affect iMessage. Many of these are classified as Denial of Service bugs, but often those can be extended to code execution with extensive research (a toy sketch of that escalation pattern follows the list below).

IOKit https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

IOMobileFrameBuffer https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

CFString https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

CoreText https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

CoreText https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

Fonts? https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

ImageIO https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

Messages https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

SQLite https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

SQLite https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

SQLite https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

SQLite https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

SQLite https://www.cvedetails.com/cve-details.php?t=1&cve_id=CVE-20...

Kernel: too many to count

These were compiled by reviewing the Apple security mailing list: https://lists.apple.com/archives/security-announce/2017
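Since several of the CVEs above are "just" Denial of Service: the classic way a DoS-classified parser bug escalates to code execution is an integer overflow in an allocation size. A toy C sketch (the format, field names, and numbers are invented; this illustrates the bug class, not any specific CVE above):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Toy version of the allocation-size integer overflow that turns a
     "Denial of Service" image/font parser bug into heap corruption. */
  static uint8_t *decode_pixels(uint32_t width, uint32_t height) {
      /* BUG: the size is computed in 32 bits. width = height = 0x10000
         makes width * height * 4 wrap to 0, malloc returns a tiny
         buffer, and the decode loop writes far past it: a crash at
         best (the "DoS" classification), controlled heap corruption
         and code execution at worst. */
      uint32_t size = width * height * 4u;
      uint8_t *pixels = malloc(size);
      if (pixels == NULL) return NULL;
      printf("allocated %u bytes for a %ux%u image\n",
             (unsigned)size, (unsigned)width, (unsigned)height);
      /* ... a real decoder would now write width*height*4 bytes ... */
      return pixels;
  }

  int main(void) {
      /* The fix: reject before multiplying, e.g.
         if (width == 0 || height > (UINT32_MAX / 4u) / width) return NULL; */
      free(decode_pixels(16, 16));  /* benign: 1024 bytes */
      return 0;
  }

A crash report shows an out-of-bounds write and gets triaged as DoS; whether it's actually exploitable depends on what the undersized allocation sits next to, which is exactly the "extensive research" part.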



