An espionage tool developed by a major world power proliferates to totalitarian regimes, aided and operated by ex-NSA agents on the payroll, to compromise human rights activists and the political opposition.
If ever there was proof that our devices need to be striving — constantly striving — for absolute security, and can never allow any “trusted party” an authentication or encryption bypass, this article is it.
An exploit like this is incalculably valuable to intelligence agencies. That the exploit would proliferate is undeniable. And the ends to which it has been (and would be) put are atrocious.
Probably the only difference between how intelligence agencies exploited this and how they would exploit a golden key is that, with the golden key, they would be sweeping up every photo on every device, not just some photos on some devices.
“It was like, ‘We have this great new exploit that we just bought. Get us a huge list of targets that have iPhones now,’” she said. “It was like Christmas.”
(And in my mind at least, those laws are without doubt part of a coordinated Five Eyes security/law-enforcement campaign to push those kinds of laws through everywhere: "Look, it works in Australia!" the Canadians/UK/NZ/US will say...)
If Apple are still selling hardware in Australia in 12 months' time, the suspicion will _have_ to be that they have enabled something similar enough to this to be considered untrustworthy... (And not just Apple: any manufacturer or software company doing business in Australia...)
You were by simply stating it. It's the same type of saying-but-not-saying lines that the media uses like "if this allegation proves to be true" or similar such phrases. The fact that you state it suggests to the reader that people think it. If it was an honest mistake in wording on your part that's one thing, but you should probably avoid using such phrasing.
Look at all the people responding to the post you responded to saying that Apple did it. There's no reason to give a forum for such ideas.
In this case, it doesn't matter whether or not Apple even had a hand in it. The issue is that security exploits are security exploits, regardless if intentionally designed or not.
Your implication that doubting Apple amounts to incredulity is itself ludicrous! Why is Apple so damn special?
At this point, there isn't any evidence that Apple is involved, and yes, they go on a PR blitz to focus on privacy and security, and get hit pieces published on how "privacy is a feature on the iPhone". But the history of backdoors suggests that no one voluntarily reveals them (Intel, Cisco, Juniper,...). In many cases, how the backdoors made their way in is a closely guarded secret, specifically to enable plausible deniability.
It's best not to put a company on a pedestal, like some religious cult.
If he was wrong this claim would be so trivial to refute that sources seem entirely pointless.
lawnchair_larry made a claim about "nobody credible" believing something. As a lay person in the field, I don't know who these credible people are. Therefore I asked whether lawnchair_larry could tell us who one or more of these credible people are, and where we can read more about what they believe to be true about the situation.
As the other commenter stated, if someone thought that someone credible made that claim they could simply provide that source, which both (1) involves asymptotically less effort for everyone involved and (2) under reasonable assumptions is at least as effective.
“Sir, I’m going to need for you to list for me all the crimes you didn’t commit last Saturday.”
Implying that was asked seems disingenuous to me, but I assume that wasn't your intent.
What I see is a request for clarification, specifically asking for the sources from which a conclusion was drawn.
It's not a police raid. Relax.
This is most definitely a rather silly request.
Can you prove that magic doesn’t exist?
That's not what I was asking for. I was asking him to refer me to credible people saying that they don't believe Apple was involved. Here is what he said:
> Nobody credible believes for a second that Apple was involved, for whatever it’s worth.
I thought perhaps he had read about credible people making this claim, or had some other way of knowing what these people believe. I was simply asking for more information, so as to further educate myself.
I believe that in the absence of any sources, this claim may suffer from the Argument to the People and Argument from Authority logical fallacies.
Full disclosure: I have a MacBook and an iPhone. I enjoy Apple products and respect their business model. And in the absence of any evidence to the contrary, I'm inclined to believe that Apple wasn't involved. I just want to learn more about the subject from people who know more than me.
Well, it isn’t.
Also, if you're talking about the mid east and not, say, China, the future doesn't really look bright.
Instead of pontificating, the tech industry should innovate.
There’s no reason that hashchains can’t be used to timelock the key, with the enclave exporting it in response to a signed request. Then we can at least force the compromises through the legal system and require effort to reverse the hashchain. That kind of court-authorized targeted access removes the incentive (and justification) for other actors to more deeply compromise the system. In turn, this lets us provide more security, in practice.
What’s not going to sell, and what the tech industry needs to get over, is “lulz, it’ll be impossible to intercept military or terrorist information because I need absolute privacy for my saucy emails”. I think it’s been empirically demonstrated that won’t happen.
Be part of the solution.
> What’s not going to sell, and what the tech industry needs to get over is “lulz, it’ll be impossible to intercept military or terrorist information because I need absolute privacy for my saucy emails”
Seems to be an ironic mischaracterisation of the parent’s point, which was precisely that one country’s terrorist is another’s gay rights activist or high-ranking foreign official.
From the article:
In 2017, for instance, the operatives used Karma to hack an iPhone used by Qatar’s Emir Sheikh Tamim bin Hamad al-Thani, as well as the devices of Turkey’s former Deputy Prime Minister Mehmet Şimşek, and Oman’s head of foreign affairs, Yusuf bin Alawi bin Abdullah. It isn’t clear what material was taken from their devices.
“Saucy e-mails” is a bit tone deaf :(
My point was that issues like this should be mediated by courts and existing legal systems, not the unilateral decision of technologists.
And that society is going to insist that be the case, hence the most effective way to protect those persecuted minorities is via cooperation and steering how that process happens — not fighting a losing battle.
Finally, that the way to increase the effective security is stop fighting ideological battles on the issue, and find a politically workable compromise which still prevents remote exploitation — the main danger of encryption bypasses.
This is a strict improvement over the current situation, where the answer is “anyone who has money”.
As the other commenter points out, this only adds an attack vector and does not do anything to eliminate any.
The same incentives exist on all sides to find exploits regardless of an additional “legal” channel to crack the encryption. Particularly because your political enemies use the same devices and you can’t get a court order to tap their phones (usually).
 - https://www.google.com/amp/s/amp.theguardian.com/world/2013/...
Security researchers, strangely enough, seem to care who they sell to. If the NSA stopped buying and only the UAE was interested, I expect we’d see some firms move to other business models or targets for “research”.
There is no shortage of oppressive regimes with incredible amounts of money at their disposal. People who are in it for the money don't honestly give a shit who pays them. You cannot eliminate the market for this stuff. The only option is to create better software.
It should be, and should the judge disagree, show the honourable justice who is the boss in the internets.
"That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved."
I wonder if that maxim is still generally approved. It seems like some authoritarians would prefer that 100 innocents suffer than that one guilty person escape.
I suppose it depends how you define "innocent" and "suffer". Under modern law, everyone is guilty of something. And while we might not require suffering in prison, a little suffering of expensive legal fees, invasions of privacy of your digital data/at the border/in the airport, or searches and seizures of property by police in your car are commonplace.
Mind you, I'm not saying it's right. I'm just saying that this is how the authorities are thinking.
This is a really dangerous (and unfortunately oft-repeated) mindset that basically boils down to advocating the position of "current laws are so complex that if we give law enforcement better tools [to combat law breaking], we'd implicate ourselves also".
This ignores two huge problems:
1. No, not everyone is guilty of something. I doubt a majority are guilty of anything, and I'd be surprised to hear even like 10-15% are guilty of anything.
2. The people that _are_ guilty of some minor infraction (e.g. jaywalking, speeding, or pick your own obscure state law) are still guilty of much less than the major crimes these kinds of systems are targeted at (terrorism, violent crime, robbery, etc). Comparing the two is like saying "we shouldn't put cameras at bank entrances because what if it catches someone jaywalking outside?"
On a more hard-line note, the people that are guilty of even the smallest things are still guilty of those things. People speed because they think they won't get caught, but it's still against the law. People jaywalk because they think they'll be safe crossing the street, but it's still illegal. People litter because they think it's not a big deal, but it's still illegal. Who gets to decide what people should get away with? Why would anyone be able to get away with breaking any law? (To wit, the obvious response is that some laws are obscure and outdated and that people aren't even aware of them, but feel free to refer back to #2 above until we've fixed those laws. Perhaps actually enforcing them is the nudge we need to push lawmakers into saying, "You know those crazy laws that are still in the books that we've ignored for a hundred years? Yeah, maybe we should get rid of those.")
>That it is better 100 guilty Persons should escape than that one innocent Person should suffer
This maxim encourages a false dichotomy that is embraced by the "we're all guilty of something" mindset, and forgets that "suffering" is a spectrum.
The people who are guilty of gotcha laws _aren't_ innocent, and _should_ suffer proportionately to the law they broke (see: fines, warnings, etc that you'd expect for silly laws that people think shouldn't be enforced, yet are still laws). I'd much rather see that "innocent" person get their $50 fine so 100 people guilty of violent crime don't escape.
"It is better 100 guilty [murderers] should escape than that one innocent [jaywalker] should suffer." Sounds silly, doesn't it?
The problem with this line of thinking, "they broke the law, they should be punished whatever it was" is that laws and morality, while sometimes intended to be aligned, are not.
Take, for example, "disobeying a police officer". On the surface of it, no one would argue that that is a problem, thinking, "of course I'll follow a police officer's instructions". However, the system has evolved to the point where someone who has initially committed no crime, stopped by the police under suspicion, can end-up dead or incarcerated due to a sadly more-and-more common sequence of escalations.
Asserting that the law is somehow perfect to the extent that all illegal behavior should be punished is also poorly framed, because not everything is illegal everywhere, and in fact what is legal and illegal is not universally known or clear. For example, in some municipalities the laws themselves are under private copyright: a fee must be paid to access them, and they are not available except directly in person, as in, not remotely (this was news a while back, not sure if there's a HN story about it).
So, I feel this viewpoint is missing some fundamental realities that may drastically change the underlying assumptions.
It is not fundamentally true nor universally accepted. Where does the scale tip? All criminal justice systems attempt to minimise the risk, but on some level it is simply the cost of doing business. For all that it is an admirable sentiment, it is limited.
So yes, the police already have the power to search you for weapons if they have a warrant, and this is bringing the ability to search phones into line with that.
Rather, I pointed out that they have a real mission, and they’re going to spend effort accomplishing it. But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year. So, if we create a mechanism by which they can do that without owning every device, we can align our goal of protecting most devices with theirs of owning a few.
This in turn increases security for nearly everyone, because powerful agencies no longer have the same motivation to cause harm — and might be persuaded to help. After all, it’s in their interest to prevent large remote compromises — just not a higher priority than maintaining their own access.
Further, the best way to actually restrain them is through a change in government policy, which will only happen when the government believes there’s an alternative solution.
Perhaps you could try responding to the point?
Things like Room 641A show that the government doesn't even need to engage the courts to collect data on millions of people and further, that they are not limiting their collection efforts to a few hundreds or thousands of devices a year.
This is patently untrue. Snowden and much more have made it absolutely, unambiguously clear that national spy agencies do wish to gather and collect every bit of information possible about their own law-abiding citizens as well as those who are not.
I followed the Snowden leaks quite closely, but don’t remember anything close to unambiguously showing that.
For a big multinational like Apple, that’s probably a fair number. But it’s also harder for them to hide that they’re doing it, and it lets us bring political pressure on them for their misdeeds. In the end, it’ll be major powers who can — US, Europe, China, etc.
My point is that it’s never going to be the case that technologists get to unilaterally decide that for all of society.
My proposal is just to bring phones into line with existing warrants: https://news.ycombinator.com/item?id=19036408
But by doing so, technologists have the political cover to push back on spy agency excesses and abuses.
The combination of requiring physical possession and appreciable hashing time per crack is a two-layered response to mass-surveillance. That’s the whole basis of the compromise I’m proposing: calling their bluff, and enabling warrant cracks as political cover to shut down mass surveillance and cracking as unnecessary.
The "appreciable time" is going to be subject to constant downward pressure, both politically and technologically.
> political cover to shut down mass surveillance and compromise as unnecessary
Mass surveillance is not going to go away without huge cultural reform of the security services. They don't take concessions.
More details here: https://news.ycombinator.com/item?id=19040252
I agree that it would be a consistent political battle — but it’s already that, and it’s clear people with power are getting fed up with technologists attempting to impose their ideology without compromising with other social needs.
That’s what prompts laws about wiretapping or mandated backdoors.
I haven’t heard the same arguments about safes that I do about encryption — and the reason is that there’s an understood bypass if they gain access and have time and money.
By compromising and allowing targeted cracking, we split the faction pushing for backdoored phones, solve most of the issues, and give ourselves a viable path to accomplish something rather than being forced into backing down or completely compromising systems. Further, by being willing to compromise, we gain a voice on shaping how that discussion looks — rather than largely being excluded.
FakeComments was mostly talking about targeted surveillance, but I agree with him/her in spirit, since I believe that mass surveillance is not going to go away. Ever. So you can either yell futilely into the wind as it happens over your objections, up to, including, and perhaps going beyond a swarm of camera-bearing networked nanodrones coating the planet, or you can try to nudge it towards happening on slightly preferable terms.
However, I do not believe they can effectively enforce any encryption bans. Thus, people who need encryption will still have access to it. And as far as I am concerned, my duty (as a software engineer) is to ensure that it remains the case, even if using it becomes illegal.
Say that in the year 2160 you have perfect, unbreakable encryption on your pocket computer. How will you use it?
With a touchscreen or keyboard, allowing microscopic cameras to see you input it or read the thermal signatures off your input device afterwards? With your face or voice that are continuously being recorded from hundreds if not thousands of angles? Plugging in the future equivalent of a yubikey that someone can just steal from you? You're lucky if fMRIs don't become good enough to just pluck the information out of your brain as you think it. Of course, the master key is most important but all of these concerns apply to the data being protected as well.
The real thing that can never be effectively enforced is privacy. People who need encryption can have access to it or not. It matters not one whit. Our duty (as people) is to push society in a direction where this change feels less catastrophic, not to fight a Caligulan war against the sea.
You're basically describing a totalitarian Panopticon. A society like that should be fought by all means available, including physical force, so the question of legality of encryption is somewhat moot at that point.
>"If you want a picture of the future, imagine a boot stamping on a human face — forever." - George Orwell: 1984, 1949.
Welp, guess that's it for freedom. A person from the past wrote something. No more for us to do here.
This is not at all how encryption or security works
You create a hash chain, then use the final result as an encryption key of your secret (in this case, the key for the data), then store only the start of the chain and encrypted secret.
The only way to retrieve the secret is to recompute all the hashes, from the start, to recreate the key and decrypt the data.
So it’s secure unless you believe there’s a weakness in the underlying encryption or hash function.
Further, you can extend this by encrypting the start of each chain with the output of another — giving a significant advantage to the chain creator: they can compute 1000 chains in parallel, but unlocking requires working through them sequentially. At that ratio, if you want decryption to take a month of steady hashing, you need only do a little under 1 hour of hashing yourself. 1 hour of 1 GPU is about a dollar of expense, and a GPU has more than 1000 parallel tracks.
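A minimal sketch of that scheme in Python. The chain lengths, chain count, and the XOR masking below are illustrative choices, not a concrete proposal: the creator computes every chain in parallel, but masks each chain's start with the previous chain's output, so an unlocker is forced to recompute them one after another.

```python
import hashlib


def chain(seed: bytes, n: int) -> bytes:
    """Hash `seed` n times in sequence; there is no known shortcut,
    so recomputing the chain costs ~n sequential hash operations."""
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


# Creator: run k chains. The start of each later chain is masked with
# the previous chain's output, so the creator (who knows all starts)
# can compute the chains in parallel, while an unlocker must finish
# chain i before it can even begin chain i+1.
k, n = 4, 10_000
starts = [hashlib.sha256(bytes([i])).digest() for i in range(k)]
outputs = [chain(s, n) for s in starts]  # parallelizable by the creator
masked = [starts[0]] + [xor(starts[i], outputs[i - 1]) for i in range(1, k)]
timelock_key = outputs[-1]  # use this to wrap the actual secret

# Unlocker: recover the key strictly sequentially from `masked` alone.
prev = None
for i, m in enumerate(masked):
    s = m if i == 0 else xor(m, prev)  # unmask needs the previous output
    prev = chain(s, n)
assert prev == timelock_key
```

The security argument is only that of the underlying hash: unless SHA-256 admits a shortcut for iterated hashing, recovery time scales linearly with the total chain length.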
My suggestion would be that Apple create a chain for each phone and load the phone with a phone-specific wrapping key — which it uses to wrap the actual encryption key before returning it. The only way to decrypt that key is to get the necessary information from Apple, and a signed request is required before the SE will emit the encrypted key at all.
I appreciate you trying to correct me, but I never was saying that this was an instance of “public key encryption”, whichever version you mean.
This is a scheme by which you can intentionally create a key that can be re-generated in a fixed amount of time, and use it as part of normal symmetric encryption to protect a secret. One usage of that is creating intentionally crackable schemes, such as protecting other signing keys in a way you can later crack if you need to. This allows a device, such as a phone, to emit a masked secret that we have cryptographic guarantees it still takes time to recover.
Hashchains for time locking are a studied mechanism, and though the idea predates cryptocurrencies, it’s deployed in several kinds of applications there. A second usage is in storing paper copies of master signing keys in a safe, since even in the event of a robbery the key cannot be recovered before a certain period of time — giving you time to rekey your system. (Generally, people use multipart keys instead, because they’re less cumbersome to recover; however, if you only have one secure location, multipart keys don’t help. Hash chains still do.)
So it’s literally how (part of) modern cryptography works.
This helps communities to feel the reassurance of a trusted police presence, creates local jobs, and provides a decentralised alternative to having a single network controlled by a few hidden, unaccountable individuals. Putting a human conscience behind every single camera seems like a good way to prevent tyranny and encourage whistle-blowers.
I don't think anyone on earth has the right to collect/record/see the contents of my communications other than me and the other participants, until there's reasonable suspicion of a crime.
Covert dragnet snooping is an evil means to any end, and it damages the moral standing of the society that does it.
It's very much the other way. Strong encryption algorithms have been available to the public for a long time now. You can ban using them, but the only way to effectively enforce that ban would be for the government to require that all devices capable of running code from external sources run only code that's signed by that government.
Without that, you can ban all you want, but terrorists and others who need that stuff will have it anyway. So the only effect would indeed be no privacy for saucy emails. Of course, intelligence agencies would love that, since it would allow them to have a society-wide dragnet.
What we’ve seen is governments subverting encryption and systems repeatedly, in ways they wouldn’t if they had other methods.
I’m not trying to accomplish some absolute ideological position, I’m trying to shift the state of affairs to realign incentives for several players. If some people write their own encryption, or the technologists use GPG everywhere, whatever.
> allow them to have a society-wide dragnet
I don’t think you even read my proposal: the mechanism I proposed makes that impossible, which is in contrast to the current state of affairs, where they subvert the security of the entire system instead of targeted people. Allowing for targeted cracking at a certain level of expense and requiring physical possession of the device in no way enables mass dragnets, and in fact, removes their legal cover by providing alternative means.
I’m not saying people can’t invent their own security — just that factory-made safes need to not be “unbreakable”, because otherwise it just incentivizes bad behavior when someone discovers a flaw, and/or subverting the integrity of the factory.
Any attempt at right of privacy must be mercilessly crushed with maximum force
Further, I’m actually trying to increase privacy, by negotiating a compromise that’s workable for society as a way to remove the excuses bad actors are using, and shift the legal framework around the topic. That’s not an absolute ideological position, by any definition.
By contrast, you do adopt such an absolutist position — which isn’t grounded in law, and fails to provide for other societal needs. Such stances lead to failure, because of their absolutism. Your stance is why Australia passed an internet wiretapping law, not mine — because you refused to acknowledge a societal need until they employed force.
If your approach worked, we wouldn’t have the state of things we do now.
I appreciate the motivation but it's very naive.
Sure, you give them access after 1 month. Next they'll say 1 month is too long, they need to be able to do it within hours to be able to catch criminals before they run to another city. Then it'll be minutes so they can stop crimes while they're happening. Then it'll be real-time on everyone so they can use machine learning to predict crimes minutes before they happen.
> Your stance is why Australia passed an internet wiretapping law, not mine — because you refused to acknowledge a societal need until they employed force.
> If your approach worked, we wouldn’t have the state of things we do now.
They would still have done, and more. You think police and intelligence agencies will one day just say "yeah, that's enough, we're good"? No, they'll want anything that makes their job easier and gives them more power, always.
Cryptography reduces message security to key security, nothing more.
Apple has also done a reasonable job of holding onto their signing keys, to date.
How do you know that?
If your goal is to stop the NSA et al stealing a key or owning a device, you’ll be sad.
But if your goal is to change the law and redefine the parameters of them owning devices, you might make progress. The political process will insist on a means to access these devices, and they’ll accomplish it by one means or another. By engaging with instead of fighting that, we gain the ability to have a say on what those means are.
I can't think of a great solution to this problem.
There's really only one "final solution" to the problem in the purely technical realm. That would be to make provable security (in the theorem-proving sense) a non-negotiable requirement to all digital logic (both hardware and software) running on networked devices. I don't know if there's even a workable definition that would rigorously describe the goal of such an effort.
... But I believe that if provable security was important enough to everyone (just like "winning the war" in the 1940s or "getting to the moon" in the 1960's), we might possibly achieve it -- at least below the OS syscall level in a few major OSs and in several important userland libraries.
However, that ignores the human element of security, which can't ever be completely solved via mere human effort. People will always be vulnerable to social engineering, for example.
High-security MCUs go to great lengths to defeat side-channel and physical attacks on the package (some really neat stuff too, like failing if exposed to die shaving).
There are secure bus initiatives but they don't extend to the BOM (bill of materials) for all the components.
On top of that, GUI techniques for obscuring physical input (keyboards, UI touches) are needed.
Given Apple's posturing and patch release cadence, I think/feel they are on the side of privacy. Android too. We're on the right track, I wonder if eventually tech will win the arms race for exploits like this? (The rubber hose exploit will always work...)
It does. I said all digital logic, which includes all the ICs, FPGAs, and silicon.
If something can be created to be provably secure, then it can be an argument for government legislating a back door.
"You said it's provably secure. Now you can give us provably secure access too without hurting your customer's privacy or security, because they're protected by the 4th amendment."
I don't think this can be solved by technology, I think this comes down to politics of freedom, if you get right down to it. And it looks like you're going to have to have that fight anyway.
So the best provable security could do would be to eliminate security holes like buffer overflow/etc. Trust issues (and even side-channel attacks) would still be present as always.
There are some low-level libraries that have already been partially converted to theorem-proved functions for the sake of security.
The mentioned government agencies have the "NOBUS" belief: that the concept of "NObody But US" (having access to the "keys to the secrets") works.
This article is just one of many good examples that it doesn't.
What could work are systems which are secure without any exceptions. Which is hard to achieve when powerful enough influences (most often directly or indirectly tax-funded, even if not explicitly government organizations) do all they can to make that not happen. Ensuring that nobody has access to a really secure system is then easier to achieve than it appears.
"In September 2013, The New York Times reported that internal NSA memos leaked by Edward Snowden indicated that the NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard, and concluded that the Dual_EC_DRBG standard did indeed contain a backdoor for the NSA. As response, NIST stated that "NIST would not deliberately weaken a cryptographic standard." According to the New York Times story, the NSA spends $250 million per year to insert backdoors in software and hardware as part of the Bullrun program."
Hmm. Wait. Was that sarcasm?
I don't expect much from a person that won a Nobel Peace Prize and then proceeded to drop 26,000 bombs in 2016, a bomb every 20 minutes.
provided, of course, that they agree.
But then again many physicists were also convinced Nazi officers.
If the Germans had won the war, we'd probably celebrate those officers :/ All the torture and killing would be spun as "necessary evil" (if it even came to light), and further investigations would be blocked by the government. How we perceive the past is...complicated.
For some time, it was possible to crash some iPhones by texting them a Taiwanese flag emoji (which was censored by mainland China). https://www.cultofmac.com/561635/apples-taiwanese-flag-ban-l...
I don't know offhand if this was a buffer overflow or something else, but if you can crash the OS with a text, you could likely exploit it instead.
It was an issue where, when the device's locale was set incorrectly, a lookup would return NULL, leading to a crash in CFStringCompare.
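A hedged sketch of that general failure mode in Python. The function names and the censorship table below are illustrative, not Apple's actual code: a locale-keyed lookup returns nothing for an unexpected configuration, and a compare routine that assumes a valid result crashes (Python's `TypeError` standing in for the NULL dereference).

```python
# Hypothetical table: emoji suppressed for certain locales.
CENSORED_EMOJI = {"zh_CN": ["\U0001F1F9\U0001F1FC"]}  # Taiwan flag


def censored_for(locale):
    # Returns None for locales the table doesn't know about -- the
    # analogue of the misconfigured-locale case in the real bug.
    return CENSORED_EMOJI.get(locale)


def is_censored_unsafe(locale, ch):
    # Blows up (TypeError) when censored_for returns None,
    # the analogue of passing NULL into a compare routine.
    return ch in censored_for(locale)


def is_censored_safe(locale, ch):
    # Defensive version: treat an unknown locale as "censor nothing".
    entries = censored_for(locale) or []
    return ch in entries


crashed = False
try:
    is_censored_unsafe("xx_XX", "\U0001F1F9\U0001F1FC")
except TypeError:
    crashed = True
assert crashed
assert is_censored_safe("xx_XX", "\U0001F1F9\U0001F1FC") is False
```

The point is how small the gap is between "benign config oddity" and "remotely triggerable crash": any input-dependent lookup whose absence case is unhandled becomes a denial of service, and sometimes worse.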
Consider the thousands of people around the world that are involved in making phones: design, hardware, software, manufacturing, signal providers, platform providers, app writers, to name a few. Any of them could be malicious actors or accidentally introduce exploitable bugs. The idea that such a complex stack can shield you from very smart and resourceful people that are actively trying to peek through is not reasonable. Everyone, especially people that are "annoying" to powerful entities (corporate or government), should assume that everything they do with their mobile phone is accessible to the people they hope it isn't.
We don't know the iMessage bug, but a big one was patched in iOS 9.3.3, released July 18, 2016. Meanwhile, the article says this exploit got a lot of people in 2016/2017.
So, presumably simply updating software would have protected a lot of the victims in this case.
The higher up in adversary skill level you go, the less this works. But up to a reasonably high level simply having up to date software thwarts most adversaries, no? And conversely, if you have very out of date software, even incompetent adversaries can break in.
No non-competes? So, when Snowden tells the public about the mere existence of NSA hacks, it is a crime; yet when an intelligence operative brings his detailed technical knowledge sourced from the NSA and the likes to a foreign government, that is kosher.
Though, I wouldn't be super surprised if they banned the people they forced to implement exploits from leaving the country =X
It's still illegal to use US classified information for a program like this and it's still illegal to target American citizens or networks.
It's all as clear as mud, and in this instance the government was more than aware of their former employees working there; many returned and went back to their prior careers... think about that one for a second.
But seriously, I wonder why other governments and their citizens are not demanding drastic actions, like trade suspensions, expulsion of diplomats or other sanctions, when other countries get caught in such ways of spying or otherwise trampling human rights. This one would be a perfect example to take a stand on - the UAE is far smaller in oil trading and political importance than e.g. Saudi Arabia.
Or why there seems to be next to zero public funding for providing open source, auditable hardware and software that could prevent such spying in the first place? The European Union could easily fund the development of a truly FOSS Android-based phone, down to the processors. Instead everyone seems to rely on Chinese or American products, which are both subject to non-European influence (in the US via NSLs, in China due to the massive influence of the Party on any major company).
Has anyone here heard about or is familiar with this malware?
Does anyone have any info on since when this has actually been like this? I'd like to look up how their CS education works and that kind of stuff.
My religious views do not stem from a lack of intelligence or education.
As mentioned, for whatever reason, I'm having a hard time picturing how people who deem apostasy punishable by death can also manage, research, and exploit modern equipment, and am looking for some indication as to when exactly did they start getting good at it.
This also begs for international conventions. New international conventions would provide a psychological back-stop against the infosec industry's unchecked nationalism. When an agent asks themselves "is what I am doing okay" international convention and law would give them an alternative to compare with other than the militarist default of "yes".
At what point does this become considered treason?
I suspect that one day our internal thoughts and feelings will be under constant mass surveillance, Minority Report style, but it won't look like sci-fi when it happens.
This is the problem with things like this or the Bloomberg server story: the capabilities are plausible but there's not enough information to know whether or not they're actually true, so you're in the position of having to guess about whether someone actually could implement that attack and whether they'd choose to spend that much money.
The exploit must be something like a buffer overflow in iMessage. We know bugs like that have been fixed before. Remember the text of death which could crash any iPhone a couple of years ago?
I am rapidly becoming anti tech, as I think I can clearly see where this is all going. That's hard for me to say, as my whole life has been tech focused. I'm 47 and started coding when I was 10. My whole life centers around it, and always has.
Hitler, Stalin and Mao would have absolutely loved to be alive today and have these types of tools. Maybe we need another 100M deaths to see what this kind of information and power leads to. We are recording everything we do digitally, all to be easily analyzed by whoever comes to power at some future point in time, when the rules might be different. Most of what is recorded about us we don't even know. It will also be easier to find all of the relatives, so they can be killed off too. They like to make examples and ensure no one steps out of line. They don't just kill you, they kill 1-2 generations of your family.
This data won't go away. Ever. They will know who likes what, who supports what, etc. Just a keyword search away from getting a list of names and addresses. We think we are so clever. We are building our future jail. For the first time in history, we have the ability to track every single minute detail about a person's life from birth to death, in extreme, high resolution which grows by the day. I don't just know you went from point A to point B. I know the exact route you took, how long it took to complete each segment, how long you stopped at each place along the way, what those places were, etc. That's just GPS data.
I saw the 60 Minutes piece on Planet Labs' recent launch of 300 satellites. They're taking pics of the globe in very high resolution, constantly. Better than some of our spy sats. Oh, and anyone can access that data. It's free! They showed how they were able to go back in time to when the compound that Osama Bin Laden was killed in was built. By going back in time to when construction started, they were able to create a very accurate model of the compound, which aided the raid that killed him. Obviously we think that's a good thing because it led to a mass murderer's death, but think about that technology... recording everything 24/7, globally, going back years in time to reconstruct something that happened in the past... https://www.cbsnews.com/news/private-company-launches-larges...
*It’s fine to spy on human rights activists with all the powers of government as long as they’re not American*
If there was a thread about someone discussing us spying methods and policies against non-us citizens, your comment would not be flamebait. See the difference?
I have said shit before that I regretted and where I have rightly been put in my place. The above comment isn't one of them though. Also I'm not a robot, so maybe feeling something when I read this scoop is my own fault. idk
On the other hand, the NSA vets who went to work for the UAE knew who was paying their salaries, and knew they’d gone to the dark side the minute they crossed passport control.
Can you give me an example of the kind of unclassified information these people should be prevented from sharing?
Really gets to the heart of how the ruling 0.0001% of the US treats the rest of the world. Fixed that for you. Some of us just live here.
They said the tool's use faded in late 2017 due to Apple patches and that compromise required only sending a text message. Examining CVEs up until late 2017 may give more of an idea of how this tool worked. Judging from a cursory review, there are many remote code exploits, so it would be hard to narrow down. But this is what I chose to look at when considering CVEs between Jun 2017 and Dec 2017 that could affect iMessage. Many of these are classified as denial-of-service bugs, but often those can be extended to code execution with extensive research.
Kernel: too many to count
These were compiled by reviewing the Apple security mailing list https://lists.apple.com/archives/security-announce/2017