Darknet Diaries recently had an episode with John Scott-Railton from Citizen Lab about how he was allegedly spied on by the makers of Pegasus, and how he then lured them into a trap.
Great rec! This is one of my favorite technical podcasts. The host does a great job getting into the technical details of the subjects while still appealing to non-technical listeners. It's really impressive.
Very different show, but I enjoy the Accidental Tech Podcast (aka "ATP").
It's a weekly news show that focuses on tech (mainly Apple). They do a good job with technical details and talking through tech product decisions (why did Apple/Google/FB do X? What are its merits?). People have sort of polarized opinions about each of the 3 hosts, but IMO they each have their moments.
I have been listening to ATP for years, and I love it. I didn’t know that there were discussions about them.
To me, one of the strengths of ATP is that it’s one of those cases where the whole is greater than the sum of its parts. The hosts complement each other very well.
I also like that, although it’s an Apple-centered podcast, they are willing to take a critical stance towards Apple. I once listened to one of the other major Apple-related podcasts (not Gruber) and the level of fanboyism was cringeworthy.
That's a good point! They're honestly some of Apple's biggest critics. I feel like I've learned a lot about product design based on their ongoing feedback.
Maybe not the same but Co Recursive is excellent https://corecursive.com/ The host is very thoughtful and the conversations that come from that are lovely.
Their episode Memento Mori[1] with Kate Gregory cemented for me that this podcast isn't like others, it is incredibly thoughtful and insightful and has a lot to offer everyone whose interests involve software and engineering. For all the technical topics it covers, it is a very human podcast.
If you join the Patreon, you can get access to bonus shows while also supporting the creator and allowing him to make more episodes. Highly recommend joining the Patreon so we can get more amazing content in the future.
I really like the Jane Street podcast "Signals and Threads".
It goes into how Jane Street (a prop trading firm) solves difficult problems in a hard problem space.
For staying up to date with infosec news I recommend "Risky business". Very different style of podcast, but a great way for me to stay up to date with the field.
You don't even need this. I searched "unblur" in Google Play Store, downloaded the first result, tweaked the settings a touch, and I could make out the characters. The whole process took a couple minutes. If the data actually needs to be hidden, this picture should be taken down.
Amusingly enough, people have even screwed this up. I recall some government agency trying to censor data with a black bar, but the problem was that the data was in an SVG-like document, so people could just delete the bars from the document and see the apparently-censored text.
Yep - Apple wound up adding a “Redact” tool to Preview.app for PDFs, likely in response to all the people who just drew shapes over selectable sensitive text.
I believe the US government did this at least once, by implementing the black bars in a PDF as a separate object, so the unredacted item was intact in the files they shared.
This is how I get FOIA responses. Agencies just remove the pieces they don't want me to see, sometimes with a code to tell me why it was excised. I use the redaction feature built into Acrobat.
The information is lost if you have a flat image without any layers; for other formats you should of course take additional steps. Granted, the redaction has to be completely opaque.
Known font, known range of possible characters (almost certainly ASCII), and probably several blurred characters in there that we already know (like the t in attachment). If the blur output is distinguishable per character, it's probably reversible.
Even without automation: just make a list of blurred alphanumerics, then match every blurred character in the picture against the character list. With some patience it's probably doable in a single day.
That’s called generative recognition, and it’s the best way to do it (probably can be formally proved under some noise model assumptions) if you have a good model of the data and enough computational resources.
You don't need ml. If you know the font and blurring algorithm (or a close approximation), you blur all letters of the font at that size and compare output.
Most of the time, blurring is done with a Gaussian blur, and this is, in theory, reversible. In practice, what is actually done is round_to_fraction_over_256(gaussian_blur(point, image, …)), so there is an error in the unblurring process. But this error is often not insurmountable, especially with extra information.
Actually, if you can identify the exact font and exact location of each letter (as is in this case) it doesn't matter what kind of blur it is.
If you assume the photo was made with one of the consumer applications, there are only so many popular blurring algorithms. You can brute-force it quite easily by testing each character and each type of blur until you get an exact match (and in many cases even an inexact one will suffice).
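A minimal sketch of the brute-force approach described above: render each candidate character, apply the same (approximate) blur, and pick the closest match. The glyphs here are tiny hand-made 5x5 bitmaps rather than a real font, and a simple box blur stands in for whatever algorithm the screenshot tool actually used — the point is only to show the matching loop.

```python
# Toy "generative recognition": blur every candidate glyph and compare
# against the unknown blurred glyph. All glyph data here is made up.

GLYPHS = {
    "a": [[0,1,1,1,0],
          [0,0,0,0,1],
          [0,1,1,1,1],
          [1,0,0,0,1],
          [0,1,1,1,1]],
    "t": [[0,1,0,0,0],
          [1,1,1,1,0],
          [0,1,0,0,0],
          [0,1,0,0,1],
          [0,0,1,1,0]],
    "x": [[1,0,0,0,1],
          [0,1,0,1,0],
          [0,0,1,0,0],
          [0,1,0,1,0],
          [1,0,0,0,1]],
}

def box_blur(img):
    """3x3 box blur, with the result quantized to 1/255 steps to mimic
    the rounding error a real image pipeline introduces."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = round(acc / n * 255) / 255  # quantization step
    return out

def identify(blurred_unknown):
    """Return the candidate whose blurred rendering is closest (L2 distance)."""
    def dist(a, b):
        return sum((pa - pb) ** 2
                   for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb))
    return min(GLYPHS, key=lambda c: dist(box_blur(GLYPHS[c]), blurred_unknown))

# "Intercept" a blurred 't' and recover it:
print(identify(box_blur(GLYPHS["t"])))  # -> t
```

With a real screenshot you would additionally have to guess the blur type and radius, but as noted above, there are only a handful of popular choices to try.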
Not at all; I can already make out the characters, and quickly recognized that all but the last 4 blurred characters are hexadecimal, while the last 4 appear to be [a-zA-Z].
It should be explained to the public how such exploits take place, with the necessary parts open-sourced. Otherwise there is no way for us to know it wasn't intentional in the first place.
I don't mean it's likely that Apple as a company decided to plant exploits. However, governments can easily do it with a single engineer in the right place.
According to Wikipedia[1], Pegasus is usually installed via a zero-click iMessage exploit. Open-sourcing Pegasus doesn't seem likely, as NSO Group sells it for big bucks. It seems unlikely that Apple has colluded with NSO, as Pegasus is actually a bit of a black eye for the company. I'm not sure what governments can do with an engineer in the right place - in general I'd say "not much, and certainly not as much as with the courts and guys with guns, the other things a government can do."
I understood the parent comment as requesting that iOS be made open source to allow a further understanding of how the exploit works. My response to that would be that making things open source makes this process easier, but is not, by any means, a requirement for this ability.
Interesting. Didn’t realize it was zero-click. I got a bunch of weird iMessages a few months ago which I didn’t open. How do I check if I’ve been compromised?
I feel like phones should just have a "scrub anything that isn't ASCII text" option for paranoid folks. No unicode, no emoji, no media. I mean, I guess they could still f*ck that up, and maybe it'd be admitting defeat, but still.
AIUI part of the problem is that iMessage is a poor legacy design and relies on generic macOS serialization primitives. It's not like HTML where you can just scrub all the tags out. This can't be changed without breaking compatibility due to end to end encryption (the server can't adapt between versions). So there is a big attack surface inherent to the design, and Apple are stuck with it.
They have an enormous level of control over their deployed software. It seems like they could push out the required changes and nearly all clients would support it in no time.
For the remaining clients it seems consistent with the feature to simply block incoming messages in the wrong format.
Now, whether they'd want to implement a feature that essentially advertises that they have a security problem is another thing. (the answer is quite obviously no, regardless of what's best for their users)
There is a known solution to this. You version the protocol. New endpoints talk to each other with the new version, old endpoints that only understand the old version use that. After a period of time there are few enough people who still have the old version and it gets disabled.
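A toy sketch of the versioning scheme described above (all names are hypothetical): each endpoint advertises the protocol versions it supports, a conversation uses the highest version both sides share, and versions below a cutoff are eventually refused outright.

```python
# Hypothetical version negotiation: pick the highest mutually supported
# protocol version; refuse peers stuck below the deprecation cutoff.

MIN_SUPPORTED = 2  # versions below this have been disabled

def negotiate(ours, theirs):
    """Return the highest mutually supported version, or None to refuse."""
    common = {v for v in set(ours) & set(theirs) if v >= MIN_SUPPORTED}
    return max(common) if common else None

new_client = {2, 3}   # understands the new, restricted message format
old_client = {1, 2}
ancient_client = {1}  # only speaks the deprecated format

print(negotiate(new_client, old_client))      # -> 2
print(negotiate(new_client, new_client))      # -> 3
print(negotiate(new_client, ancient_client))  # -> None (blocked)
```

The end-to-end encryption constraint mentioned above doesn't prevent this; it only means the endpoints themselves, rather than the server, have to do the negotiation.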
Terribly inconvenient and a bad idea-- yes. Necessarily excludes-- no. I don't know anything about cuneiform, but if a secure, ubiquitous messenger only accepted that as input I would be counted as one of its users.
Hell, GPG isn't fit for any human users. Yet Debian is still able to hand-crank an extant web-of-trust off of it.
I wouldn't say so. The problem is the cyber warfare market created by nation states. If it wasn't for those large spenders, we wouldn't be where we are right now.
IMO nation states had a very negative influence on the internet, bringing secrecy, warfare, balkanized markets, mandatory identification and other closed concepts to a place that worked on open principles.
If states invested more in security advancement and open research than in warfare, we might be in a better position.
This is definitely part of the problem. But the fundamental flaw is the departure from simplicity.
The solution is to have a processor that is so simple that it can't do more than what you expect, and to build the tools to make the unexpected stand out.
However, there is a bigger market for a processor with 3 extra layers of root access to ensure your boss can spy on you, and Disney & Co really want this to be the norm.
You’re making this out to be far easier than it really is, to the point of being an armchair commentator. People want phones that can call and text and shoot perfect video and play demanding games. All of this necessitates complexity. The average dumbphone is far less complex than an iPhone but it’s obvious which one people want. (And, I should also note that “simplicity” doesn’t necessarily mean “security”: people have found bugs in ridiculously small pieces of code. The rest of your comment of making “the unexpected stand out” is similar in intent to “just find the viruses bro, how hard can it be?”)
It depends what you mean by "security first". If you're a person of interest and you're carrying around a personal spy with actual data on it and a hardware connected microphone, camera, GPS, sensors etc, which sends God knows what over the internet then yes, it's not going to go well for you.
But if you use devices with hardware kill switches and the most secure OS possible (storing nothing on the device; perhaps it's just a gateway to another security-hardened machine), you stand a much better chance.
Secure computing is possible, but it takes a lot of time, effort and dedication.
If you're just using off the shelf hardware and software you're going to have a bad time.
One thing that seems to link these Pegasus stories is that none of the targeted individuals were practising even seemingly decent security ops; being hacked over WhatsApp or iMessage seems fairly trivial, and hopefully they will now reconsider their threat models.
I think as long as secure computing isn't convenient enough for many people out there, news like this will just grow in number.
There are likely many out there practising secure computing, and we don't hear about them because they don't get hacked.
But with the convenience of using smartphones and sending anything quickly over them using (insert your favorite messenger service) statistically many people will be using them, even for absurdly important/critical communication, and a small number of them will be hacked.
Services like WhatsApp/iMessage will just keep adding more features to stay competitive, and more people will use them, with the new features inevitably creating new attack surfaces.
That's what it has really evolved into. We used to live in a much simpler (and, in that sense, more secure) world where there were no smartphones, even GPRS didn't exist, and all important communication was done on physical media.
That became much more inconvenient as technology progressed to a point where 99.9% of society couldn't resist using a smartphone, and rightly so for many purposes, including many of us here.
But as OSs (and even SoCs) became more complex as more features are added (well, I can't think of Apple or Samsung execs on stage saying "hey we didn't add any features this year" so it has to go this way naturally) flaws are inevitable.
By now every piece of software and hardware that is in use, every abstraction layer in that computing Tower of Babel, has been thoroughly hacked. Anywhere from plaintext passwords on a server to insane exploits like Rowhammer, those security websites and podcasts have long weekly litanies of tragedy. Additionally there is all-knowing Google, Chinese phones phoning home, undocumented functions in Intel processors, ISPs sabotaging user encryption, small-time browser plugin writers getting offered high sums for their plugins to get a front-row seat to users' browsers, programmers pulling who-knows-what from npm and probably getting pwned by the time they write 'hello world', phishing, billions of smart devices constantly listening and often filming, and we probably only know 10% of what's going on until a Snowden 2.0 comes along.
Yes, all of it is 'fundamentally flawed', and it would take a herculean effort to start over with a clean slate, yes, to figuratively burn it all down and make simple provably correct and safe hardware and a small and minimal OS that has browsing and communications built in.
Yeah. Aren't there even known cases of journalists being tracked through hacks and killed? (Like https://www.cnn.com/2019/01/12/middleeast/khashoggi-phone-ma....) Flawed computer code putting people into dangerous situations and being involved in deaths should be treated like the Therac-25 incident: a case study and a call to action to change the industry so that practices that solve the issue become standard everywhere.
> So how do we protect our privacy from the advance of technology? It doesn't seem possible. Just going after NSO is useless.
Like we do with anything else:
These are crimes, but we are stuck in the mindset of the nascent Internet, when it was a growing experiment, a subculture in our society, harmless, and we wanted to nurture it and give it maximum freedom.
Those days are long gone. The Internet is completely integral to our society, like a major city (an extraordinarily large one) - in fact, anything not integrated into the Internet is on the fringe, like a business without a website. The idea of a harmless Internet has been antiquated for a long time; it is a serious place of serious money, serious criminals, and serious political actors.
Yet we still don't have serious law or law enforcement, not as an oppressive force but in the tradition of free, open societies. It would be like New York or Tokyo without law or law enforcement. We should create in the federal government (not state governments, given the Internet's borderless nature) a major domestic law enforcement agency, on the scale of the FBI, to protect people and enforce laws; I suspect we need a major addition to or revision of our legal code to go with it. That is how we deal with crime in other parts of society; the Internet is no different. We need divisions dealing with theft, fraud, destruction of property, invasions (hacking), etc. It's long past time to stop applying the antiquated notions to the current reality. Why do we accept this Wild West chaos? It no longer fuels creativity and growth; it greatly hampers it.
Why a new agency? This is already very much within the FBI’s jurisdiction. Why is the international surveillance of U.S. journalists and their sources not visibly a priority? In my opinion it’s a matter of policy. This comes from the top down.
Bringing justice to international actors opposing democratic ethics is regrettably less of a priority today than enforcing highly publicized and politicized criminal cases.
IIRC, within the FBI’s jurisdiction and international don't go together. Isn't the FBI restricted to operating nationally only?
But to answer your question more fully, you can't solve this problem without supranational cooperation. A "police force" working to safeguard the Internet would have to work under authority of the UN, not any single nation.
The UN isn't a government; it has no real legal authority (international 'law' is something different), no law enforcement. It has no legitimacy - who elected them?
It's an association of governments, where they get together and organize things. All the power is in the individual governments. There are some grey areas and exceptions, but overwhelmingly the above is the case.
The UN could coordinate cybercrime law and national agencies.
The U.S has an MLAT with Israel and routinely extradites. If the crime was committed on U.S. soil (intrusion, conspiracy), my understanding is that it is within U.S. jurisdiction.
> Why a new agency? This is already very much within the FBI’s jurisdiction.
Based on an estimate of the design of organizations: Sometimes you expand an existing function within an organization, sometimes you add a sub-organization (e.g., a division), sometimes you create a new organization. Which, when, and why? Standard CEO fare. A couple of basic considerations off the top of my head:
Organizations have priorities. As one example, the story (I can't promise perfect details here) is that the US Air Force has always had the priority of pilots - it's run by pilots, they are glorified - strategic bombers and air superiority (air-to-air) fighter planes. Tasked also with providing close air support for ground soldiers, drones for surveillance, and orbital operations, they don't quite get around to those needs: They want bombers and air superiority fighters, flown by pilots, so that's what gets attention, that's what they invest in researching, developing, and buying - F-35's, B-21's, etc. (name a high-price uber-tech platform they've built for close air support, surveillance, or space). For close air support, they insist the F-35 will do it well enough as a secondary function, and want to cut other options - 'well enough' is not the language of priority. It's a constant battle to get them to deliver on these other needs. Partly for that reason, the Marines provide their own air support and the Army has helicopters - they have different priorities than the Air Force - and the US created a separate Space Force.
Organizations also have competencies, which affects the expertise of leaders, the acquired deep organizational knowledge, the asset investments, the organizational structure, and the culture - systems engineers have a different culture than movie actors. If the people in the executive meeting know storage but not networking, you can imagine the results for the networking function. Consider recruiting, training, mentoring, and promotion for networking personnel. Just consider office locations, which will be near the storage talent and facilities, but not near the Internet exchange and networking talent hotbed.
The FBI's priority has been terrorism. Catching domestic terrorists seems much different than investigating cybercrime. The FBI leaders have little expertise in the latter; the entire organization is built around the former. The agent training and skills needed for cybercrime and terrorism seem completely different, the assets needed seem completely different (field offices versus high-performance, highly secure computing centers). I would guess the culture would be very different, with cybercrime placing a very high priority on intellectual ability seated in a room, not interpersonal skill (interviews, etc.), tactical decisions, and physical action around the world. My impression is that a different agency, or at least a major FBI division that reports directly to the top, is needed.
You forget that the internet is post nationalism. Borders no longer exist and your domestic agency limited to the USA would be worthless. Or worse, serve as a pawn in the hands of Big Tech.
Legislation holding companies liable for breaches and leaks that were within their ability to prevent. Simple and fair, scales well. No downsides.
Sure, not everything is always their fault, but usually it is, and it starts with yoloing from the first line of code, shipping alph… proof-of-concept software, or outsourcing their network’s security to MS Word. If a breach could ruin a company beyond its reputation, people might stop storing cleartext credentials or testing merely their app’s UI at best; if a hacker could stop your show, companies might take bug bounty programs seriously and be grateful for disclosures instead of filing reports when someone edit-and-resend’ed a web API call and accidentally got a copy of their database.
Today, a breach has zero consequences. Why would you spend a shitton of money on security, when marketing’s budget isn’t downright ridiculous yet?
And of course it would be super helpful, if governments would stop encouraging insecurity by buying e.g. NSO’s products for what they do. Always awkward persecuting someone you depend on… The NSO’s business should be straight illegal, including export/import. Since hacking someone without their consent usually comes with the ability to tamper with evidence, it’s really questionable for law enforcement and straight unethical for anyone else. Just kill the whole sector IMO.
It depends on what your threat model is. If it's individuals, local law enforcement, or even national law enforcement (context dependent) you are trying to hide from, you can obtain phones with cash and make it very difficult to link them to you (use a sim card bought with cash and never give out that number, use a VOIP service for your primary number, use an OS that doesn't send back much telemetry, turn off location, never use the phone near your home, etc).
If your threat model includes targeted attack by a major intelligence agency, just accept that you are likely screwed.
I was recently asked how to make an anonymous post to a local news organization where all they wanted to do was hide their IP. I said if their only worry is the news organization then a VPN would be enough... Now that I'm reading your comment I'm having second thoughts whether it was right.
Varies by country I’m sure, but I was surprised how difficult it was to buy a SIM in Indonesia and Malaysia without an ID. Even little shops wanted an ID or passport number to type in to activate it.
>use a sim card bought with cash and never give out that number
This is near impossible now. I tried a few years ago to get an anonymous phone to activate an anonymous twitter account and you have to provide too much information to activate the sim card across the major providers and other companies that use their infrastructure.
It depends where you are. Some countries require ID, some don't. Last I bought one in New Zealand it didn't require anything, but I know Australia does.
That might be true now, but very soon you won't be able to buy a SIM without AML/KYC laws kicking in. We will be living in a complete surveillance state in our lifetimes.
This depends on your threat model (what is illegal, who chooses to prosecute, etc)
I was driving home today and the satnav warned us about driving over the speed limit (74 mph on a UK motorway). OK. But the solution to that is technology, and organisation. There are speed cameras on this road, but most of the time they don't take images or trigger an action. If every road camera triggered a warning / fine on every violation, speeding would stop in a few months.
Is that something socially beneficial ? Probably. Would it be disruptive and cause great anger and political resentment? Yes.
That is one tiny example, but I think that pretty much every criminal act can be detected with technology. It's going to become a question of which ones we care enough about to prosecute and which we give up on and decriminalise.
Or governments will continue to have those laws on the books and prosecute them with discretion (which is what happens today). It is very convenient for those in power when every person is already guilty of something.
That is the point: a "free and fair" society will prosecute everyone equally. And the impact of that will either make us all modify our behaviour so we are not all guilty, or modify the laws so we are not all guilty.
If society is not free or fair, that's the problem to fix first.
Any phone's location and call history will effectively identify it.
Location can be determined with sufficient accuracy for this purpose from cell-tower connections, more so as 5G, with its greater tower density and shorter range, is rolled out.
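A rough sketch of the idea: with distance estimates to a few towers at known positions, a position fix follows from basic trilateration. The tower coordinates and ranges below are made up, and real estimates (from signal timing and strength) are far noisier, but the geometry is the same.

```python
# Toy trilateration: given three tower positions and distances to each,
# solve for (x, y) by linearizing the circle equations (subtracting one
# equation from the others yields a 2x2 linear system; Cramer's rule).

def trilaterate(towers, dists):
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# Phone actually at (3, 4); distances follow from Pythagoras.
dists = [5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5]
print(trilaterate(towers, dists))  # approximately (3.0, 4.0)
```

Denser 5G cells shrink the distances involved, which is why the fix gets tighter as 5G rolls out.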
They specified using a VOIP number, so there are no calls associated with the phone by the cellular service provider. Wouldn't the attacker need access to both the VOIP service, to obtain the IP address, and the cellular provider, to link the IP address to a device and obtain the location?
If you add a VPN to the stack, the VOIP service doesn't know your IP (though I wonder if a VOIP service would work well through a VPN, due to added latency).
If you're making VOIP calls over a device that is itself connected to mobile networks ... you've still got the connectivity of the device itself to track. Presumably that's a long-lived relationship. At this point the information is limited to location data, but that, at the postal-code level is again sufficient to identify 90% of individuals within the US, based largely on residential and workplace locations.
The notion of having short-lived individually-attributable 5G connection history, perhaps through a dongle- or tether-swapping system, in which many individuals utilise devices for a short period of time, might work. With a sufficient budget, disposable devices might also be an option. (As the cost of SBCs / SOCs falls through $0.10/device, the disposable option might be tractable, leaving SIM card provisioning as the bottleneck.)
The tether is connected over WiFi (the MAC address space is already repetitive, and MAC addresses can be arbitrarily changed at the OS kernel level), giving a two-stage connection to the actual mobile network itself. Frequently relocating (via a swap) or short-lived / previously unknown tethers, identified through the IMEI that is required for mobile connections to work, would still be possible, but at a much greater workload. (I'm very sketchy on how 5G identifies specific devices; take what I'm saying here with a few kilos of salt.)
I'd still have concerns with a VOIP device that itself has access to information and computing capabilities, but at least the degree of tracking that's possible over a PSTN direct-dialed mobile handset on a 4G/5G network would be sharply reduced. Other threat vectors remain.
Burner phones on a one-use / short-use cycle would probably be preferable.
Thanks ... To emphasize a point that you seem to imply, the goal of security is to raise the costs of the attacker; anything can be defeated, of course.
If by "two problems" you mean that VOIP adds an additional problem, I don't quite grok it. It isn't a panacea, as you point out, but seems like a clear improvement.
Another advantage of VOIP is that you can easily obtain throwaway phone numbers.
> If you're making VOIP calls over a device that is itself connected to mobile networks ... you've still got the connectivity of the device itself to track. Presumably that's a long-lived relationship. At this point the information is limited to location data, but that, at the postal-code level is again sufficient to identify 90% of individuals within the US, based largely on residential and workplace locations.
Good point. They still don't know who I talk to and when, but they certainly can figure out who I am. I wonder how expensive the latter is, which I'd guess depends on whether that analysis and the sharing of it is done automatically or takes a special request.
> The tether is connected over WiFi
I'm not sure that helps privacy: Wifi networks are likely shorter range than 5G cells, and the networks are well mapped. I suppose it does require involvement of someone with the map, but that might be easy to obtain.
> the MAC address space is already repetitive, and MAC addresses can be arbitrarily changed at the OS kernel level
I think iOS and Android randomize MAC addresses these days?
> Burner phones on a one-use / short-use cycle would probably be preferable.
Yes, but a single burner phone, between the hardware and a one month plan, can cost $75-100. Using lots of them is out of reach for many people.
The "two problems" is an additional attack surface --- the cellular network tether, which by design and function leaks subscriber-linked information without any compromise necessary, and the VOIP device itself, which continues to be susceptible to its own attacks leaking information, including contacts, call data, messaging data and metadata, email, browser history, and its own location history through both WiFi connections and in all probability, GPS-based location.
On connecting to the tether over WiFi, the advantages over cellular data or Bluetooth are that a WiFi identity (MAC address, SSID) can be arbitrarily changed, and in fact is in consumer-grade hardware (yes, iOS uses a distinct MAC per connected network AFAIU; not positive about Android). This could be modified on every network connection, or even within a single session (requiring periodic reconnects). Other means of specific host identification via TCP/IP and 802.11 protocols are fairly limited.
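The randomization iOS and Android perform boils down to generating a "locally administered" MAC. A small sketch of that generation step (the OS-level assignment itself is platform-specific and not shown): setting bit 1 of the first octet marks the address as locally administered, and clearing bit 0 keeps it unicast.

```python
# Generate a random locally-administered, unicast MAC address, the kind
# of per-network identity consumer devices hand out instead of the
# hardware-burned address.

import random

def random_mac():
    octets = [random.randrange(256) for _ in range(6)]
    # Set the locally-administered bit, clear the multicast bit.
    octets[0] = (octets[0] | 0b10) & 0b11111110
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # e.g. "3a:1f:9c:02:e4:7b"
```

Because the locally-administered space is huge and shared by every randomizing device, any individual address reveals essentially nothing about the hardware behind it.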
On increasing workload, much surveillance is done via mass-produced hardware and software, and targets frequently-encountered devices (e.g., stock mobile phones, iOS, and Android systems). Adopting measures and methods other than these ... leaves a signature, but also means that specific new surveillance methods need to be devised for a specific target.
Also: in case anyone mistakes me for an expert on this area, I'm not. I've general familiarity with methods, techniques, protocols, devices, and operating systems.
You can't. It's all marketing fluff at this point, because significant enough state actors will see the ~$10,000,000 R&D cost for a few iOS/Android zero-days as a drop in the bucket. We live in a post-security world, where it's economically feasible to develop malware at a pace that outruns Blue Teams. We live in a post-privacy world because Apple and Google happily pass your data back to world governments in the name of stopping terrorism, or whatever the social cause du-jour is.
There's no escape really, your only option is to embrace the paranoia and learn to love the cat-and-mouse game, or (what most people choose) give up. Remember, this is the future you voted for when you signed up for Google Drive and bought your iPhone. This is the future you willingly supported with each ad that YouTube showed you on movie night, and the one you opted-into when you noticed you were low on popcorn and got 2-day delivery on kernels from Amazon.
> We live in a post-privacy world because Apple and Google happily pass your data back to world governments in the name of stopping terrorism, or whatever the social cause du-jour is.
To illustrate this point, Apple gives up users' data for about 150,000 users/accounts in the US[1] a year in response to government data requests.
What can a company do when presented with a legal, legitimate warrant? We talk like Apple in this example has a choice to say “no”: they don’t, though.
Apple gives up customers' data when presented with simple data requests. Not all of the data they gave up was in response to subpoenas or warrants. Most tech companies have portals for law enforcement to simply ask, without a warrant, for users' data, and the companies often voluntarily share the requested data without any coercion from courts.
Yes, Apple is no different than any other tech company in that regard. The difference is that Apple's PR tells you otherwise. The whole San Bernardino shooting case had many people on HN saying that it meant that Apple would refuse to work with law enforcement when law enforcement would ask for users' data, even to the point of challenging subpoenas and warrants in court. That is clearly not the case.
No company should, but we don't live in that world (for a number of reasons), and as long as companies operate here, they must follow instructions given by the government via warrants.
One of the issues I have is that those warrants are rubber-stamped. We should change how the judiciary approaches them and raise the bar law enforcement has to meet to request that data, while also encouraging the use of encryption at every level.
Until we make those changes (and I'm of the opinion that wider society has no appetite for the legal and usability trade-offs they'd bring, even if I personally do), I guess I'm confused by what we're demanding when we point out that a company based in the US cooperates with valid legal requests from the US government.
We're already in a zero-sum game; society won't 'like' any of the things that come next. What goes up must come down, and I get the feeling that we grossly underestimate how much data is in the hands of third parties, private holdings and sovereign governments with their own interests. That dam will burst; it's just easier to tell the audience that the Internet is a Titanic: too big to fail, right?
The sentiment towards Apple today is just disappointment. Their 'ecosystem' approach has had detrimental effects on the consumer electronics market and has given them a frightening amount of power over the flow of information. Apple's conduct has been generally irresponsible, too: they do nothing to assure the general public about what's actually running in their OSes, instead occasionally throwing out whitepapers describing how these systems might work, with no way for us to verify any of it. For a company that claims 'privacy is a human right', I was really hoping to dig into something more profound.
The biggest issue is that we're taking Apple at face value. They're documented liars, and their insistence on being right contrasts with their tacit rejection of transparency. They operate without accountability, and the only people keeping them in check have a mutual interest in creating a monopoly. Their factories are staffed by political prisoners, and they're the only FAANG company that's comfortable operating inside China. It's crazy how people forget all this with nothing more than a little marketing and some diversity in an iPhone commercial.
I expect better from a company with more money than anyone else in the world. But maybe this is yet another reminder that shareholders don't care about your security, privacy or peace of mind.
Going after NSO is far from useless.
These guys make hundreds of millions, which gives them the power to subvert and influence politicians, so criminalizing this sort of surveillance will be impossible.
Once NSO employees and founders are held responsible for the damage they do and the lives they ruin, you'll see much less talent go to work there or establish new companies of the same sort.
Same way the mafia used to do it when they realized all their phones and cars were bugged. No technology. Talk in person, outside.
Seriously, if you are a journalist investigating anything that might upset the powers that be in a nation-state, don't use any online technology, and for God's sake not a mobile phone.
A proper bug bounty program facilitates that; however, it seems that Apple has mismanaged theirs to the point that it drives frustrated researchers to not report their findings to Apple.
By valuing it. Apple's annual revenue is more than the entire government budget of Saudi Arabia. That's a pretty meaningless comparison, but certainly gives an idea of the scale. There's asymmetry in security, but only one side is trying right now.
I don't think you can escape the use of the smartphone. But you can treat it as throwaway, as not really your device, etc.
I think the original landlines, which were/are a few switches connected to a wire on one side and some microphones on the other, were close to inherently insecure. Phones have never been "your device", whereas a laptop might, maybe, be rendered trustworthy.
I think there should always be physical off-switches for microphones: it should be possible to know that the thing is not listening. But smartphones also have other private information on them than what can be captured by its microphone.
I don't think the smartphone is inescapable at all, and I don't think any of the conveniences it offers is worth surrendering one's privacy. But there is a tendency in businesses to ignore the fact that some potential customers do not have smartphones. I wonder if legislation against this might be possible.
I recently had to file an insurance claim with my car insurer. The entire process happened through their app. They require you to send them pictures that you took using their app.
One of my banks has been closing branches left and right, and if I want to use my accounts for anything other than debit purchases, I need to use the app. Some banks even charge you when you go to a branch location in person and use a teller to access your accounts.
Some jobs require you to install and use apps on your phone. Last time I was at a big box retailer, the floor staff had the company's app installed on their phones so they could do instant price lookups and confirm discounts on their store's inventory.
Even just applying for a job requires an internet browser, and many people's only access to the internet is through their phone.
Pretty much all those things you can do over old channels still. The app is optional. I can do everything my mobile banking website can do over the phone. The last time I filed a claim with my insurer, everything happened via back and forth emailing.
Preferably, don't take those jobs. Or, if you must, tape the cameras and remove the microphone. Use a wired headset for talking and unplug it when not in use. When not on the job, turn it off and wrap it in aluminium foil.
This is the only solution, and one with very minimal downside. In fact, within a year society would be so greatly improved, we’d look back in horror at the current state of affairs and wonder how we’d all gone so mad in the first place.
Not all, but many restaurants in multiple cities. They use QR codes, no doubt to identify you better (tie you to a specific place and time, maybe to a specific table). Usually I just load the restaurant's website on my phone and read the menu that way.
I was also at a play where a QR code was the only way to get the program.
What exactly are you suggesting the QR code is doing? My phone shows me the URL encoded by the QR code before opening, and I've never seen one with any additional information in the URL. They're not dynamically generating QR codes for you...
The static URL encoded by the QR code funnels you to a web page where that page view can be reported back to trackers and incorporated into your advertising profile.
Using your device to read the menu puts your device in the loop where formerly it was not.
Sure, if I suspend disbelief and assume that no other search engines or navigation services were used that do similar tracking—but the GP was specifically calling out QR codes, and they use the website anyway.
You don't have to suspend disbelief to come up with such a scenario. When I go to the bar down the street from my apartment, order food and a drink, pay cash and then leave, it was not an interaction that was likely to become part of my advertising profile. Now it is.
It's not comparing websites accessed via QR against every other already tracked thing in society, it's comparing it with laminated pieces of paper.
Have you used them at restaurants? I've avoided it, so I don't know.
I didn't mean they generate QR codes dynamically. It wouldn't be hard at all to encode the table number, for example, and then of course they have the time and know your reservation, and thus can identify their customer's phone.
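To make that concrete, here's a minimal sketch in Python of what the server behind a static, per-table QR URL can record the instant the code is scanned. The domain and parameter names are entirely hypothetical; the point is that no dynamic QR generation is needed.

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

# Hypothetical menu URL as it might be printed on one table's QR card.
menu_url = "https://menu.example-restaurant.com/?venue=42&table=7"

def log_scan(url, client_ip):
    """What the restaurant's site (or the trackers it embeds) can
    record for a single scan of a static, per-table QR code."""
    query = parse_qs(urlparse(url).query)
    return {
        "venue": query["venue"][0],
        "table": query["table"][0],
        "ip": client_ip,  # links the visit to a device/household
        "time": datetime.now(timezone.utc).isoformat(),
    }

record = log_scan(menu_url, "203.0.113.5")
print(record["venue"], record["table"])  # -> 42 7
```

Printing a different static URL on each table's card is enough to tie a page view (and whatever tracker scripts load alongside it) to a specific place, table, and time, where a laminated paper menu would have recorded nothing.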
Not quite the same thing, but I had to head home early one night recently because my phone had died (which almost never happens to me), so I couldn't show proof of vaccination. I would guess that a person's phone dying while bar-hopping is much more common than it dying while dining alone (since when dining with others, you can just look at their menus).
You're not wrong that this fundamentally excludes those who don't have (powered-up) smartphones. But it's not like restaurants and bars had the luxury of thinking through and choosing to have these effective new smartphone requirements: they adapted to Covid for their survival, and the odd case who got unlucky with a dead phone is just collateral damage.
Well yes, the point of this subthread is not that it's impossible to do this well, but that bars/restaurants don't seem to care whether they're excluding the occasional unlucky customer. I'm in New York City and not a single person has scanned my vax QR code: they just see a vaguely official looking app with a name that matches my ID, or a physical vaccination card (conveniently sized to not fit in any wallets).
Also, in your case, is this a federal system or a state by state one? If the latter, this sounds way worse than what NY is doing; with my CA vaccination I couldn't get in anywhere.
Do these types of iMessage attachment exploits require the victim to do anything on their end? Downloading the attachment? Opening the message? That part is unclear to me.
An open source cellular modem firmware is long overdue, but there's no government on Earth that would be keen to allow it; the best we have is 2G/3G code that has been illegally leaked and reverse engineered.
The modem firmwares might be old and hairy, but is there any evidence that they have been used to actually compromise phones? All of the investigations that I can recall reading have been exploits in the phone OS application code.
Are other messaging apps on iOS ever getting RCE exploits like this? Can’t they sandbox iMessage so this isn’t possible no matter how many bugs the app has?
Ironic that Apple limiting their apps in the same way they limit 3p apps would've likely solved this vulnerability, unless the attack was only "0-click access to full chat.db"
They do limit their own apps (they even specifically sandboxed part of the iMessage handling, more than a standard app). The exploit chains that NSO uses include sandbox escapes.
Except there are tons of examples of iOS sandbox escapes over the last few years. I definitely don't consider iOS sandboxing a security control at this point.
It's possible other apps are getting exploits, but those are less valuable since they're not installed by default.
As it stands, the most recently published information about the exploits were in the image parsers. So any app that used the default image parsers may have been affected, but might not have the same ability to escalate the exploit via other exploits. Plus you get back to the lack of ubiquity of the app, and the difficulty in targeting.
It's actually not persistent; AIUI Pegasus these days is designed to be ephemeral to avoid forensic analysis. If you reboot your phone it's gone (but they can just own you again with another message). Of course, most people don't reboot their phones very often.
My security researcher buddy at Apple, responsible for investigating this vulnerability, told me that the hack is very complex: Apple couldn't even fully figure it out before pushing patches, and the patches do not fix all the known bugs used in the attack; the attackers most likely have access to Apple internal source code as well. They are very thankful for Citizen Lab, without which the bugs wouldn't have been discovered. Also, there are likely many more compromised phones out there, and Apple is kind of scratching their heads over how to fix, or even detect, it. How do you fix a vulnerability that's secret and that no one knows is actively exploited?
So, what is the legality of this? I've not followed much about this at all, but NSO group appears to be an Israeli company.
Do they just sell, or operate the hacking software for their clients? If they operate it, is it illegal for an Israeli company to hack an American citizen (I assume it is illegal in America, but how about Israel?)
Is the sale of hacking software regulated in any way?
> (I assume it is illegal in America, but how about Israel?)
This part doesn't matter much in practicality. Like it is illegal for the US gov't to spy on their citizens. It is illegal for the UK to spy on their citizens. So the NSA made a deal with the UK. They spy on us, we spy on them, and exchange the info. There, the US didn't break the law and neither did the UK. They worked around it.
> Like it is illegal for the US gov't to spy on their citizens. It is illegal for the UK to spy on their citizens. So the NSA made a deal with the UK. They spy on us, we spy on them, and exchange the info. There, the US didn't break the law and neither did the UK.
Let’s not mince words, this is officials of the United States of America conspiring with foreign hostile [0] powers to undermine the rights and security of the American public. It’s treason, and an incoming president with the stones required could arrest much of the former administration’s “intelligence community” leadership in midnight raids via the insurrection act.
[0] Foreign intelligence services are, by design, hostile powers even if they belong to an ally. The UK is an ally, but GCHQ is a hostile agency from the perspective of the United States public which these agencies supposedly serve.
> and an incoming president with the stones required could arrest much of the former administration’s “intelligence community” leadership in midnight raids via the insurrection act.
Sure they could, but they won't. No president will, if for no other reason than out of fear that the next one from the opposite party will do the same to their administration. Unless they outright shoot someone in front of witnesses, I don't expect this ever to happen, regardless of the level of corruption.
Allied intelligence services are not enemies within the scope of the Constitutional (or any sane, for that matter) definition of treason.
Nor would the Insurrection Act be in any way needed or relevant to arresting former (or current) intelligence officials for either actual treason, or any illegal conspiracy with allied intelligence services regarding surveillance.
But then we'd have to get into other messy things, like US Chairman of the Joint Chiefs of Staff Mark Milley bypassing the president's constitutional executive authority to launch a nuclear attack. Of course it's framed as him saving humanity, but at its core you had a treasonous act.
> Chairman of the Joint Chiefs of Staff Mark Milley took steps to prevent then-President Donald Trump from misusing the country's nuclear arsenal during the last month of his presidency, according to a new book by The Washington Post's Bob Woodward and Robert Costa obtained by NBC News.
> The book, set to be released Sept. 21, also recounted a phone conversation Milley had with House Speaker Nancy Pelosi after the Jan. 6 violence at the Capitol, which Pelosi blamed on an "unhinged" Trump. Pelosi said in January that she spoke to Milley about "preventing an unstable president from initiating military hostilities or accessing the launch codes and ordering a nuclear strike."
> "I can guarantee you, you can take it to the bank, that there'll be, that the nuclear triggers are secure and we're not going to do — we're not going to allow anything crazy, illegal, immoral or unethical to happen," Milley told her, according to a transcript of the call obtained by the authors.
> "The president alone can order the use of nuclear weapons. But he doesn't make the decision alone. One person can order it, several people have to launch it," he said later in the conversation.
> After the call, Milley summoned senior officers from the National Military Command Center to go over the procedures for launching nuclear weapons, the book said. He told the officers that if they got a call, "you do the procedure. You do the process. And I'm part of that procedure," he said — making sure he was in the loop on any planned military actions, the book said.
The lines are a bit blurrier here than you might think. Soldiers are required to disobey illegal orders. Congress declares war. But the President has the right to respond militarily in an emergency - war does not wait on committees. If Trump had said that e.g. NK had attacked, and ordered that NK be nuked, but NK had not actually attacked (i.e. there was not actually an emergency), that would have been an illegal order, which soldiers would have been legally bound to disobey.
It would be treason for Milley to countermand a legal order, but asking for key servicemen to review the details of an admittedly complicated bit of military law and to prepare themselves for exactly what decision they might need to make in realtime - nothing illegal about that.
I'm not sure if this counts as spying but in UK they are allowed to monitor people to quite a large extent if I got this right. I think refusing to decrypt your device when requested is also punishable.
Indeed, I don't think the UK government has any problem hacking phones if they believe this is required, but it's likely to involve disclosure to the courts, and hence in some way be accountable.
It's therefore easier to get a friendly government to do the hacking and to pass on the discovered info, which side-steps any legal accountability.
Yeah the whole "unwritten constitution" thing is very laughable, if we're honest. At least the US Federal gov has to pretend to care about the 4th and 5th amendments.
If we're honest, the US is held together by a few pieces of paper written by a bunch of men 250 years ago that weren't meant to last 20 years and have since become the sparring ground of lawyers who have, no doubt, twisted them beyond recognition.
US politics is bizarre. When Ben and Jerry's ice creams decided they wanted to close shop in some disputed territories in Israel many states (mostly Republican run) punished the parent company immediately to make an example of them and sold their holdings of its stock from the pension funds they were controlling.
Yet a company like NSO weaponizes and abuses all sorts of vulnerabilities they get their hands on and sell it to thugs around the world who then use it against Americans and the same politicians couldn't care less
My understanding is they sell it, after the Israeli government (the Israeli Defense Ministry) vets the sale. It is operated by the client. NSO has claimed they do not have any info on the targets chosen by the purchaser, and have no way to find out post-sale.
A German article[0] claimed that only a hash value of the telephone number is transmitted to NSO Group:
> "Das BKA hat nach Angaben der stellvertretenden Behördenleiterin sichergestellt, dass keine sensiblen Daten bei der Firma NSO landen würden. So würden Hashwerte für Telefonnummern vergeben, damit das Unternehmen die Zielpersonen nicht identifizieren könne."
They claim that this way the NSO Group would not be able to identify the victims. Obviously that is a fat lie, as a phone number hash can trivially be brute-forced, even on a home PC.
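To illustrate how thin that "pseudonymization" is, here's a rough Python sketch. The actual hash scheme isn't public, so plain SHA-256 stands in, and the phone number is made up; the point is only the size of the search space. An attacker who knows the likely country code and carrier prefix just enumerates the remaining digits:

```python
import hashlib

def hash_number(number):
    # The real scheme isn't public; SHA-256 is assumed for illustration.
    return hashlib.sha256(number.encode()).hexdigest()

def brute_force(target_hash, prefix, digits):
    """Try every number with the given prefix and suffix length."""
    for i in range(10 ** digits):
        candidate = prefix + str(i).zfill(digits)
        if hash_number(candidate) == target_hash:
            return candidate
    return None

# A hypothetical "pseudonymized" number handed over as a hash:
target = hash_number("+4915123456789")

# Knowing the country code and carrier prefix shrinks the search to the
# remaining digits. Even a full 10-digit national numbering plan is only
# ~10^10 hashes, which is hours of work on a single desktop CPU.
print(brute_force(target, "+49151234", 5))  # -> +4915123456789
```

Salting wouldn't help much either, since the same enumeration works per salt; only keyed hashing with a key NSO never sees would actually prevent re-identification.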
From the sounds of it, NSO Group does not give out the zeroday exploits, but rather do the dirty work of exploiting/infecting the victim themselves, and then hand over control. But the writing is pretty vague.
Seeing all these democratic countries, including my home country support this kind of stuff by buying their malware, is extremely disheartening to me, when there is clear evidence that it is being misused by authoritarian governments. It also makes me feel powerless.
So I don't see how a government hiring someone to hack someone else is not complicit.
Unless that government branch had the legal right to execute that hack. Because if they were legally able to but unable to do it themselves, it makes sense to hire someone to do the job for them (if that is legal?).
I am quite in awe of how exploit brokers like Zerodium and Thaddeus Grugq, for example, are allowed to sell their services to oppressive regimes and get away with it (a clear case of moral bankruptcy). These are powerful weapons and should be treated as such (export-controlled, etc.).
Right. If the US can file charges against Assange and attempt to have him extradited to face them, it should be able to do the same with the NSO Group principals.
US DOJ has notably secured convictions against spyware authors that simply sold the software. There is no legal distinction between "dude in his apartment" versus "multinational Israeli defense contractor" in this regard.
The US government is also one of the largest customers of ICBMs. And you would find that that is not an unregulated field at all.
I don’t know about the regulations in the field. All I know is that "the US gov buys a lot of X, therefore X is not regulated" is not a convincing argument.
Fair play. I'm thinking more from the perspective that the lack of regulation in the space makes it much easier to acquire and hoard zero-days at a government scale, as we saw with the Vault 7 leaks. Since the government is incentivized to hoard vulnerabilities for intelligence gathering, it's hard to see them being so willing to give it up.
The government’s ability to prevent software distribution is limited significantly by the first amendment. Selling “a tool for hacking” is fine, selling “a tool for committing crime” would be illegal, but that distinction just comes down to picking the right marketing copy. The government can however restrict the import and export of software quite broadly.
>The government’s ability to prevent software distribution is limited significantly by the first amendment. Selling “a tool for hacking” is fine, selling “a tool for committing crime” would be illegal, but that distinction just comes down to picking the right marketing copy. The government can however restrict the import and export of software quite broadly.
An interesting point. Given the vendor and customers for NSO's products, Federal law (in the US) would apply, rather than state law.
That said, an interesting parallel would be possession of burglary tools[0], which is a crime in many places in the US. However, given that "burglary tools" are generally just tools (e.g., bolt cutters), intent or mens rea[2] becomes important.
Presumably, a similar argument could be made about tools like nmap, nc, ettercap, metasploit, etc., since they can be used for legitimate purposes, even though they're also used for site intrusions/compromises.
NSO's tools, presumably, are mostly used for the latter rather than the former. I'm guessing (IANAL) that's one of the rationales used to restrict sales/exports.
Is that a convincing argument to criminalize activity and saddle it with strict liability[1]? I'm not so sure, but I'm also not a DOJ lawyer.
All that said, I don't think it's all just "marketing copy." As with most things, context and nuance matter. I make no judgement WRT the appropriateness of such restrictions, as I'm not in possession of all the facts.
Even so, while I tend toward the free flow of information, there is something to the idea that if you're caught at the back door of a jewelry store late at night with bolt cutters, that implies mens rea much more than having bolt cutters in the toolbox in your garage. YMMV.
The sales pitch is basically the only thing that can make it illegal, because it’s illegal to knowingly do anything for the purpose of assisting somebody else commit a crime. That’s why it would be illegal to have a “burglary tools” section at a hardware store, even if the only difference between that and any other hardware store is the words on a sign.
Even with regards to restricting imports, the government is largely limited to sanctioning particular actors involved in the transaction.
I’m really a bit surprised that this isn’t more widely understood on HN. Anybody who operated a web server in the 90s is likely to know about Bernstein vs DoJ, and even if you operate one today you’re still likely to encounter the idea of an “export cipher”.
>The sales pitch is basically the only thing that can make it illegal, because it’s illegal to knowingly do anything for the purpose of assisting somebody else commit a crime.
I misunderstood your point. I (mistakenly) thought that your reference to "marketing copy" related to the US Government's justification of restrictions on tech exports, not NSO's sales pitches.
Well, and the first amendment only matters if someone gets it to court, the court accepts it (not easy to get against the federal gov’t), the evidence or the plaintiff doesn’t ‘disappear’ in the mean time, etc.
Depends on what kind of court you’re talking about. If you’re talking about civil court, then any software (or anything at all really) can be used for committing a tort.
If the government tried to ban certain types of software from being made/distributed, they would either make a law that’s never enforced (like the obviously unconstitutional DMCA anti-circumvention law), or a law that’s immediately struck down by the courts.
I’m not sure that regulation really applies much when you operate at that level. How many countries has the US waged war on with the combatants on both sides using US-made weapons?
The scene has been set again in Afghanistan. It isn’t ICBMs but it’s not a virtuous circle when you are dealing with weaponry.
Hacking is a crime in both Israel and the US. If either government wanted to they could pursue charges. Selling exploits is not illegal in either country AFAIK, and is in fact a booming business.
Hacking someone in Israel is the crime right? Same as in the US it’s a crime to hack someone in the US? If the US group hacks someone in Israel and the Israeli group hacks someone in the US, then they’re fine as long as they don’t vacation in the country while someone is looking to serve them with a warrant?
But they also have to meet a relatively high bar internationally, and meeting that bar requires evidence gathering that is nearly impossible in the situations we are describing.
No US law enforcement is going to honor an Israeli subpoena, I believe, and vice versa.
Since this is an international issue and there's no global "legality", the effect is that locations matter a lot.
Presumably, the hacking was done by Saudi authorities from SA, using NSO-developed tools. Citizenship of the target is not very relevant, but it does matter where "the event" happened.
If the reporter was in Saudi Arabia when the hack happened, then Saudi laws apply and essentially Saudi government gets to set conditions on whether it was legal or not, and if it was forbidden by their laws, then what consequences (if any!) that should have.
If the reporter was in USA at the time, then it would be reasonable to apply US jurisdiction and try and investigate it as a crime in USA. However, Saudi Arabia can refuse to cooperate and even if USA prosecutors identify the culprits and convict them, Saudi Arabia can refuse to extradite them and choose to protect them. In essence, if it's not a random foreign criminal but someone from the actual foreign government that has harmed USA citizens in USA, it's not really a criminal matter as much as a diplomatic one, where all the other aspects of USA-Saudi relationships matter much more than any facts about the actual case; USA can choose to make a big deal out of it or ignore it, but historical precedent shows that it likely will be ignored as the Department of State considers all the other factors of Middle Eastern politics as much more important, SA could likely get away with literal murder (e.g. Khashoggi), not just some hacks.
In a similar manner, perhaps you could argue that NSO is an accomplice in that crime (I'm not saying that this would succeed - in general, arms exporters are not considered liable for whoever the purchasing country harms), but that essentially comes down to (a) whether USA prosecutors are willing to pursue this, and (b) whether Israel is willing to cooperate, as in the absence of specific treaties it would be legitimate for Israel to say "NSO did not violate our laws, we won't enforce any foreign judgements about this event"; if so, then any action would be limited to seizing whatever assets NSO has in USA (if any!) and/or trying to capture the involved people (if specific people can be identified) when they are traveling outside of Israel somewhere within the reach of USA. USA could apply diplomatic pressure to get Israel to restrict NSO, however, it doesn't seem likely that USA wants it so much to actually try and change that.
For another of your questions: the sale of hacking software can be regulated by countries in whatever way each country wishes. In this case, as far as I understand, Israel treats it as essentially the equivalent of an "arms export", where NSO has to obtain approval from the Israeli government for their foreign customers; but in this case it is not contested that NSO did have all the required approvals to sell their tools to Saudi Arabia.
There are allegations that the NSO Group doesn't provide the 0days they're using to their customers, so they are in fact performing the intrusions themselves.
What exactly do you mean by "treated like mercenaries"? What should the treatment be, in your opinion?
In general, countries do hire mercenaries/private military contractors/etc., and it is not considered anything special; many powerful countries (including e.g. the USA) routinely use mercenaries in their campaigns. The "sending" nation may restrict its people and companies from mercenary actions abroad if it chooses to, but if e.g. Israel is okay with its company hiring out as a "mercenary" (the term usually implies directly participating in a conflict while armed and excludes other support such as training, logistics, software, etc., but for the sake of argument let's assume it applies here) for Saudi Arabia, then there would be nothing unusual about that - for example, Saudi Arabia has used thousands of mercenaries in Yemen.
If the specific individuals commit something that's a crime in USA then USA can try to put them on trial, but that works exactly the same no matter if they're Saudi citizens working Saudi government or serving in Saudi military, or foreigners contracted out to Saudi government as "mercenaries"; in both cases it's up to the local government whether they want to hand them over (effectively betraying their own "employees") or refuse.
Presumably the GP means that the NSO employees should be held personally responsible for actions taken by the NSO group, without diplomatic cover or ability to claim that they're law enforcement/military/intelligence. They are civilians, not uniformed government employees.
My point is that this status does not influence those aspects, it does not matter if they are or claim to be law enforcement/military/intelligence, the practical consequences are the same.
They can be held personally responsible in USA criminal courts no matter if they're civilians or uniformed SA government employees - if a foreign government agent does something on your soil, you can (and should) apply standard criminal law no matter if they're a uniformed employee in their service or not - for example, the Russian officers the UK charged with the Salisbury Novichok poisonings. However, USA courts can't enforce any judgements without the cooperation of the host countries.
And Saudi Arabia can arbitrarily ignore the victims' complaints, foreign charges, convictions and their enforcement if they want, no matter if the violators are civilians or uniformed government employees; that's only a difference if SA chooses to make that distinction. Uniforms would imply some differences in their rights under the Geneva Conventions if they were captured as prisoners of war in an active armed conflict, but this is not an active armed conflict and they have not been captured as PoWs.
With respect to extradition or local prosecution, Saudi Arabia can arbitrarily extend its protection to whomever it chooses, no matter what their status or citizenship is - if it has not made e.g. a bilateral treaty with the USA in which it agrees to extradite such people, it does not have to do so.
iPhones are a standardized attack surface. Apple prefers that vulnerabilities not be found at all rather than be discovered and patched, which leads to NSO holding on to its discovered vulnerabilities for longer.
An Android device with no modem (baseband) is definitely more secure. Throw in a hardware switch for the camera, mics, and wifi, which iPhones will never have.
Android is an even larger attack surface when political people don't take any of the precautions you mention, which they most likely won't. The President of the United States does indeed have specialized phones that are modified by WH staff to be either only for phone calls or only for twitter/news (which means 2 phones)[0], but they're still iPhones. As long as there's not some 0-authorization 0-click vulnerability in the things used (as in, reportedly, neither iPhone handled texts), it'll be secure enough, but that's still limiting its usage and isn't what any politician is going to do. Just stick to a desk phone for secure comms.
Texts shouldn't be difficult to block; just disable them on the carrier end. MDM might be able to restrict them further just in case.
With iMessage you can just not sign in to an Apple ID. Or use MDM.
They need an actually functional phone. You can't be a journalist and not have a fully functional phone that accesses the internet whenever needed. I'm sure they use burners for sensitive stuff, but what are they supposed to use for their regular work, calls with the school, car navigation, ...?
I am assuming you use a killswitch VPN to your trusted network - the NYT's, for this journalist.
My proposed setup is 3 devices: hotspot, android device without baseband, dumbphone. Hotspot would be the weak link here, security wise, but is easier and cheaper to replace. Nothing on dumbphone would be encrypted.
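The "killswitch" idea above just means the device refuses to send any traffic outside the tunnel. A minimal sketch using iptables on a Linux hotspot - the `tun0` interface name, the server address, and the OpenVPN-style UDP port are all assumptions for illustration, not details from the comments above:

```shell
#!/bin/sh
# Hypothetical VPN killswitch: only loopback, the tunnel interface,
# and the encrypted link to the VPN server itself may send traffic.
VPN_SERVER="203.0.113.10"   # placeholder address for the trusted VPN endpoint
VPN_PORT=1194               # OpenVPN's default UDP port (adjust to your setup)

iptables -F OUTPUT
iptables -A OUTPUT -o lo -j ACCEPT            # allow local traffic
iptables -A OUTPUT -o tun0 -j ACCEPT          # allow traffic inside the tunnel
iptables -A OUTPUT -p udp -d "$VPN_SERVER" --dport "$VPN_PORT" -j ACCEPT
iptables -A OUTPUT -j DROP                    # drop everything else
```

With the final DROP rule in place, if the tunnel goes down the device simply goes offline rather than silently falling back to the bare carrier network.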
If I were a journalist, I would consider this as an alternative to being hacked. Remember, he knew there had been at least attempts to hack his devices years in advance.
They are just buying the "Apple is the most secure alternative" propaganda.
iMessage can run over data, right? It sounds like the bugs exploited here were iMessage and WhatsApp holes, not weird mystery-baseband flaws (which are harder to patch but only ever affect a fraction of the phones you want to sell the ability to compromise). So similar Android exploits would just go right through the hotspot and compromise the Android device that does everything.
The only way out of this mess is actually correct code on actually correct hardware. Maybe you have to run Linux and Android at the top to run existing apps, but somewhere below there you need a supervisor that makes security guarantees that are actually true. You can't just port a monolithic C kernel onto hardware that's struggling to be faster than the competition and call it good.
Journalists need to buy communications equipment that doesn't come with that "NO WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE" line in the EULA. Sadly, it is not for sale.
Texts from a burner? Disposable email address? Dead drop notes? Lurking in the shadows of a parking structure? Plenty of options that don't require a smartphone. Not like journalism was impossible before the iPhone.
I am but one atom in a molecule in a drop in an ocean, but I have pledged never to be involved in hiring any person who has had any willing association with any organization responsible for efforts similar to Pegasus, with no exceptions. I will also immediately resign from any job that violates the above. Trends like this are not to be taken lightly - for the first time in human history, an all-encompassing tyrannical dystopia is a realistic possibility, and you deceive yourself if you think there aren't very, very powerful people who get an almost erotic thrill at this possibility. Contributing to the advancement and deployment of this technological capability is the very definition of a violation of whatever meager ethics our profession possesses, and should be taken as essentially a credible threat against literally every other living person.
I can't get behind bifurcation of the job market based on what political side you belong to. This seems destructive at best, dangerous at worst. It's like the classic Palantir vs Google argument.
I won't hire anyone if they show any sort of activism at work.
Unless I am missing some irony in it, the last sentence contradicts the preceding claims. Denying activism is activism itself; you are enforcing a reactionary culture.
Is “not wanting to work with surveillance” an example of a political opinion from the left, or from the right? It kinda just seems like a personal preference.
Yea, I mean, how often do you come across a resume that has NSO on it?
I am attacking the underlying tone of political activism in hiring committees. This seems deeply oppressive to me and signals 'internal rot' in corporations.
Do we somehow stop being moral beings when we are at work? I thought that whole discussion was settled 65 years ago. But I notice that you are using the term "political activism", so maybe you find this particular cause not worthy of being a real moral issue; it's merely "political activism"? And if so, what causes would actually be important enough that you would consider them relevant, even at work?
This is not a political argument. If you willingly participate in the construction and deployment of technological systems that are designed to be used to monitor, suppress and ultimately threaten the physical security of people who are non-violently opposed to the current group of people in power, you have essentially declared yourself an enemy of mankind, and I choose to personally act accordingly.
This should not be misconstrued as a partisan issue. Those who desire these outcomes will make every attempt to conflate it with one political movement or another. They'll appeal to auth sensibilities and moral panics.
It must be made clear that these represent efforts by the powerful to squash dissent and free society. It is an attack on the rest of humankind.
Most of what Facebook and Google have developed could be used to do these things. In fact it has been used to do those things, perhaps with the exception of the physical threat. But monitor and suppress? Yes.
Is everyone who worked on this stuff also an enemy of mankind?
While I find what Google and Facebook do personally distasteful, it would be foolish and short-sighted to assume everyone who had worked there was complicit. It's one thing to be an employee at a giant company that does a multitude of things, one of which, when misused, can present a threat, and an entirely different thing to sign up to work at a place whose product's _intended use_ is to support tyranny.
This is not a correct analogy. There are multiple legitimate and moral usages of ammunition.
A better comparison would be to ask if I would hire someone who worked for the East German Stasi, or someone who had helped to build the systems used to identify, target and kidnap dissidents in mainland China.
Edit: Additionally, no, I would not hire an ammunition manufacturer who produced ammunition knowing that the entirety of his output was exclusively purchased by a government for the exclusive purpose of assassinating those who were non-violently opposed to said government.
Good point, and the NSO example from the OP is definitely extreme. But it rings alarm bells for me. It is the year 2025 and there are two distinct and separate job markets. One cannot cross the line because your resume reflects your political choice.
This kind of dystopia sucks and I am gonna push back as much as I can. OP's tone was definitely about activism and I can't stand behind it at all.
Also, ammunition producers have no idea where the ammunition will be used. It could be for saving lives in a hostage situation or for an assassination. Don't blame Intel for making processors that are then mounted on missiles that kill people. This is exactly what's wrong with illiberal ideology.
Keep in mind that one mechanism for control is to slowly suck someone into a scheme over time. I'm sure this has a name, or many names, though I'm not aware of it.
A friend had a professor at uni who'd been recruited to join a deep-sea scientific mission which was an absolutely incredible opportunity: a phenomenally well-appointed ship, newly constructed, a large scientific crew, and funding was completely assured.
He went on the project, returned home, and read much later in the paper that he'd been part of the cover mission for the recovery of the sunken Soviet submarine K-129, aboard the Glomar Explorer. According to the professor, he'd had absolutely no inkling of that mission.
During WWII, numerous individuals turned on their own countrymen, comrades, and fellow Jews, as Quislings, collaborators, and capos, through a mix of threats and rewards.
And of course, various paths toward corruption are seen all the time in gangs, business, government, institutions, and other contexts.
That said, I'd have a very hard time working with anyone who is still working for a Facebook, Google, Amazon, Oracle, Palantir, AT&T, Verizon, or numerous other firms in the surveillance capitalism space today.
Forget Y Combinator -- come build the next great surveillance start-up at the IDF's Unit 8200, the world's greatest hacker school and incubator for mass surveillance start-ups. With generous subsidies from US taxpayers, Unit 8200 lets you level up your surveillance game by practicing on 4.5 million Palestinian beta-testers. (Go nuts, it's not like they can sue you!) Plus, say goodbye to those moral qualms -- at 8200, you'll acquire the unshakeable conviction that you're a Good Guy fighting the Bad Guys. When you graduate, the IDF will keep the data you collected, but the skills you acquire and the friends you make are yours to keep forever.
I will presume that this comment was made with forthrightness and lack of information rather than attempting to obfuscate a fairly obvious funding funnel from the US -> IDF -> 8200.
The US gov't provides billions, yearly, in monetary aid and guaranteed loans to Israel specifically for military funding. Sure, most of that has earmarks, but that's the way the game is played.
This doesn't account for anything in the black budget, which as you can imagine, probably includes quite a bit for this realm. With Israel currently considered an indispensable intelligence partner (and thusly an outsourced R&D partner), I find it hard to suspend disbelief enough to accept that U.S. taxpayers aren't funding Unit 8200 just because there isn't a line item in public budgets.
Eh, the comment made me google to learn more about Unit 8200.
That said, surely you can agree the removal of comments we don't like is undemocratic. Further, no one user is the boss of this site's moderators. Here's a relevant article [1]: moderators are human too :)
It would seem to be the rational thing for NSO to hack a journalist who is writing about them, so that they can better prepare for what's coming. As for all the countries that buy and use NSO to target and kill journalists, they are all close allies of the US and Israel.
And the US and England were also spying on the journalist Julian Assange, and have kept him in prison and tortured him for over a decade. Ben Hubbard luckily just got hacked.
> the US and England were also spying on the journalist Julian Assange, and have kept him in prison and tortured him for over a decade. Ben Hubbard luckily just got hacked.
As you probably know, these assertions are a big stretch for many people. Not everyone considers Assange a journalist. He was living in an embassy for most of those years, so while he was confined, it's not a prison and not torture. Hubbard isn't lucky; neither the US nor the UK has ever imprisoned and tortured a journalist from a major publication (unless I'm overlooking someone). There may be legitimate debate about Assange, but it's not credible to pretend that these are facts.
> A Belfast-born writer who has been a consistent critic of IRA violence has revealed how the British Army subjected him to electric shock torture outside his family home in the early years of the Northern Ireland conflict.
> Journalist turned novelist Malachi O’Doherty describes in a new memoir how soldiers first threatened to shoot him, then dragged him through a hedge, kicked him and eventually resorted to inflicting electric shocks to try to extract information about the local IRA.
Thanks, great point. I think there's a distinction between peaceful open society - they wouldn't do it to a Guardian or Times journalist today in London - and a military occupation (Northern Ireland), but I'm not sure it's such a bright line: The government created that occupation (whatever its merits, I'm not debating them here) and could create another.
When you are doing the information from the inside thing, you do need to get your players in line.
England?
I'm English ... and Welsh, Cornish, Scottish and tangentially Irish, not to mention German (check my username).
The country is called Britain; the "Great" is only to distinguish it from the other Britain - Brittany (part of France). You might as well call everyone from the USA Texans.
He was not tortured in the embassy - he was a guest who gradually outstayed his welcome. He was always treated well. As you can see Harrods is just to the right. This is not the roughest place to be a prisoner in Christendom.
Whilst he was in there, there were always several police officers stationed nearby. They stood in doorways and kept watch. Probably a boring job, but nice and simple. The whole thing basically cost the UK taxpayer a fair old wodge, and obviously Ecuador too.
I know that area and what goes on because I run internets for some flats nearby.
> I'm English ... and Welsh, Cornish, Scottish and tangentially Irish, not to mention German (check my username).
Sorry, but this is absolutely nonsensical to me - how can you be all these nationalities? Were you born on the most insane round-trip flight ever or what?
Edit: And sorry, as a Scot (one actually born there): 'the country' is not called 'Great Britain'. As a nationality we group-identify as both $member-country and also British/members of the United Kingdom. The UK itself is made up of four separate countries: Scotland, England, Wales and Northern Ireland. Great Britain is simply our name for 'the big island (and all the little ones), excluding Ireland'; the UK is the big island + NI. Holy cow, where did you learn such nonsense? :/
We are called Great Britain because that is what James VI (I in England) called us when Scotland and England finally merged into Great Britain. He was our first joint King.
The other Britain is Brittany - https://en.wikipedia.org/wiki/Brittany. Have a look at the county names in Brittany and see if they look suspiciously like Devon and Cornwall.
My family/surname is Gerdes. In Scotland, that is rendered as Girders. Only you can pronounce it properly 8)
Edit: sigh, okay, you are a little bit right, but - besides, of course, the settling after the whole Gallic period, and the Britons, the Normans, and the Saxons - please forget all that; we are talking about the term GB right now, and this only refers to the island.
My (properly) researched family tree is 15 generations deep for me personally, so far.
My uncle has done quite a lot of research. Quite a lot. At the extreme 15 gens down, you get this in your Ahnentafel:
"26921. Alice15 John (14829). Her married name was Trelowarth (14829). She was born circa 1550. She married Robert Trelowarth (14828) on 3 Oct 1574 at Wendron, Cornwall, UK. She died circa 1603 at Wendron, Cornwall, UK."
Oh, you're American? I'm pretty sure 15 generations ago I was probably Danish, but that doesn't make me one of them... I kind of understand the need to identify as someone more interesting, though, I suppose. Perhaps you should just own who you are instead of pretending, my friend; maybe look into some meditation or such.
Edit: Apologies, I see you say you are British, however I've never met a British person who would ever identify as coming from more than one of our member countries. An Englishman calling himself Welsh? A Scot calling himself English? I mean.. I find it unlikely somehow.. But, hey ho, I'm often wrong and presumably this was one of those occasions. No offence intended.
I don't think the comment you're replying to is talking about justification or justice, just rationality.
If I say "Your bike lock doesn't have to be unbreakable, it just has to be strong enough that a rational thief will look for another target," that doesn't mean I think the thief is justified in stealing other people's bikes instead of yours.
There is no point in saying it is rational for NSO (or the Saudis) to act this way. It was quite rational for MBS to order his minions to lure Khashoggi to the embassy and cut him into pieces. It adds nothing to the discussion of how this company and its clients continue to ignore laws and decency.
Fun fact: the CIA had/has a rule not to impersonate priests, journalists, or personnel of NGOs in undercover missions (because those people have a hard enough life in some countries already).
But then there is that excellent movie Spy Game....
The CIA reportedly used the “Save the Children” charity as a front for a fake hepatitis B vaccination program in Pakistan to help confirm Bin Laden’s location.
A ban on the polio vaccination program in some Taliban territory and attacks on vaccine workers followed.
In fact, the CIA uses all of the above you mentioned - priests, charities, NGOs, humanitarian outfits, journalists and the media - as fronts and covers for its spying. It has been documented multiple times over decades. The CIA even has its own official NGO, called the National Endowment for Democracy.
My favorite is the US AID CIA spy who goes into Afghanistan in the 1980s and is profiled in Charlie Wilson's War. Or the fake vaccination program they conducted with "humanitarian" NGOs and charities:
My favorite CIA journalists are the ones who worked for CBS and other publications and were involved in promoting Modern Art around the world with NGOs like MoMA, the Rockefeller and Ford Foundations:
Sadly, they didn't have any such rule about impersonating healthcare workers or weapons inspectors, making vaccine outreach an extremely dangerous occupation for charities and making treaties that rely on inspections extremely difficult.
I mean, it's not amnesty international, but a UN Special Rapporteur on Torture claimed:
"Mr. Assange has been deliberately exposed, for a period of several years, to progressively severe forms of cruel, inhuman or degrading treatment or punishment, the cumulative effects of which can only be described as psychological torture."
His confinement to the Embassy of Ecuador since 2010 amply qualifies as both imprisonment and torture [0]. If you'd like to argue that it does not literally qualify, then I suggest you don't, in the interest of not wasting everybody's time.
I'll take that argument on - he was charged by the Swedish Prosecution Authority, extradition was agreed to by the UK authorities, and he sought sanctuary in the Ecuadorian embassy, which granted it.
So which of these imprisoned him? Presumably not Ecuador. The UK, for agreeing to extradite him? Sweden? Similarly, who was the perpetrator of the torture? Ecuador, for not offering sufficiently spacious accommodation in the embassy?
Are we not ignoring the fact that any internet argument regarding Julian Assange where we can seemingly only deal in maximalist, black-and-white terms is also tantamount to torture?
>His confinement to the Embassy of Equador since 2010
Assange, as a bail jumper and fugitive, requested and received asylum from Ecuador.
He could have, at any time, left the Ecuadorian embassy. In fact, had he done so, he'd likely have been investigated, prosecuted and potentially convicted of the charges against him.
Had that come to pass, it's entirely likely that Assange would have completed any sentence of incarceration years ago and have been back to banging Swedish girls for quite a while.
As we'll see, Assange might be convicted of violating the Computer Fraud and Abuse Act[0] which, under these specific circumstances (n.b.: IANAL) would carry a sentence of not more than five years, with the opportunity to reduce that sentence[1] by more than six months, assuming he is not given parole.
As to the completely bogus "charges" of violating the Espionage Act of 1917[2], no journalist has ever been convicted under that law.
As such, had Assange not decided for himself to jump bail and become a fugitive, he would most likely have been a free man for at least several years right now.
Except for the part where the CIA had plans to extrajudically assassinate him.
>“That the CIA also conspired to seek the rendition and extrajudicial assassination of Julian Assange is a state-sponsored crime against the press,” she added.
>In response, the CIA and the White House began preparing for a number of scenarios to foil Assange’s Russian departure plans, according to three former officials. Those included potential gun battles with Kremlin operatives on the streets of London, crashing a car into a Russian diplomatic vehicle transporting Assange and then grabbing him, and shooting out the tires of a Russian plane carrying Assange before it could take off for Moscow. (U.S. officials asked their British counterparts to do the shooting if gunfire was required, and the British agreed, according to a former senior administration official.)
Saying he could've just come out at any time is absurd.
Nils Melzer dismissed the fact that he was free to leave by making the analogy that someone in shark tank is "free to leave" their boat - but what is the analogy to being eaten by sharks here? Just the normal experience of being in prison in the UK? Is it the position of the UN that every person in prison in the UK is being "tortured"?
If you count being forced into a box by government entities as imprisonment, he's been imprisoned since 2012. If you're a stickler for being literal, he has, if nothing else, been captive since 2012.
Not to mention the UN's guy whose job is assessing whether a person is being tortured has repeatedly said that yeah, what's being done to him counts as torture.
«Painting a picture of progressively severe suffering inflicted on Mr. Assange from his prolonged solitary confinement, the Special Rapporteur upheld that it not only amounts to arbitrary detention, but also to torture and other cruel, inhuman or degrading treatment or punishment.»
Amnesty International is a mouthpiece for the UK/US governments on many subjects, including Syria and Chevron.
They even famously withdrew their "support" for Steven Donziger, who was prosecuted by Chevron after exposing its environmental damage in Latin America.
It is a tainted and biased source. Use it at your own peril.
https://darknetdiaries.com/episode/100/