A Saudi Arabia Telecom's Surveillance Pitch (thoughtcrime.org)
459 points by bbatsell 1676 days ago | 111 comments

This stuff happens more than anyone in infosec wants to admit; it's (ironically) what got me into professional software security to begin with, after being upset by what a commercial network monitoring tool would have allowed us to do to our customers at an ISP I helped run.

It's especially funny to see a government-sponsored telecom reaching out to Moxie Marlinspike. Also: this isn't like that time a random Microsoft recruiter accidentally hit ESR and he wrote the "Your Worst Nightmare" post about it. Like Moxie says, money buys technology, and they will eventually find someone to rig up a workable solution for what they're trying to do.

Moxie: one thing that would be hugely helpful is a quick list of the things you did that make you confident in Twitter's TLS code (which: thanks for doing).

In this case I was mostly referring to the inclusion of certificate pinning (ex: https://github.com/moxie0/AndroidPinning) in the mobile apps, which would theoretically prevent them from using a UAE or Saudi controlled CA to do the interception. In addition to iOS and Android, we also refused to compromise with low-end platforms like MediaTek, and made sure those clients were also all-TLS and that they employed certificate pinning. We also did common sense stuff like including assertions in all the platforms that would prevent accidental HTTP leakage.
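One way to implement the "no accidental HTTP leakage" assertions described above is to refuse any URL whose scheme isn't https before it ever reaches the network stack. A minimal Python sketch of the idea (`assert_https` is a hypothetical helper, not Twitter's actual code):

```python
from urllib.parse import urlparse

# Hypothetical guard against accidental plaintext requests: reject any
# non-HTTPS URL before it reaches the network stack.
def assert_https(url: str) -> str:
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing non-HTTPS URL: {url}")
    return url
```

Wiring a check like this into the request path turns a silent downgrade into a loud failure during development.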

More generally, a lot of common sense effort was put into the general TLS posture of the website. From certificate pinning baked into the browsers, to HSTS headers, to making sure the links in search results are https.
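The HSTS header mentioned above is just a response header (Strict-Transport-Security) that tells the browser to refuse plain HTTP for the host from then on. A sketch of building one (`hsts_header` is a hypothetical helper):

```python
# Build a Strict-Transport-Security header value: once a browser has seen
# this over HTTPS, it refuses plain-HTTP connections to the host for
# max_age seconds.
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True) -> str:
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value
```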

There was a bunch of generally nice TLS stuff in the pipe as well, but I'm not sure if it shipped yet.

If you really want the world to be a more secure place, can I please ask that you relicense the AndroidPinning code as BSD or something less viral than GPLv3?

I don't see Instagram, Facebook, etc. using that code to secure their apps; they won't license their Android clients as GPLv3 just to use the Android pinning library. While it is easy enough to re-create your code (though I have not looked at it), given that we're talking encryption libs, it's always nicest to have secure, vetted libs that just work.

(As a matter of fact, I'm sure you had to relicense it for Twitter to use it in their app.)

From the README:

>Please contact me if this license doesn't work for you.

I see no reason why Moxie should give Facebook and Instagram this valuable feature for free. When did open-source hackers become the unpaid laborers of Silicon Valley?

If they want it, they can either release the source code for their applications and liberate their users, or they can pay (hopefully) through the nose for it. Maybe that'll buy a few more months of TextSecure development, or whatever other cool things Moxie is doing now.

Facebook or Instagram will just reimplement it themselves if they care. Smaller developers will just remain insecure. GPLv3 harms adoption of something like this.

If you think that, go implement a MIT-licensed variant.

Not that simple. You have often stated that normal programmers shouldn't be near security, and now you are stating that they should go implement something that is specifically meant to enhance the security of the web.

The gp isn't asking for a change of license because he hates the GPL; he is (probably correctly) predicting what will happen if that license isn't changed: specifically, the thing that Moxie is trying to prevent won't be.

I don't say normal developers shouldn't be near "security"; I say they shouldn't be implementing cryptographic primitives.

Note: AndroidPinning is not a cryptographic primitive.

No, it isn't.

Facebook or Instagram could PAY for a license less restrictive than GPLv3.

That something is GPL does not mean that it could not also be licensed as proprietary, for those who pay, if they don't want the limitations of the GPL.

We live in a world where, if the cost is too high for something like this, it will be written off as unnecessary. Unless there is someone really pushing this from within, Facebook/Instagram/etc. probably won't implement something like this, or will just create their own (possibly poor) substitute.

I get the idea that people should be paid for their work, and it's his choice how he licenses it. On the other hand, if the point is to make sure this spreads as far as possible and gets used everywhere, then maybe a very permissive license is called for.

If he hadn't taken the time to publish this code, you wouldn't have even known to try to zing him for using the "wrong" license. Perhaps the most rational solution for people like Moxie would simply be to never publish their code, and simply continue to write forcefully and effectively about technical controls and privacy.

Then they wouldn't have to jump through silly hoops to prove whether they "really want the world to be a more secure place".

Or, how about this: if you really want the world to be a more secure place, why don't you take the time to learn how to implement certificate pinning for Android apps and publish your own MIT-licensed implementation? I'm sure Moxie would join the rest of us in cheering you on.

You don't have to be such an ass. I asked nicely enough.

I fully acknowledge Moxie is better at security than I will ever dream of being. I just hoped he might see the value in releasing it under a more amicable license. I don't have the numbers, but more liberal licenses are by a wide margin the choice for open-source crypto.

I'm not speaking from the armchair; I've released open-source code under BSD/MIT myself. I don't have Moxie's skill for security, but is it so wrong to point out the obstacle the license represents? He did release it to help secure the web, did he not? Why don't you let him reply?

Basically all of the software that I write for projects like this is GPL by default, but I generally include a note (as in this case) that developers should contact me if the license doesn't work for them.

I find this to be a good balance: those who wish to take my work and openly contribute their own work are free to do so, and those that don't need to contact the copyright holder. I don't think it's a lot to ask in this case, and I definitely don't think that licensing issues are what's holding back internet security here or otherwise.

I hardly think that calling Moxie's choice of license "viral" and questioning whether he "really wants the world to be a more secure place" is asking "nicely enough". Religious wars aside, I also would like to hear from one of the cryptography gods about licensing cryptography software, since ComputerGuru does have a good point when talking about software such as OpenSSH.

There's more value in forcing vendors to work with Free Software licenses than in compromising the ideals of open source to allow vendors to benefit without contributing back.

You should be asking yourself how you can change your project so that GPL3 licensed code will be acceptable, rather than asking others to relicense their code.

I humbly contend that forcing people to do anything in the name of preserving the purity of an ideology is a Bad Idea.

Not to mention, you can't really force them to do anything: they'll just avoid the GPL code, create it themselves, or find something similar under another license.

Authors are giving their work away, subject to restrictions of their choosing. There's no coercion involved.

I'm responding to the idea that ideological purity is the goal, not the author's right.

Isn't it amazing how the people who come out of the woodwork to point out the force inherent in the GPL never say, "oh, by the way, thanks for publishing a reference spec I'm free to use to develop my own code"?

As Thomas said, people would be less bitchy, and less holier-than-thou (cough), had Moxie not written any code, or written it and charged an arm and a leg for it.

It's sort of what patio11 talks about. The cheaper the service, the worse the people treat you.

I didn't say anything about Moxie.

> Why don't you let him reply.

The internet doesn't work that way.

> You don't have to be such an ass. I asked nicely enough.

No, not really. Would you have asked the creator of a closed source crypto library to give it away?

I used to agree with you that security software should be BSD-licensed to encourage use, but now I see it just encourages more low-end closed-source software.

If that software were open, users could know what they were using and could, with work, really be safe. But by trusting a closed-source app, especially one that can't afford anything for security, they'll never be secure (see this article for proof) and thus are worse off than if they were knowingly only partially secure.

It sounds rough, but better that the mob steal some money because you used an insecure app, causing you to learn and audit your security requirements, than for you to feel secure until someone shows up and shoots you.

Fully agree. Securing an application is just part of the overhead of creating it. To expect people to hand these bits and pieces out seems a bit overboard, if not somewhat entitled. This stuff costs time, money, and effort to make. The author released it under GPL3. If you can't afford to shell out for it, you can use the code to reroll your own. There's plenty of documentation on the topic as well.

Agreed. The onus shouldn't just be on Moxie. We could easily flip the question around and put the onus on the companies mentioned. Why don't Facebook and Instagram relicense their code as GPL to be compatible? Do they not want the web to be safe? Will they put "not having a GPL app" before "our users are safe"? Etc., etc.

> I don't see Instagram, Facebook, etc. using that code to secure their apps, they won't license their Android clients as GPLv3

They might license the code under a non-exclusive license with different terms. I.e. the copyright holder is free to license the same source code under various licenses.

So, e.g. I could license some code to the community under GPL, but I could also license it closed-source to a corp for a fee.

> money buys technology

I don't think it matters. He quickly noticed that the problem is cultural:

>> I’d much rather think about the question of exploit sales in terms of who we welcome to our conferences, who we choose to associate with, and who we choose to exclude, than in terms of legal regulations. I think the contextual shift we’ve seen over the past few years requires that we think critically about what’s still cool and what’s not.

But the problem with/in Saudi Arabia is also cultural, or social, not technological. It doesn't really matter that they can buy exploits or intercept communications. What matters is that those in power can stay in power while doing all that.

Mao and Stalin built some of the most repressive regimes the world has seen with 1930s technology, and even then they were behind the times. Do you think those would have been rocked by secure Twitter? On the other hand, Greeks ran fairly decent democracies when the closest thing to mass communications was shouting in a place with good acoustics.

I'm not saying the west should just provide scum of the world with access to modern technology. Let's not kid ourselves though. Whether we do or not, it won't change much.

The problem here in Saudi is indifference. People take it as a given: "Of course it's being intercepted" or "they know everything, don't even try." It's a debilitating indifference, to the extent that people around me are mystified why I keep a VPN connection up 24/7 on my desktop and mobile phone; why do I even bother? Even many techies around me think I am naive to be taking all these precautions. Resistance is futile.

PS: Mobily is my carrier. Discomforting. Maybe resistance is futile after all. Sigh.

It is possible to believe both things at the same time: that dictatorships will inevitably acquire exploits, backdoors, and monitoring tools, and that it's unconscionable for companies to sell these things to dictatorships.

The story is perhaps clearer on exploit markets. The alternative to markets is publication, which burns the vulnerability by hastening its patch deployment. Dictatorships will inevitably acquire more exploits, but they are in a race against everyone else discovering vulnerabilities.

I think there's a defensible case that in Saudi Arabia there are other parties operating who are much worse for the rights of both Saudi citizens and humans elsewhere, and that selling the digital equivalent of arms to the Saudi government isn't inherently evil.

I'd sure rather deal with the current Saudi Government than with Al Qaeda. Yes, there are fairly bad elements within the government, and it is at best one of the more restrictive regimes in the world, but there are some alternatives that are worse.

> dictatorships will inevitably acquire exploits

That's not what I mean. Even if you somehow stop them from acquiring exploits, they will remain in power because it's not derived from subtle technological advantages.

Perhaps the way to phrase it is that the Soviet surveillance apparatus was an expression of power, just as the modern technological surveillance apparatus is an expression of modern power.

I think that this stuff matters, to the extent that I'd like to be in solidarity with those everywhere who are in a tension against authority. Not selling exploits is one small way that I can do that, and writing a blog post about it is one small contribution (I can hope) to creating a culture of doing that.

If the technological advantages do not help them stay in power, why do you think they would pursue them? And what 'advantage' would they be?

Why do people eat themselves into morbid obesity? Why did the Soviets reverse rivers? Why are American prosecutors trying to jail kids for sexting?

Is eating not obviously beneficial? Is there something wrong with large-scale engineering? Shouldn't we fight child pornography?

Drives, rules, and organisations outlive and outgrow their usefulness all the time. Why would surveillance be an exception?

It'll change plenty if we do. Oppressive regimes are taken down by conspiracies and secret communication. If they eliminate this ability to associate, with our assistance, there will never be any space for revolution, or even reform.

This logic reeks of the law of averages: "I might as well swim over Niagara Falls because I could die any day, even from crossing the street. If I die today, it was just my day to die."

Of course, I have less of a reply to "Even if I don't sell it to them, someone will." The middle class finds it very easy to rationalize behavior that will keep the consumption flowing.

Like Moxie says, money buys technology, and they will eventually find someone to rig up a workable solution for what they're trying to do.

Governments are in a unique position here. They can always just move up the stack. Can't break the crypto? That's fine. They can just require the mobile phone companies to sell phones with spyware already included.

That problem is, I think, a showstopper for "anti-circumvention" tools like whatever-the-next-generation-of-Tor will be. Dictatorships have little to lose by backdooring or rootkitting devices; they'll laugh off any outrage stirred up by the discovery of these methods.

But the economics flip around in Europe, Japan, the US, &c: governments there do have something to lose by surreptitiously backdooring huge numbers of devices, and the odds are good that any efforts to do so will be detected (the state of the art for reverse engineering now includes decapsulation and imaging of electronics packages).

> That problem is, I think, a showstopper for "anti-circumvention" tools like whatever-the-next-generation-of-Tor will be. Dictatorships have little to lose by backdooring or rootkitting devices; they'll laugh off any outrage stirred up by the discovery of these methods.

Well until something like the DIY Cellphone gets more traction to deal with backdooring/rootkitting: https://webcache.googleusercontent.com/search?q=cache:http:/... (MIT Media Lab)

The US government is a special case again. Since most of the companies mentioned here are headquartered in the US, the US government can resort to the no-tech solution of just asking for the data and presenting a subpoena (or so was my experience working for a large US telecom carrier).

The difference is that a subpoena doesn't decrypt an EDH TLS session.

But it does decrypt the data at rest.

It could be argued that the scandal in Germany proves even European governments don't have a lot to lose by backdooring devices beyond what the law permits them. Admittedly, the number of backdoored devices was probably low, but the government did seem to act unlawfully.


We all supposedly know how totalitarian Saudi Arabia is compared to the free United States, so giving Saudi Arabia eavesdropping and decryption tools is something we all obviously dislike. But we are all bathing in American propaganda, so many people, like those "patriotic hackers", support the Feds having the same power that Saudi Arabia is seeking. In fact, the US government is doing much more sophisticated eavesdropping of all our communications and storing it for later perusal. And they use the same "terrorism" justification as the dictators. And what is a terrorist? Whoever they say is a terrorist. Which can include anyone advancing any political ideology that is frowned upon by the bipartisan "Washington consensus" of what constitutes acceptable debate. From libertarians to environmentalists to any kind of anti-authoritarian who doesn't serve the interests of the establishment.

  ...so many people, like those "patriotic hackers", support the Feds having the same power that Saudi Arabia is seeking
They do? I'm not sure I can name a single hacker that wants the US, or any other govt, to have the same power.

I don't think I'd classify the EFF as a hacking group.

From libertarians to environmentalists to any kind of anti-authoritarian that doesn't serve the interests of the establishment.

Who do you mean? Who is being called a terrorist?

Kids who post youtube videos of themselves rapping: http://news.yahoo.com/teenagers-social-media-terrorism-threa...

The Methuen, Mass., high school student was arrested last week after posting online videos that show him rapping an original song that police say contained “disturbing verbiage” and reportedly mentioned the White House and the Boston Marathon bombing. He is charged with communicating terrorist threats, a state felony, and faces a potential 20 years in prison. Bail is set at $1 million.

And lots and lots of other ridiculous examples, if you search.

Choice excerpt:

  “If you’re not a terrorist, if you’re not a threat, prove it," he says.

  “This is the price you pay to live in free society right now. It’s just the way it is,” Mullins adds.
[Disclaimer: for some definitions of 'free']

Dear Y Hackers, Please know that there are dumb, non-technical, readers that sit on the sidelines of this site and stare in awe of your courage and abilities. The fact that this article is atop the leaderboard speaks volumes about the community's character. Moxie Marlinspike you are a hero. You are a wonderful writer, and your adventurous spirit is incredibly inspirational. Thank you for sharing your stories. To me this site is a beacon of hope, a daily reminder that remarkably talented people are out there fighting for good.

I'm very curious what aspect of Twitter's TLS code makes it hard to intercept whereas other websites can be easily intercepted. I'm also very curious about how they intercepted WhatsApp. Does it do something stupid like eval'ing code received over regular HTTP?

Quoting the paragraph, in case my paraphrasing is inaccurate: "What’s depressing is that I could have easily helped them intercept basically all of the traffic they were interested in (except for Twitter – I helped write that TLS code, and I think we did it well). They later told me they’d already gotten a WhatsApp interception prototype working, and were surprised by how easy it was. The bar for most of these apps is pretty low."

They pin the TLS certificate: to successfully create a connection to Twitter, their mobile apps will check not only the validity of the certificate the server presents, but also a hardcoded digest of the correct certificate, so that a "valid" certificate for Twitter from a CA Twitter has no relationship with will be rejected.
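That check can be sketched in a few lines. Assuming SHA-256 digests of the DER-encoded certificate (`cert_matches_pin` is a hypothetical helper; the real AndroidPinning API differs):

```python
import hashlib

# Alongside normal chain validation, require that the SHA-256 digest of
# the server's DER-encoded certificate appears in a hardcoded pin set.
def cert_matches_pin(cert_der: bytes, pinned_digests: set) -> bool:
    return hashlib.sha256(cert_der).hexdigest() in pinned_digests
```

A "valid" certificate minted by a government-controlled CA would pass ordinary validation but fail this check, because its digest isn't in the pin set.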

Wouldn't that break when they need to update the certificate, due to expiration?

What's "pinned" isn't the site's certificate, but rather the CA's certificate. Or more accurately, the CA's public key.

This is the problem with public key pinning. The site is still vulnerable to a compromise from its own CA, and many sites actually use a number of different CAs for unfortunate reasons. If you check out the list of pins for twitter.com, it's quite large. Still, at least it's not vulnerable to compromise from every CA that exists.
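The renewal/CA trade-off can be made concrete: if what's pinned is the CA's public key, any chain containing that key passes, so leaf renewals from the same CA keep working, while a "valid" chain from an unrelated CA fails. A sketch, with keys modeled as raw bytes (`chain_matches_pins` is a hypothetical helper):

```python
import hashlib

# Accept the connection if ANY public key in the presented chain (leaf,
# intermediate, or root) hashes to a pinned value. Pinning the CA's key
# survives leaf renewals, at the cost of still fully trusting that CA.
def chain_matches_pins(chain_keys: list, pins: set) -> bool:
    return any(hashlib.sha256(k).hexdigest() in pins for k in chain_keys)
```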

Trevor Perrin and I have been working on something called TACK (http://tack.io) to make all of this easier and more secure. Rather than embedding pin fingerprints into the binaries of web browsers and mobile apps, you can advertise them and update them via a TLS extension. What's pinned is also your site's certificate, not the CA's certificate, making the site additionally immune to compromise from its CA (or list of CAs, as it were).

Just a quick note that there are apps that pin site certs and not just CA certs; if you're implementing your own iOS app, for instance, you can do it either way depending on your margin of error w/r/t certificate revocation and expiration and software update.

In that case, I would generally recommend that you create your own trust root and validate against it, rather than using pinning?

That makes sense if yours is the only client that connects to your endpoint, but less sense if your client shares an endpoint with, say, a web app.

I try and I try to get clients to consider just rolling their own root certificate and eschewing the TLS PKI, but people have an irrational fear of the process of making certificates.

Yes and that's kind of the point.

It's like firmware in VoIP, although VoIP implementations leave something to be desired. In essence they're doing something similar to a checksum on the certificate, such that any change to the certificate causes the transaction to fail.

You would have to hard-code the new dates in.

WhatsApp doesn't use TLS, and their protocol has a fairly long history of criticism.


I seem to remember their authentication was a combination of your phone number and the IMEI of your handset, which is woeful security through obscurity at best.

and it seems somebody got so angry about their security that he made this site: http://www.whatsappsucks.com

"... insecurity predominately tended to be leveraged by a class of people that I generally liked against a class of people that I generally disliked..."

This, this, a thousand times this. We cannot look the other way because the "good guys" are benefiting. Not any more.

The person who reads an article about the USA implementing some automated service to monitor foreign communication must realize that other governments are now doing the exact same thing. We no longer have the luxury of pretending that as long as the outcome is good the means do not matter.

There's no reason to believe Saudi Arabia is alone in surveillance like this.

Meet the United States:


Furthermore, the US has a big advantage. Many of the companies that handle digital communication services (Facebook, Twitter, Google, etc.) are US-based and have to, by law, cooperate with US law enforcement agencies. So if the FBI (or another three-letter agency) wants access to your Facebook or Gmail, they can get it via those companies.

In countries like Saudi Arabia they don't have the same level of power/control so they have to look at intercepting & blocking the traffic.

There needs to be an RFC for Postcard Key Encryption - send each other public keys on hand-written postcards to single-use P.O. boxes to avoid mitm of the initial key exchange. I don't understand why anyone trusts CAs any more.

Nobody trusts CAs. There is a lot of work being done on layering more trustworthy authentication features on top of the TLS CA system, one good one being TACK:


The problem with simply abandoning CAs is that it creates a situation in which it's even easier for government-sponsored agencies to mass-intercept traffic, at least for a window of time (probably several years), and all that window buys us is the ability for sites that don't really care about security to save $10-$100 on cert costs.

> Nobody trusts CAs.

No. The problem is that pretty much everyone trusts them, at this point in time. That was Peter's point.

Sure, there are researchers and engineers who rightfully don't trust CAs. But we don't really matter. The users, the consumers, the parents, the grandparents, the activists do.

I am not sure what the point of this comment is. What conclusion do you come to as a result of this "everyone trusts CA" belief that is different from mine?

The point I'm trying to make is that the CA trust problem has been known for quite a long time now (although it's only recently arrived in the hivemind).

Sure, there's a lot of research being done to find the Next Great Thing (tm), but how about a short/mid-term emphasis on shoring up the glaring problems in the existing technologies first? Tighten the number of default CAs, shore up bad SSL and TLS code, tighten default settings in client software.

Things like Chrome popping up warnings about self-signed, expired, or invalid certs may have been a great start, but nobody's really tidying up much on the server end, so the end effect is that the users blindly click through the Chrome warnings.

TL;DR: The Next Big Thing (tm) is going to be great, I'm sure, but how about fixing/tightening existing configurations in the mean time?
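The "tighten default settings in client software" suggestion can be illustrated with the present-day Python ssl module (illustrative only, not from the thread): a default context already enforces certificate validation and hostname checking, and the protocol floor can be raised explicitly.

```python
import ssl

# A hardened client-side TLS context: validation and hostname checking
# are on by default; additionally refuse anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```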

You and I are saying exactly the same thing. TACK, for instance, doesn't replace the CA system; it creates a vehicle by which browsers can pin certificates on the fly, the way Chrome already pins certificates for certain web properties, which creates a key-continuity system without changing browser UI or the protocol as it is run between browsers and servers.

You and I might also agree: browsers make it too easy to click through the bad-cert warnings. It used to be a trendy thing to argue on HN that these warnings were entirely pointless and should be done away with, which, of course, would have done grievous harm to security above the harm already done by the click-click-click-you're-done UX browsers have already established here.

I disagree with the list he uses, I would have said that most developers and many sysadmins trust CAs. The point is that, no, the phrase "nobody trusts CAs" is wrong, many people trust CAs.

In fact I think that so many people trust CAs that if someone provides a more secure alternative it should look like an evolution of CAs so it doesn't piss off people who have been trusting CAs all this time.

The users, the consumers, the parents, the grandparents, the activists, ... don't even know what CAs are. To the degree that people trust computers at all they trust what their browser tells them is safer.

Generally almost everyone has experience with malware, bugs and broken software. No one who has used a computer for more than 10 minutes absolutely trusts computers in any capacity including security.

What about a crowd-sourced, decentralized list of known bad CAs or certificates known to be used by government security services? At the very worst, it will raise some warnings and let the user think twice before connecting to that service (which may or may not be annoying), but at best it would shield users from surveillance.

The government and corporations (and thus defense contractors) are experts at manipulating crowdsourcing for their own ends. I sure as hell wouldn't trust crowdsourcing.

Everyone is forced to trust them by default because of liberal cert inclusion policies of browsers and other ssl-enabled software.

Ask some random people on the street if they trust a mainland Chinese entity with ensuring the security of their communication with their banks, however, and I think you'll get a different answer.

> Nobody trusts CAs.

Hundreds of millions of consumers use devices that implicitly trust CAs today.

You're usually spot on, but in this case you're dead wrong. Most humans that use the internet use devices that trust CAs absolutely - which is exactly why they're being subverted for government interception.

This boils down to a semantic disagreement. You say people "trust" CAs because they don't know or care about them. I say that to "trust" a CA, you have to know what one is, and nobody who knows what a CA is trusts them anymore.

The distinction between these two vantage points isn't particularly relevant to my point; at least, I don't think it is.

The difference is explicit and implicit trust. Even if you explicitly distrust CAs, almost everything on your system implicitly trusts them.

Sure, that's a good way to put it, and I'm obviously aware of this, but it's not the point I'm making. :)

CAs solve a very specific threat model; governments and corporations are not included in it. The system is purely designed to protect against computer criminals with very limited resources. PKI is designed to be about as strong as the lock on your front door: experts and police could pick it or get a key from the manufacturer, but many thieves will just break a window and run a small risk of being detected in the process.

If we want real security we have to build it separately from the CA model, but as you point out, CAs do provide some security against government interception, just as a locked door does provide some security against police wandering into your house, and we should not abandon the model wholesale.

So, even though the government can snoop on us, we shouldn't stop using CAs, because then the government could snoop on us.

With TACK you depend on CAs to establish the initial connection, set up a "pin", and then no longer rely on the CA for future connections. That initial connection can be forged by a MITM, so it's not fully secure. We need a term for connections that are "probably" secure but for which there is no complete assurance of security, which is what any connection based around a CA really is.

This is why the postcard method is superior. Unless someone spends the time to intercept every anonymous postcard with a secret code on it, rewrite the message in perfectly imitated handwriting, and send it on to the unknown P.O. box with no more than a couple of days' delay, it's next to impossible to circumvent this initial key exchange. For those who need real privacy I would recommend this method; for those who just want to order Jolt Cola on an open wifi connection, CAs are good enough.
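Stripped of TACK's details (a server-held signing key advertised via a TLS extension, with activation periods), the "probably secure first connection" model above is just trust-on-first-use, which can be sketched as a toy pin store:

```python
import hashlib

# Toy trust-on-first-use store: the first connection records a
# fingerprint (and is only "probably" secure); every later connection
# must present the same one or it is rejected.
class TofuPinStore:
    def __init__(self):
        self._pins = {}

    def check(self, host: str, cert_der: bytes) -> bool:
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self._pins:
            self._pins[host] = fp  # first sight: record the pin
            return True
        return self._pins[host] == fp
```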

Yes, that is exactly what I am saying. The current CA situation is bad, the "abandon the CAs and come up with an interim plan" solution is worse.

Meanwhile, the world in which most of the mainstream browsers support TACK is imperfect, but immediately better than what we have now. TACK also sets us up to continue decoupling ourselves from the CA system.

Nobody has to abandon CAs just because a new model is adopted. It's not like the old infrastructure will stop working (as long as there's a free market).

Pre-loaded public key pinning shipped by modern browsers is a better way forward IMHO. If you're trusting your browser enough to run their software, you might as well get your public keys from them, too. IMO, TACK works about as well as self-signed certs with the 'remember this certificate' option - just don't use a Starbucks connection the first time you browse the site.

No, that's not true. When you bring your laptop to Starbucks and deal with a site with self-signed certs, you're susceptible every time to that MITM attack; on the other hand, you're susceptible only the first time you connect to a site using TACK.
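To make that distinction concrete, here's a minimal sketch of the trust-on-first-use idea (this is an illustration of the concept, not TACK itself; the `PinStore` class and fake certificate bytes are made up for the example):

```python
import hashlib

class PinStore:
    """Trust-on-first-use pin store (illustrative only, not TACK itself).

    The first time we see a host, record the SHA-256 fingerprint of its
    certificate; on every later connection, only accept that fingerprint.
    """

    def __init__(self):
        self.pins = {}  # hostname -> hex fingerprint

    def check(self, hostname, cert_der):
        fp = hashlib.sha256(cert_der).hexdigest()
        if hostname not in self.pins:
            # First connection: we have no pin yet, so we trust and record.
            # This is the one window where a MITM can slip in.
            self.pins[hostname] = fp
            return True
        # Every subsequent connection must present the pinned certificate.
        return self.pins[hostname] == fp

store = PinStore()
real_cert = b"stand-in for example.com's real certificate, DER-encoded"
mitm_cert = b"stand-in for an attacker's substituted certificate"

assert store.check("example.com", real_cert)      # first visit: trusted, pinned
assert store.check("example.com", real_cert)      # same cert later: accepted
assert not store.check("example.com", mitm_cert)  # MITM after pinning: rejected
```

With a self-signed cert and no pinning, every connection is that vulnerable first connection; with pinning, only the first one is.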

(Small technical nit-pick: the mass automated MITM of all postcards is entirely possible and with a ridiculously small budget too.)

"In the US Department of Defense, a `trusted system or component' is defined as `one which can break the security policy'." [1]

This really gets to the crux of it. We might deem the CAs untrustworthy, but we currently trust them anyway because there isn't really that much choice.

[1] http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html - Ross Anderson (Trusted Computing FAQ - Q24)

To be fair, CAs and SSL in general were designed to protect e-commerce. As a risk-reward calculation, it probably works for that. The problem is, neither SSL nor CAs are robust enough to protect against state-level actors.

You could also just email your certificate and text or call with the fingerprint. You'd have to be very directly and very actively targeted by a very competent government for that to fail.
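For reading a fingerprint over the phone, the usual form is a colon-separated hex digest of the certificate bytes, like the one `openssl x509 -fingerprint` prints. A small sketch (the certificate bytes here are a placeholder, not a real DER blob):

```python
import hashlib

def fingerprint(cert_der):
    """SHA-256 fingerprint of a certificate in colon-separated hex,
    the form that's practical to read out over a call or text."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# In practice cert_der would be the DER bytes of your self-signed cert.
print(fingerprint(b"placeholder for DER-encoded certificate bytes"))
```

The recipient recomputes the fingerprint from the emailed certificate and compares it character by character against what you read them.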

> TextSecure and RedPhone could serve as appropriate secure replacements

sadly those are only available for Android.

...and, under these sorts of regimes, will likely get blocked should they gain any sort of real traction anyway. (Moxie's post is clear that they want to intercept, and block what they can't.)

Without jailbreaking or a dev cert, you can't ensure that an app you install from the App Store on iOS isn't backdoored anyway.

I'm an iOS devotee but even I'm going to buy a second phone specifically to support sideloading of crypto software for secure communications. My phone's primary function is to communicate (despite all the smartphone value-adds) and secure and private communications are a pipe-dream on iOS.

iMessage is great (and end-to-end encrypted) but if I can't control the list of keys to which it encrypts, it's only as secure as Apple (and presumably the DoJ by extension) allows it to be.

With Android all you have to do is tick a checkbox to install apps that are not from the market, you don't need to root it and you don't need a special certificate.

Oh and if you want a dev user, it is a one time fee of 20 usd (or was, when I got mine).

You can safely assume that mobile software has been already backdoored. If not by the vendors directly, it is by the companies that are selling "lawful interception" tools based on 0days and what not.

iOS versions are in the works!

Excited to try the iOS implementation. Is a desktop client (OSX, Linux, etc) for RedPhone feasible? I understand (I think) that RedPhone doesn't use usernames, but instead uses your phone number as the identifier - but I'm curious if that's a structural restriction at this point. Every time the issue of secure communications comes up the issue of what program to use comes up as well. There aren't a lot of good options at this point. Jitsi seems to be the best people can suggest.

Great news! Any ETA?

SA are just trying to catch up with the technology that countries in the west, such as the USA and UK, and other technology adept countries such as China have had for a considerable amount of time.

I suddenly had a realization why someday in the future everyone is going to want their own personal satellite.

Government interception/manipulation (or any other party) would become rather difficult.

Until hunter-seeker satellites intercept other people's micro satellites and destroy them, or worse 'wiretap' into their internal computers without their knowledge.

To be fair, is there anyone that doesn't want their own personal satellite? C'mon, that's pretty cool, right?

And I don't think owning a satellite makes your communications secure simply by virtue of owning a satellite. I'd argue that you'd need to own the methods of communication to and from the satellite, which probably isn't realistic.

So according to a comment in that article, Taher ElGamal would be working in KSA doing interception... Does anyone have a source? We need to update Wikipedia; this is shameful if true.

I'm a total outsider to the security community. Of course I understand that there are plenty of hackers selling exploits to shady government actors, but am I to understand from this article that the practice is generally not considered abhorrent and immoral? Like, there's a real debate to be had here?

How do the people at Viber, Line and WhatsApp plan to respond to this? Should they implement OTR?

The easiest thing they can do is use HTTPS and implement cert pinning.

Mobily should presumably be shipping "special" loads on their phones, which would bypass cert pinning.

Hm, I'm surprised more people aren't angry at the lack of adoption of cert pinning in mobile apps. It seems like no one cares to prepare for attacks like this, despite widespread knowledge that they occur?

The reality is that they will eventually find a company or five that will do exactly what they asked. Thanks to their checkbook.

Moxie: Do not go to Saudi Arabia, you are probably persona non grata by now.

The persona non grata status isn't limited to the .sa region -- he will likely feel retaliatory measures implemented against him from within the safety of the West.

At the risk of sounding paranoid, if I were Moxie, I'd be on the lookout for sure :/ when you get into this sort of thing, not to mention the money involved, you're running serious risk of being "cancered" to death.

Unfortunately, this is the world we live in now. It's only going to get worse, and sooner or later it will self-destruct.


As long as so many people are still willfully ignorant of such things, maybe.
