Facebook, WhatsApp Will Have to Share Messages With U.K.? (bloomberg.com)
1179 points by phissk on Sept 28, 2019 | 556 comments

We were surprised to read this story and are not aware of discussions that would force us to change our product.

We believe people have a fundamental right to have private conversations. End-to-end encryption protects that right for over a billion people every day.

We will always oppose government attempts to build backdoors because they would weaken the security of everyone who uses WhatsApp, including governments themselves. In times like these we must stand up both for the security and the privacy of our users everywhere. We will continue to do so.

Will, Head of WhatsApp

Will you have something like a warrant canary [1] to let users know in case there ever is a compromise of security?

[1] https://en.wikipedia.org/wiki/Warrant_canary

A warrant canary isn't proof of anything.

Can you add some substance to this? Why do you think warrant canaries are weak?

Warrant canaries haven't been tested in court. (They have been used as notification of an NSL, though.) In particular, judges are human beings, not robots, so the laws are interpreted and implemented by humans. Because of that, removing a warrant canary comes close to communicating with the affected party, in violation of an NSL. Thus, removal of the canary most likely means an NSL was received, but the canary staying up doesn't necessarily mean that there wasn't one. Lawyers at every organization have considered the situation and advised their clients, but those lawyers are not at the FBI.

I don't think you're meant to remove warrant canaries when you get a secret court order, you're just meant to continue renewing your warrant canary at a regular interval as long as you don't get a secret court order.

My understanding is that they can prevent you from removing warrant canaries but they can't force you to continue announcing "I have not received a secret warrant".

Tactically, warrant canaries are dead-man switches. Conceptually, you can't compel someone to explicitly lie publicly.

But the FBI could advise the canary poster that not continuing to post the canary notice could lead to legal action (esp. since that person has willfully put him/herself into the situation). Then it would be up to the recipient of the NSL to decide if it's worth that risk, which is, as stated above, untested. It's a fine line between telling them to lie versus telling them the ruse could be in violation of the gag order.

Conceptually you can't, because as a citizen you can attach to the canary any requirement that the law can't compel you to fulfill. For example, you can pay to publish the canary: the state can't compel you to spend money on it. Or you can commit some small, petty crime with it (say, an IP violation).

In places where there are limits to what the government can do to and with you, it's possible to resist.

Why are you arguing about what is conceptually possible? The reality is such that people can absolutely be compelled to lie in public, especially for "national security" means. It happens all the time. Failing to update could signal something, but continuing to update means nothing. "Resistance" and other such concepts don't hold up to scrutiny against shareholders and 40 year sentences.

“Compelled speech” has been tested in the Supreme Court.

EX: https://en.m.wikipedia.org/wiki/West_Virginia_State_Board_of...

The US government can’t legally compel you to lie, but may restrict what you can say.

Because thinking otherwise is saying that nothing really matters, the government can do anything at any time and you are toast no matter what.

The truth is in the middle for all nations on this earth. Adopting reality as a context is not nihilism.

Well... that might be true. In the words of George Orwell: https://www.youtube.com/watch?v=iP3T5tEs7yI

Welcome to the real world, kid. Population: all of us damned souls.

The constitutionality of whether the US Government can force someone to update a warrant canary has never been tested. Until it is, it’s foolish to declare with certainty whether it is or is not legal. We can only speculate at best.

We know that the legal bar for forcing someone to speak or not speak is high (compelling state interest), but national security has usually been held to pass such a bar.

If you can be compelled to be silent while breaking constitutional guidelines on the basis of national security, you can be compelled to update a beacon.

Warrant canaries are nice to have, but viewing them as something which provides proof of absence of government meddling is incorrect.

Perhaps, but that legal theory has never been tested in US federal court (as far as we know). It's entirely possible that the judicial branch wouldn't allow the executive branch to force private citizens into actively making false statements.

Right. My whole point is that there are unknowns in the equation which render the proposition void.

A lack of one is.

I've often wondered whether or not a sufficiently well worded warrant could require that the warrant canary remains published unchanged, rendering the warrant canary useless.

The idea behind the canaries is that they expire, and that one cannot legally force someone to sign false statements. So if no new canary is published when the old one expires, that's a red flag.
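The expiry mechanics can be sketched in a few lines of Python (the statement text, dates, and renewal interval below are all made up for illustration):

```python
import datetime

# Hypothetical canary: a signed statement that must be renewed on a
# fixed schedule. An expired canary is a red flag; a live one proves
# nothing, for the reasons discussed above.
CANARY = {
    "statement": "We have not received any national security letters.",
    "signed_on": datetime.date(2019, 9, 1),
    "renew_every_days": 90,
}

def canary_status(canary, today):
    age_days = (today - canary["signed_on"]).days
    if age_days <= canary["renew_every_days"]:
        return "alive"      # renewed on schedule; no conclusion possible
    return "expired"        # renewal missed: treat as a red flag

print(canary_status(CANARY, datetime.date(2019, 9, 28)))  # alive
print(canary_status(CANARY, datetime.date(2020, 3, 1)))   # expired
```

Note the asymmetry: the check can only ever raise an alarm on silence, never certify that no order was received.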

Please. A receipt for a $10 wrench and any IT guy will crumble.

That wouldn’t even be illegal, since they never hit you with a wrench - you just imagined they were about to go xkcd on you

If so, sounds like someone from FB Legal and the SEC should have Words with Bloomberg. Wouldn't be the first time they've intentionally and maliciously misrepresented or lied about an infosec issue to the detriment of a company in order to move the market (the Supermicro "grain of rice" story...)

Looks like more crappy security reporting from Bloomberg: https://twitter.com/alexstamos/status/1178308065268920320

If you read the (short) story to the end, you will see it was first reported by the New York Times.

The reporting is accurate.

"are not aware of discussions that would force us to change our product."

It reads like your product is already compatible with government eavesdropping.

Ha, at first I thought you were making a joke that WhatsApp could have been aware of the conversations by ... eavesdropping on the government employees who are using WhatsApp.

Reminds me of the timeless http://bash.org/?88575

Do you currently have any backdoors installed?

We do not.

You don't have to take our word on this -- I wouldn't want you to. As others on this thread have pointed out it's possible enough to tear through our binaries that if we did have a backdoor it would be discovered.

> it's possible enough to tear through our binaries

No, it's not "possible enough" and I strongly suspect you fully realize that.

A backdoor doesn't need to be in a form of an IF statement or something comparably obvious and silly. It can be a weakly seeded PRNG that would allow a "determined party" to brute-force the key exchange in a reasonable time. That would take man-years to fish out from a binary, and that's without considering that you may (be forced to) distribute an altered binary on demand and to specific targets only.
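As a toy illustration of the weak-seeding idea (entirely hypothetical code, not WhatsApp's: the timestamp window and the key-derivation scheme are invented for the sketch):

```python
import random

def make_session_key(seed):
    # hypothetical key derivation from a (badly) seeded PRNG:
    # the 128-bit key has only as much entropy as the seed
    return random.Random(seed).getrandbits(128)

# The victim's client seeds with the current UNIX time. The attacker
# only knows the key was generated somewhere in a one-hour window.
window_start = 1_569_600_000              # illustrative timestamp
secret_seed = window_start + 1234         # unknown to the attacker
session_key = make_session_key(secret_seed)

# A "determined party" brute-forces all 3600 candidate seeds in a
# fraction of a second and recovers the key.
recovered = next(
    s for s in range(window_start, window_start + 3600)
    if make_session_key(s) == session_key
)
assert make_session_key(recovered) == session_key
```

The point is that nothing in a binary diff looks "wrong" here; the code calls a PRNG and derives a key, exactly as a clean implementation would.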

So in the end all we have - realistically - is in fact just your word. There's no way for you to prove that you are trustworthy by pointing at some random binary. The only option is to distribute reproducible builds from an audited open source.

Distributing an altered binary to specific targets should be impossible, as WhatsApp doesn't control the distribution; Apple and Google do. They would also have to be complicit for a targeted attack to be feasible. Having to distribute the same binary to everyone makes it much harder to conceal a backdoor.

Are you sure that there's no way for whatsapp to download and execute some code which will lead to upload of protected information?

Simple example: I'm sure that whatsapp main window is webview. Imagine that application inserts some kind of resource (e.g. CSS) from whatsapp server. So now whatsapp server can serve slightly altered CSS which will leak secret data via custom fonts, etc and you won't be able to find that, unless you're intercepting all traffic and can decrypt it (and apps nowadays love to pin certificates).

This is an imaginary attack, I have no idea whether WhatsApp does that. But HTML is a powerful and dangerous beast, yet it's used a lot in applications for rich media.

That doesn't help against attacks by US authorities. If they can make Facebook create a backdoor, they can make Apple and Google distribute it.

Signal has the same issue.

Juniper revealed the ScreenOS backdoor with a quiet patch over Christmas. Within a month the backdoor was fully understood.

They had a diff, they knew where it was. This helps a lot.

I disagree that it would take man-years to fish that out from a binary. Black hat and white hat hackers do this all the time.

I agree. The crypto used is industry standard, and the actual process all the way from random number generation to deriving a key is relatively easy to follow.

Active ways to attack the client to make it leak the key are far more worrying - but even an open source project wouldn't protect against that.

Again, it may very well be a vanilla TLS, but then you have a bit of code in some obscure corner that repoints random() to an alternative weaker implementation when some conditions are met, including, for example, not being run under a debugger and not having certain popular functions trampolined.

Good luck finding even this without a fine-toothed comb. And that's us just getting started with code flow obfuscation.

No source = no trust. It's as simple as that.

Unfortunately, the WhatsApp terms of service say you must not "reverse engineer, alter, modify, create derivative works from, decompile, or extract code from our Services"

In (at least) the US this wouldn't hold up in court if they went after you: https://cr.yp.to/softwarelaw.html.

Of course, if WhatsApp detected an abnormal or tampered version of the app, they could suspend or disable your account. Security labs that do reverse engineering of this sort probably do it on test handsets with burner numbers and identities, so it wouldn't affect any personal accounts they use.

Perhaps, I just thought it was an odd thing for the head of WhatsApp to say: You don't have to take our word on this - just do this thing that we prohibit in our terms of service.

Whatsapp is owned by Facebook. Facebook has never been a leader in corporate governance, or even solid moral decision-making.

This should be completely believable for a company that relies heavily on user and community trust.

That said, @wcathcart: in a community with deep technical expertise like Hacker News, folks do consider how many possible channels and means there are to covertly leak information from applications.

You're correct that in the general case it's likely that tech-savvy users would scan a popular app like yours and find any 'obviously-placed' backdoors. It's an observational and opportunistic approach, akin to the way a passer-by might spot a poorly locked bicycle on a street.

Unfortunately there's an extra level of complexity here - any app may have unusual behaviors that a sophisticated attacker could trigger for individual users to exploit them - and it's really, really hard for the security-conscious of us -- who might never see or meet those users -- to truly trust that your app is doing what you tell us it is, whether that's end-to-end encryption in all situations, or anything else.

The reason is that without being able to see how the app is written, verify that it's genuinely the same compiled version running on all devices, and audit the behavior it will have under exceptional circumstances -- external observers just don't know.

I'm not expecting you to make the source freely available, incredible though that would be - but I'm attempting to explain the potential disconnect in dialogue you might find with some commenters.

I'm not sure it wasn't answered before, but why do you refuse to open-source the client app? As you say yourself, you try to have no secrets on the client side, encryption is supposed to be e2e, the technology is well known and implemented in many alternatives, and basically there seems to be nothing to protect in the app itself.

Please implement a warrant canary while you still can, before you are legally compelled not to.

> tear through our binaries

That's explicitly against your terms of service.

We now have explicit, written authorization from the head of WhatsApp to reverse engineer ("tear through") the binaries. The ToS only prohibits unauthorized reverse engineering. I agree with you that it was disallowed prior to this comment, but I think it's OK now.

Thanks for your words, but unfortunately I think your hands are tied on this one. Australia was the first pin to fall within the Five Eyes, and I think the rest will soon follow.



Is it true for US or for every country? How does WhatsApp legally operate in Russia?

It's true in every country. We are a global service, and our policy on backdoors is the same everywhere: we do not have them and we vigorously oppose them.

Sometimes this leads to us being blocked. We were blocked in Brazil, for example, but that block was overturned in the courts.

Thanks! What is your opinion on a rumor that FSB doesn’t have any complaints because they found unintentional/unknown vulnerability that allows them to read WhatsApp messages? Should WhatsApp users be concerned about that?

We do know that phones and tablets are vulnerable - so it's not like we're unaware of any backdoor that may also be used to subvert whatsapp.

It'd indeed be interesting to know if the FSB had some kind of baseband vulnerability that they'd used willy-nilly to facilitate dragnet surveillance.

I suspect William Binney was right though - blanket surveillance is just expensive and hides your needles in a mountain of hay; you really want high quality in the data you store in order to ease extraction of meaningful information / intelligence.

(that's not to say that aggregate meta data isn't interesting - just that with actual content noise is a problem)

So are you saying the backdoors will or won't be introduced to WhatsApp?

Will not. We are completely opposed to this. Backdoors are a horrible idea and any government who suggests them is proposing weakening the security and privacy of everyone.

Will you pull out of the UK and US? Seems very unrealistic. You have no choice but to obey the law, even if the law is ridiculous.

Have you considered architectural changes that will allow for the app to be compiled and deployed by an affiliate corp outside of these jurisdictions?

Thanks for the clarity.

So will WhatsApp refuse to comply, if this goes forward?

And is that even possible?

I do appreciate that Facebook has the resources to fight. To fight an NSL, even. But IANAL, and have no clue.

Facebook gives the government this, and the government enacts regulations to “protect” Facebook. I'm sure Facebook is salivating at the thought of getting even more access to your sensitive data. Once the backdoor is installed, who knows who'll have access.


As much as I would like to believe all promises coming from corporate execs - Facebook has been caught lying more than enough. So thanks for trying, but I have uninstalled WhatsApp and I'm happy with Threema.

Have you considered Riot (Matrix) or Signal? Both are open source so it's possible to verify claims made on their website, which is a lot less possible with proprietary software like Threema.

And with Matrix apps you can choose to run your own server. Not sure what legal ramifications that has, but practically speaking it allows the possibility of eliminating another potential weakness.

... but probably opening many new ones, unless one has a really strong security team in place.

These vulnerabilities would then at least be bespoke, particular to a specific server, preventing mass surveillance. At least if you're not talking about potential vulnerabilities in synapse (Matrix server software), but then a strong security team wouldn't help that much.

And? That is what you feel. What guarantees do we have?

Glad you have the backs of the people when the government doesn't.

>people have the fundamental right to have private conversations

Any comment on this?


If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.

Platforms that rely on trust (in this case, trusting that FB isn't doing bad things) provide very weak guarantees about privacy/security. They could easily include a keylogger in WhatsApp and bypass the e2e encryption, for example, and us regular folk have no way of knowing.

> If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.

Careful - you're right that WhatsApp is untrustworthy, but laws that force them to add backdoors could well be applied to open-source code as well. Or make possession of non-backdoored software, open or not, illegal. Or compel OS/hardware manufacturers on which the code runs. The law is a dangerous thing to ignore.

If I can compile audited code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors (excluding perhaps hardware level backdoors--but that's why we do the encryption in software).

Implementing hardware backdoors that are opaque to end users is theoretically possible, but more difficult in practice. You could, for example, build a screen/monitor that just captures everything on the screen and forwards it to some other entity, but in practice it's not so easy because of bandwidth limitations, etc. I suppose it would be much easier to create a physical keyboard that phones home over a mobile network, although it would only give you half the conversation.

*edit: added the word "audited".

> If I can compile the code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors

Where does that leave the rest of society? Having open source software and hardware is not enough, we also need laws that prohibit mass surveillance and support our efforts to uphold human rights.

Relying on laws leaves a lot of wiggle room for bad actors, slippery slopes, and political opinions changing over time. Laws are based on trust in institutions (do you _really_ trust large governments?).

Laws are probabilistic, whereas math & source code is deterministic. You can verify that computer code does what it says it does. Laws depend on enforcement and complicated judicial systems (based on humans) to interpret and apply the laws, which means they can effectively change over time, and the goalposts are never stationary.

I agree that laws are not enough; independent verification must be possible. But your right to use secure software, and to audit it without risking spending your life in prison or being killed, is ensured by laws.

This is why moving the goalposts and further normalizing surveillance is extremely dangerous. The rights that you enjoy today are not universal, and can obviously be eradicated in less than a generation.

Agreed! Thankfully software is protected by the 1st amendment in the US[1]. If not for that, I don't know where we'd be.

[1]: https://www.eff.org/deeplinks/2015/04/remembering-case-estab...

There is no deterministic, technological solution to the problem that all technological solutions can be banned and its users threatened with draconian punishment.

There is no mathematical escape hatch from society. All we have is a messy assortment of technological mitigations that change the cost of surveillance.

These mitigations work best in combination with constitutional rights that limit what the government of the day can do, triggered by the latest outrage in the news.

It isn't either-or. You can have reasonable laws that protect people's rights and privacy, and compile and check the source if you wish to do so.

Yes, and we should strive to use all tools for a defense in depth of our rights. Laws are the first line of defense, then politics/media, then technology from software to hardware, and finally trust in our fellow humans and ourselves. That way if one (temporarily) fails, we can fall back to the others while repairing the breach.

Auditing a large codebase is also probabilistic. Oversights happen, and there are ways to write code that looks like it does the intended behavior while also doing a second, nefarious thing. See https://www.ioccc.org

the backdoor could be in the hardware - the logical conclusion of your position is that we should all fabricate our own computers from scratch so that we're sure that they're secure.

This is clearly a straw man, no one wants to do this, or is suggesting that we do this. But at some point, even the most hardened OSS advocate has to trust someone (usually the hardware manufacturer). You cannot verify that the device you're on doesn't spy on you, you have to rely on the manufacturer's word that it doesn't. And the manufacturer's suppliers, of course, because the manufacturer is trusting them.

Somewhere along the stack, we all have to draw a line and say "beyond this point, I trust that I am not being spied on". You choose to draw that line at the hardware point. Others choose to draw the line at the software point.

That's a weird way to see things. You are pretty much saying that wearing a bulletproof vest won't save you from the bullet, but laws forbidding killing people somehow magically will. Well, I suppose you also could literally say that, and I kinda can see where you are coming from, but I think it's somewhat delusional.

Proper law enforcement (mind the "enforcement" part) can make a neighborhood safe enough that nobody tries to shoot you and you won't need to wear a vest. But at the end of the day, if you are messing with bad people in a bad neighborhood: bulletproof vests are real, laws are not.

And when you are talking about government enforcing the laws that are supposed to forbid uncontrollable government agencies to do what they do: well, government kinda is bad people in the bad neighborhood.

The most concerning thing about mass surveillance and mass manipulation isn't the direct impact to some hypothetical enemy of the state (although that is concerning, people in that situation are more or less a lost cause), it's that public discourse and democracy can be pushed around by some small controlling group.

Sadly it means the rest of society being under surveillance while actual criminals enjoy day-to-day privacy.

And it's been that way since the days when slavery was politically correct, rather than having to jump through all the debt-slave hoops it has to now.

>If I can compile the code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors (excluding perhaps hardware level backdoors--but that's why we do the encryption in software).

Not necessarily. Have you ever heard of Ken Thompson's backdoored C compiler?


>Re-write compiler code to contain 2 flaws:

>When compiling its own binary, the compiler must compile these flaws

>When compiling some other preselected code (login function) it must compile some arbitrary backdoor

>Thus, the compiler works normally - when it compiles a login script or similar, it can create a security backdoor, and when it compiles newer versions of itself in the future, it retains the previous flaws - and the flaws will only exist in the compiler binary so are extremely difficult to detect.

It's not necessarily a viable attack method today, but it's the lesson behind it that's important. Anything can be compromised.
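The quoted scheme can be caricatured in a few lines, with string rewriting standing in for real compilation (purely illustrative; the markers are invented):

```python
# Toy sketch of Thompson's self-propagating compiler backdoor.
def evil_compile(source: str) -> str:
    binary = "BIN(" + source + ")"       # stand-in for actual compilation
    if "login" in source:
        # flaw 1: backdoor any login code it compiles
        binary += "+LOGIN_BACKDOOR"
    if "compile" in source:
        # flaw 2: when compiling a compiler (even from perfectly clean
        # source), re-insert both flaws into the emitted binary
        binary += "+SELF_PROPAGATING_FLAWS"
    return binary

print(evil_compile("def login(user, pw): ..."))  # backdoored output
print(evil_compile("def compile(src): ..."))     # infection survives rebuild
```

The unsettling part is that every line of source you can read is honest; the subversion lives only in the compiler binary.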

Source code availability isn’t really a solution to the trust problem. Sure, it allows for an audit, but the practical truth is that few people are qualified to perform those audits and few of them have a sufficient incentive to spend their time doing so.

So you still just invest trust in the maintainer or — if you’re lucky — the third party auditing firm who was paid to review the code.

That you can review the code doesn’t mean that anyone does so. At least not in an exhaustive and relevant way.

Closed source or open, the problem is made even worse now that we live in a Package Manager culture where even the simplest applications adopt dozens of dependencies.

I’m not saying that you should trust Facebook and their closed source applications, just that you’re not really all that safer trusting anyone else just because their source code is available.

I'm not sure I'd call the dependency habits of users an entire culture in and of themselves. Plus, I feel like this splits hairs; going down the rabbit hole of "well, who is checking the open source code" and "well, who is checking the person checking the open source code" leads to endless complexity, especially when the move in question is more symbolic than substantive.

If WhatsApp did not use e2e encryption by default (and they didn't), then there was the possibility of governments reading the communications anyway. Does this new announcement really lessen the security and privacy of the users? To me, it sounds like they are making the policy clearer to the public, since the US / UK governments have not explicitly made press releases telling citizens which of their communications will be monitored. While I am very much a proponent of end-to-end encryption for ALL communications, I think this move isn't going to sacrifice privacy that users previously had.

> If WhatsApp did not use e2e encryption by default (and they didn't)

Correct use of the past tense. In the present, e2e encryption is on by default.

> Does this new announcement really lessen the security and privacy of the users?

Yes it does, since e2e encryption is enabled by default now. Best I can tell, there’s no way to disable it either.

Your assumption how backdoors work is very limited/wrong, I am afraid.

You assume that you actually understand the code well enough to identify the backdoor - e.g. as some sort of function that will bypass authentication when some secret hardwired password is provided (to give a dumb example).

However, to give a real world example of backdoored crypto, it is nothing of the sort. For example, the issue with the potentially backdoored Dual_EC_DRBG pseudorandom number generator has been known since 2004 at least - but the algorithm has been standardized by ISO/NIST and used for years until the potential backdoor issue was widely publicized following the Snowden leaks and the standard was withdrawn.

Good luck finding something like that only by reading code, unless you are an expert in crypto and mathematics. If you were only auditing whether or not the code matches the published, supposedly correct, standard (or algorithm description), you would never find this. The backdoored code was working completely fine, exactly as intended. But the weak random number generator allowed an adversary with sufficient computing resources to break the encryption.
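A toy analogue of the Dual_EC_DRBG concern, using modular exponentiation instead of elliptic-curve points (all constants here are arbitrary, and real Dual_EC also truncates its outputs, which this sketch omits):

```python
# Generator outputs pow(Q, state, p); next state is pow(P, state, p).
# The designer secretly chose P = Q^e mod p. Anyone who knows e can
# recover the next internal state from a single observed output:
#   (Q^s)^e = (Q^e)^s = P^s = next state.
p = 2**61 - 1                 # a Mersenne prime (toy group modulus)
Q = 5                         # public constant, as in the standard
e = 123456789                 # the designer's secret trapdoor exponent
P = pow(Q, e, p)              # public constant, published innocently

def drbg(seed, n):
    s, outs = seed, []
    for _ in range(n):
        outs.append(pow(Q, s, p))   # what the user sees
        s = pow(P, s, p)            # hidden internal state update
    return outs

outs = drbg(seed=987654321, n=4)

# The adversary observes one output and predicts all future ones.
recovered_state = pow(outs[0], e, p)
predicted_next = pow(Q, recovered_state, p)
assert predicted_next == outs[1]
```

As the comment above notes, an auditor comparing this code against the published algorithm would find a perfect match; the weakness lives in how the constants were chosen, not in the code.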

Yes, it is theoretically possible to create backdoors that are hard or impossible to detect, if you start 20 years ago and subvert the standards used by the entire industry.

An improvement in security and privacy isn't limited to "make it impossible, even theoretically, for anything bad to ever happen OR you've accomplished nothing". Most back doors aren't inserted by competent NSA-level actors 20 years in advance. Most are "whenever a message passes through, send a copy to this third party". They are inserted by court order for a specific case due to the government becoming interested late in the game due to a specific case. For example, when terrorists start using some secure email service, the government tries to force the service to allow them to snoop on the relevant conversations. Open sourcing the code would allow you (with the help of the community) to detect these sorts of attempts when the product involves end-to-end encryption.

So while having the source and a community auditing changes to that source doesn't prevent every possible attack against your privacy, it prevents almost every one that is plausibly detectable, which is literally as good as you can do.

Would you trust yourself to find any innocent 'bug' that could lead to a privilege escalation? Decades of vulnerabilities in software, open source or closed, would seem to contradict that belief.

Yep, it's a super difficult problem. Having the source code available, and being able to validate the builds yourself makes everything a lot easier.

It's one of the reasons the Debian project has worked so hard at reproducible builds: https://wiki.debian.org/ReproducibleBuilds/About

Bugs can certainly occur (like Heartbleed etc) but the alternative (closed source opaque binary blobs) is much worse.
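The verification step in reproducible builds is just hash comparison; the hard part is making independent builds bit-identical in the first place. A minimal sketch, with byte strings standing in for real build artifacts:

```python
import hashlib

def digest(artifact: bytes) -> str:
    # content-address the build output
    return hashlib.sha256(artifact).hexdigest()

# Two independent builders compile the same audited source.
my_build     = b"binary built from audited source"
vendor_build = b"binary built from audited source"       # bit-identical
tampered     = b"binary built from audited source+door"  # targeted alteration

# A matching digest ties the shipped binary to the audited source;
# any tampering, however subtle, changes the hash.
assert digest(vendor_build) == digest(my_build)
assert digest(tampered) != digest(my_build)
```

This is why reproducibility matters for the targeted-binary attack discussed earlier in the thread: one user receiving a different binary becomes detectable by anyone who rebuilds and compares.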

No ordinary person is capable of building anything from source.

Check out Nix. Deterministic source derivations of pretty much anything you might want to build, trivially re-buildable from source by anyone. It takes seconds to install the "Nix Shell" on pretty much any of the modern OSes.

Now, to avoid the "Reflections On Trusting Trust" exploit, building the C compiler toolchain from known-good "root" compiler/linker toolchains, and then comparing the output vs. self-compilation is quite a bit harder.

Define "ordinary person", as plenty of people here have. However, there's very little difference between downloading a reproducible system that compiles everything on your machine and downloading a binary with a known checksum from a perspective of trust.

I used to use Gentoo, and I built my entire OS from source. I'm not extraordinary in any way, I'm just an ordinary person who has a deep interest in software and computers.

I'd wager that fewer than 1/100,000 humans on the planet could feasibly build Gentoo from source.

I've done two "stage one" builds of Gentoo. I'm not super skilled, but I had a lot of time and reference material. My bet is that folks could, but would not want to. There is a significant time cost.

Also, I'm still using one of those original builds on my laptop - upgraded of course...still mad love for my daily driver.

>folks could but would not want to.

And so they never learned, and so they "can't".

In the same way, I can't use Gentoo or vim or compile either or ski or... :D

It's a pretty automated process. I'd estimate 1/10 of all people who can use a computer and install software at all could do it if they wanted to and had sufficient time.

Which is, in itself, extraordinary. It can't be the solution for most of humanity.

The Nix project has been a spectacular help in this regard.

I like this as a good counterpoint: https://www.schneier.com/blog/archives/2006/01/countering_tr...

If you have multiple compilers and they aren't all infected in exactly the same way (e.g. one is not infected, or they have different types of infections), then you can detect that there's a problem with them.
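The multiple-compilers idea can be modeled crudely: if two toolchains that both claim to implement the same compiler source disagree on any input, at least one is lying. (Python functions stand in for compiled binaries here; the rigorous version of this is Wheeler's diverse double-compiling, which is considerably more careful.)

```python
def honest_binary(src):
    """What a faithful build of our compiler source should compute."""
    return "OUT(" + src + ")"

def trusted_compiler(compiler_source):
    # independent, trusted toolchain: yields a faithful binary
    return honest_binary

def suspect_compiler(compiler_source):
    # possibly subverted toolchain: the binary it emits quietly
    # injects a backdoor whenever it compiles login code
    def evil_binary(src):
        out = "OUT(" + src + ")"
        if "login" in src:
            out += "+BACKDOOR"
        return out
    return evil_binary

compiler_source = "source of our compiler"
bin_t = trusted_compiler(compiler_source)
bin_s = suspect_compiler(compiler_source)

# Both binaries claim to implement the same source, so their outputs
# should be identical on every input. A mismatch exposes the subversion.
probe = "def login(user, pw): ..."
print(bin_t(probe) == bin_s(probe))   # False: the suspect toolchain lied
```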

Just because it's open source doesn't make it trustworthy or bug-free. Are you going to audit tens of thousands of lines of code to find an obscure vulnerability that a state actor has gotten added in a way that's not obvious?

Do you think?

As soon as you’re building something written in, say, Javascript, then any semblance of assurance goes out of the window.

JS is an easy target. What about C or C++? You could audit the code but have you also audited your compiler? What if you used Visual Studio?

Code is an easy target. Can you trust the auditors themselves and in the absence of that, your own ability to detect a vulnerability?

The only bulletproof solution is to not use software.

That said, most of us are not as important or recognisable as we believe we are. The layperson won’t have a good reason to isolate their computer and install an airlock between their aluminium-lined office and the rest of their house.

This is just an updated version of the story about the US government scanning your emails for keywords after 9/11. They don't even need to actually do it; they just need to say they do, and most people will watch what they say in their communications.

> If I can compile audited code from source myself, without any backdoors, then I can be reasonably assured there aren't any backdoors

Many people can compile code from source. Not so many people have the ability to audit code for obscured backdoors. The number of people who are both capable of auditing and have the necessary time is practically nil.

No, you would also have to audit the entire codebase.

You could also decompile the Whatsapp APK and do the same thing (it's Java after all).

> No, you would also have to audit the entire codebase

Look no further than the OpenSSL Heartbleed vulnerability

WhatsApp contains a significant amount of native code, making it much more difficult to analyze.

How are you going to get that source code onto your phone, when loading unauthorized code becomes illegal? Who are you going to talk to, when everyone else is using the approved apps? Try to think ahead a little. This arrogant naive thinking is how geeks lose and politicians get it wrong while you're not paying attention and then everyone ends up living with the consequences.

Debian's OpenSSL fiasco. For the record, I use, like, and support Debian, OSS, and free software, but we must not spread myths.

You're neglecting how a group of ignorant lawmakers could simply pass a law that forces secure-boot so your computer can only run proprietary OSes. Then they can force OS makers to only allow app store installs.

>If I can compile audited code from source myself

That assumes you can get the code. If it's illegal to distribute non-backdoored software, the code might be hard to get, for example Github might be forced to take it down.

Yes, of course. The old argument that Linux is free of backdoors because it is open source. It's such a ridiculous claim. Software systems on that scale are so complex that there is NO way whatsoever to make sure there isn't a backdoor in there. I would go as far as saying that open-source software is by definition more vulnerable to backdoors than closed-source software, exactly because the source code is available and anyone with some credibility can submit patches. It is a dreamscape for the NSA.

Hacking into closed source code is much more dangerous (politically) for them and also more difficult.

The thing with closed-source is that the binaries can be backdoored by the vendor on behalf of the NSA. This is not a new practice. With open-source code this venue is less likely to work and needs an additional layer of deception. Sure the NSA may manage to plant vulnerabilities, but flaws will not be persistent in the same way as when they're planted in cooperation with a vendor.

There's currently a push for reproducible builds which further hardens distros against such attacks.

The Whatsapp binary is sufficiently transparent that someone determined could write their own client. That's enough info for an expert to verify their "messages are end to end encrypted and we don't know the key" claim.

The fly in the ointment is that the client might have additional functionality to leak the e2e encryption key. That is far harder to find, but if its use were widespread, it would be found by researchers.

The whole point is moot though - whatsapp is designed to (by default) upload cleartext chat logs to google/apple servers. Since all chats have 2+ recipients, the conversation is only safe from snooping if nobody in the chat has backup enabled, which is unlikely.

Yep, the chat log backup basically renders WhatsApp completely insecure. They also constantly nag you inside the app to enable it. This is how they caught Michael Cohen (and presumably others). Unfortunately Signal does it, too.

Signal's backups are encrypted.

How are they restored? Is it password-encrypted?

Yes, you're given a random password when you enable backups that you need to use when restoring. They don't get uploaded to the cloud. On iOS there's no backup feature at all AFAIK.

you copy them to your new phone. yes.

And local.. whatsapp tries to get you to send unencrypted backups to google drive.

At least on iOS - whatsapp doesn't seem to nag about backups (i don't think i've ever enabled it), but you can do a local backup/restore if you replace handset. Signal encrypts your chats locally (but not binary blobs/video/images) but they cannot be backed up and restored locally.

I don't think that Signal uploads the backups to the cloud though. It seems like they're stored on device and encrypted.

Attacking or cooperating with custom soft-keyboard makers is much easier. It may only give you one side of the conversation, but no one seems to care whether those are secure, as long as they do a good job of spell checking.

Having a 'secure' conversation definitely implies trust in Google/Apple, regardless of the Whatsapp binary behaving flawlessly (e2e encryption, solid PRNG, no logs etc.)

They could indeed serve you a different binary from the app-store. "Do you think that's Whatsapp you're using?"

Trust in the OS made by Google/Apple is slightly different from trust in the ability of Google/Apple to keep your data stored on their servers secure against nation states.

I have much more trust in the former than the latter.

It doesn't have to be whatsapp that "steals" the key; we have Zygote and Google Play Services to do that.

Unless you have reproducible builds, having access to the source code can be deceiving.

Audits should start with the actual binary. Only if you can ensure that the binary was built from a specific source code and does not contain any other logic (i.e., it was built by a trusted compiler) can you happily skip the decompilation process and analyze that source instead.
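With reproducible builds, that check collapses to a bit-for-bit comparison: rebuild from the audited source yourself, then hash both artifacts. A minimal sketch (the file paths are hypothetical):

```python
import hashlib

def sha256_file(path):
    """Stream a file through SHA-256 to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def vendor_binary_matches_local_build(vendor_path, local_path):
    """True only if the distributed binary is bit-identical to the
    one we built ourselves from the audited source tree."""
    return sha256_file(vendor_path) == sha256_file(local_path)
```

Any mismatch means either the build isn't actually reproducible yet, or the vendor shipped something other than what the audited source produces.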

> If the source code isn't available for audit by 3rd parties (or yourself), and you can't build it from source, then it was never really "secure" anyway. What lawmakers do or don't say is just noise.

It's still nice to see this spelled out since I've seen many people claim that WhatsApp being closed source is not that problematic. This is definitive confirmation that it cannot be trusted anymore and it's time to start working on the problem (both from a legislative point and by seeking out and moving to technological alternatives).

If the source code is available for audit by 3rd parties, but nobody you trust has actually audited it (depending on your paranoia, this may be only yourself), then it was never really "secure" anyway.

Trust isn't binary. You can trust a company to keep your conversation private for most purposes without it being safe from a government wiretap. We do that whenever we use a phone.

Your phone conversations are only presumably private. There's no guarantee and you can't know for sure.

>What lawmakers do or don't say is just noise.

Back in the day, it was illegal to export "good" encryption. There was nothing stopping it from happening technically, just like there is nothing stopping you from stealing from a convenience store, except for the threat of enforcement.

But the threat of enforcement can have a strong chilling effect.

Bad analogy - exporting "good" encryption was illegal, and while for individuals that was basically irrelevant, _companies_ would absolutely follow that law. The analogy is not "you stealing from a convenience store"; it's "you running a company that has a known practice of robbing convenience stores". It's so incredibly illegal, you're not going to do it. There is no realistic hypothetical in which that decision would even remotely make sense.

Wouldn't someone inside the Facebook/Whatsapp team denounce it if that were the case? Like a whistleblower.

FB's history suggests they have a culture of not blowing the whistle when shady things are going on.

I suspect most people want to keep their $500k+/year jobs instead of sticking their necks out. My friends who work at FAANG are largely mentally checked out, and just do it to collect their monies and retire ASAP. You can't pay rent with good feelings.

> You can't pay rent with good feelings.

So just keep making the world shittier. Gotta love individualist capitalism.

I quit my (good paying) job to burn through my savings and start my own company because I want to fix this, so it's not everyone who behaves that way :)

I left the NSA after 4 years because I didn't want to support some of the things going on there. Can't dismantle it, but can at least not contribute.

Look into B Corporations and read the last third of the book Lab Rats. You can make your company serve society and its employees right, without adopting shitty cultures that breed suicides, pissing in bottles, and burnout, and still make huge profits.

To paraphrase Adam Smith, nothing has helped more people more than self-interest in a free market.

Raised the standard of living for billions.

Fortunately for normative social theory and political economy, a lot of research has happened since Adam Smith evangelized self-interest and the free market.

The free market never has existed, and never will. Markets distribute scarce goods unfairly to whoever has the most money, and they create a competitive rather than cooperative substrate for social interaction. These things tear society apart and have always needed to be moderated wherever a market is present.

> The free market never has, nor never will exist.

If you define the free market as nothing other than anarchy, then yes the free market is rare indeed.

But I don't know anyone who speaks in such absolutes.

Which part of Adam Smith are you paraphrasing exactly? Have you read him?

> Which part of Adam Smith are you paraphrasing exactly?

No one quote in particular; just a recurring theme.

"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest"

- The Wealth of Nations

"The natural effort of every individual to better his own condition...is so powerful, that it is alone, and without any assistance, not only capable of carrying on the society to wealth and prosperity, but of surmounting a hundred impertinent obstructions with which the folly of human laws too often encumbers its operations."

- The Wealth of Nations

"The statesman who should attempt to direct private people in what manner they ought to employ their capitals, would...assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever"

- The Wealth of Nations

"In spite of their natural selfishness and rapacity [capitalists] are led by an invisible hand...and thus without intending it, without knowing it, advance the interest of society"

- The Theory of Moral Sentiments

> Have you read him?


But also:

"The man of the most perfect virtue, the man whom we naturally love and revere the most, is he who joins, to the most perfect command of his own original and selfish feelings, the most exquisite sensibility both to the original and sympathetic feelings of others."

Self-interest, yes, but in a system regulated just as much by the church and the aristocracy as by the flow of capital. Pure monetary greed without regard for doing what is right is nothing Smith would ever have advocated.

That's just passing the buck: instead of relying on the goodwill of government employees, you're now relying on the goodwill of Facebook employees. That's still "relying on goodwill" levels of security.

Facebook has a culture of snitching. The company encourages you to report teammates to higher-ups if you suspect something.

Imagine you are someone in a position to do so: what's your risk tolerance?

I agree with this, but I think the main problem is the centralization of trust, or the user having to place trust in one or two entities.

Imagine if 10 or 20 different organizations all had access to the source code and could vouch for the checksum of each build.

While it would be nice if we could trust FB, Apple, etc., it would be much better if we didn't have to, and could simply trust others who have less to lose from alienating government officials.

Bull. You still get an apk, which is easy to decompile and investigate. People absolutely can (and have) audited the WhatsApp code.

It's entirely possible to analyze the compiled program. And it's actively done…

> They could easily include a keylogger in WhatsApp and bypass the e2e encryption, for example, and us regular folk have no way of knowing.

This would be quickly detected by anyone looking at the data the WhatsApp app was sending back to the server (this isn't hard to do on a jailbroken device).

Yes, this is something that literally everyone who reads HN will oppose. Meanwhile do you hear the deafening silence from the average Joe who thinks he has "nothing to hide"?

Don't hate the politicians who keep pushing this. They're just trying not to get fired. And the surest way to get fired in a western country right now is to be seen doing nothing about the terrorism problem and then having terrorist acts committed under your watch. So the politician asks the security forces "what can we do to stop terrorism?" Security says "get us access to messages of terrorism suspects". Seems reasonable, let's go ahead with it.

Yes, we know that it doesn't stop with terrorism suspects. Then law enforcement wants to read the messages of drug kingpins, then drug suspects, then shoplifters, then jaywalkers, then everyone, just to be on the safe side.

But as long as people are told that they need to give up some privacy in exchange for security, they'll take the latter every time.

But by your very comment, it only makes sense to hate the politicians! If all they care about is that they keep their jobs, then isn't trying to get the politicians fired for threatening our privacy the best way to get them to care about our privacy?

No and yes.

Don’t hate the politician for trying to keep their job. In that respect they’re no different from any other person.

Yes, if you can convince politicians that you’re more concerned about the actual damage to your privacy than the hypothetical increase in the probability of a terrorist attack, then go ahead. Do it. Convince them. But my hunch is that most people are far more terrified of dying in a terrorist attack.

From my experience, people in my country rarely think of terrorists. Perhaps these same politicians have manipulated the people into obsessing about this?

The question is not whether you have something to hide. Once they have unlimited access like this, what is stopping them from manufacturing truth and bending the information to support their biases? Privacy, IMO, is more important.

Could you elaborate on the link between unlimited access and manufacturing truth?

If I don't have access to your conversation and people know this, but I claim that I know you said something in such a conversation, people can ask how I know it and demand proof.

If your conversations aren't encrypted and I have access to them, and people know this, then if I claim that you said a certain something in such a conversation, people are more likely to just believe me.

Also, unencrypted data is data that can be altered by a third party.

data can still be signed.
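Right: integrity and confidentiality are separate properties. A message authentication code (or, better, a public-key signature) lets the recipient detect tampering even when the payload travels in the clear. A minimal stdlib sketch, assuming a pre-shared key for illustration (real systems would use asymmetric signatures so verifiers never hold the signing secret):

```python
import hashlib
import hmac

SHARED_KEY = b"illustrative-pre-shared-key"  # assumption, not a real scheme

def sign(message):
    """Tag the message so any later modification is detectable."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"meet at noon")
print(verify(b"meet at noon", tag))   # True
print(verify(b"meet at 11am", tag))   # False: altered in transit
```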

You are absolutely right. I also already assume the government can access virtually anything I am doing anyways. I think we are all naive to believe otherwise.

I also have nothing to hide :-)

My questions to people who say they have nothing to hide:

- What are you and your significant other doing in the privacy of your house?

- Which co-worker do you fantasize about? Be specific about what you'd like to do.

- As a teenager, what legal/illegal drugs did you consume? How often did you pass out, and who were the people you consumed these drugs with?

- What are your political leanings?

- How much do you earn, and what are your savings invested in?

- Which illnesses do you currently have, and which exist in your social circle?

- What private information was shared with you by other parties? Are you aware of any wrongdoings/illegal activity by other people? Be specific, esp. friends and family.

Be aware that I will share this information with anyone in your social group/work/volunteering work whenever I feel like it. I may also share this data with extremist groups on the other side of the spectrum, if helpful. Lastly, I can use this data to fabricate "helpful" information about you if necessary. Thank you.

I really like this response. Thank you for this.

> Be aware that I will share this information with anyone in your social group/work/volunteering work whenever I feel like it. I may also share this data with extremists groups on the other side of the spectrum, if helpful. Lastly I can use this data to fabricate "helpful" information about you if necessary

Citation needed

East Germany pre unification.

Edit: upvoted you. Asking the question, if and when this happened is important for further generations.

> I also have nothing to hide :-)

That's not your decision to make, as to whether you have something to hide or not. That will be decided by someone else. It's entirely out of your hands and - depending on where and when you are subject to such scrutiny - may be arbitrarily decided, which is in fact a requirement in all authoritarian systems (the law has to be heavily subjective and arbitrarily enforced so it can be directed against anyone at anytime as so required, such that the population is kept in constant fear).

> Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty between the two countries, according to a person familiar with the matter.

This sounds as if the platforms are already sharing with US authorities, so this is about them now sharing with UK authorities as well.

It's laundering. The idea is presumably to circumvent US rules on domestic spying by having GCHQ do it and transfer the results to US agencies.

Similarly the UK has had its spying ruled unlawful under ECHR: https://www.theguardian.com/uk-news/2018/sep/13/gchq-data-co... (perhaps this is why Richard Dearlove, "C" of MI6, is so Brexity)

We urgently need new domestic spying laws given that five eyes has effectively broken them.

If people in a position of power can break those laws with impunity then new laws aren't going to change that. The problem is holding the lawbreakers accountable.

Not if the lawbreaking is enabled by a loophole, as in this case. Closing the loophole may not be enough, but it should be the first priority.

It's important, but you need to consider the situation where the government is corrupt. The US is about to impeach the President, and the two factions in government are each accusing the other of corruption, albeit using wildly different criteria.

So what? Does the fact that the current administration is criminal mean that legal loopholes are meaningless and that closing them helps nothing?

My point is that laws are only as good as the integrity of the government enforcing them, and can themselves be written in bad faith. The solution is greater participation by the public rather than only technical fixes to the legal code.

This guy read Snowden.

If I'm reading that right, they are getting the encrypted messages, not the decrypted ones. Either whoever wrote the article doesn't know the difference, or the UK police have been fucked over by the US...

It's been true for a long time that since intelligence agencies aren't supposed to spy on their own citizens, they agree to spy on each other's citizens instead and subsequently share the data.


There are two ways to read it.

"Encrypted" could mean that's how they'll be delivered to the UK.

Or it could be a definition of the set of which messages are in question. The set is defined as the ones that users expected were supposed to be protected by encryption (or that government could not access because of encryption).

In other words, from the phrasing alone, it's not clear whether "encrypted" describes the state of the messages or the scope of the sharing.

I wonder what happens with Signal as it's US-based.

Signal may or may not be ok at the moment, but not in a couple of years (it has all the power to silently push an update with a backdoor). Among all the popular messaging apps only Telegram is in a position to not cooperate with five eyes.

I'm not sure if I completely agree with the silently part. Doesn't Signal have reproducible builds? In theory, any binary they release can be checked to see if it corresponds to source code.

Whether anyone is actually doing that is another question. And there is no technical reason app stores couldn't send a special backdoor-ed build to a select list of users under surveillance if a government forced them to. (They can already target sets of users for staged roll-outs, beta programs, etc.) Which defeats the notion that one person watching can detect it for everybody.

On the other hand, it's possible to do stuff with smart phones at the platform level. Whether through vulnerabilities or some platform capability (updates, etc.), it may be possible to have a backdoor-ed binary that looks to the user like it is the regular binary. That's not a capability that Signal has, but it is a capability that might very well exist.

Regardless of all that, for users who have auto-updates enabled (most users), even if Signal can't silently push a backdoor-ed update, they can unilaterally push one. You could wake up tomorrow with a different version that has a backdoor, so even if you can identify backdoor-ed binaries, you have to turn off updates or verify the binary every time you open the app.

Assuming for a moment that we can trust our smart phones at the platform level not to lie to us, the problem of targeted backdoored builds could be mitigated somewhat if the platform implemented Binary Transparency in a clever way:


When you install an app (whether through an app store, or side-loading) the app should state the location online of an append-only log that lists all the releases (with timestamps) for that app. The phone OS could periodically check to see if an upgrade is available, and security researchers could check that the log doesn't contain references to versions which aren't available to the public.

Ideally there should perhaps also be a way for users to anonymously report which version of any app they are using, so that people with particular security concerns could configure their OS to only update an app after, for example, 50% of users have already installed the update.
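A hash chain is enough to get the append-only property: each log entry commits to everything before it, so the publisher cannot quietly rewrite or drop a release without breaking verification. A toy sketch of such a log (the entry fields are assumptions, not any real transparency-log format):

```python
import hashlib
import json

GENESIS = "0" * 64

def _entry_hash(prev_hash, entry):
    """Chain hash: commits to the previous head and this entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

class ReleaseLog:
    """Append-only log of app releases, linked by a hash chain."""

    def __init__(self):
        self.entries = []   # list of (entry, chain_hash) pairs
        self.head = GENESIS

    def append(self, version, artifact_sha256, timestamp):
        entry = {"version": version, "sha256": artifact_sha256,
                 "ts": timestamp}
        self.head = _entry_hash(self.head, entry)
        self.entries.append((entry, self.head))
        return self.head

    def verify(self):
        """Recompute the chain; any rewritten entry breaks it."""
        h = GENESIS
        for entry, recorded in self.entries:
            h = _entry_hash(h, entry)
            if h != recorded:
                return False
        return True

    def contains_artifact(self, artifact_sha256):
        """A client would refuse any update not present in the log."""
        return any(e["sha256"] == artifact_sha256
                   for e, _ in self.entries)
```

Researchers monitoring the log would then look for entries that were never offered to the general public, the hallmark of a targeted build.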

Whenever we talk about web encryption, people make the (valid) point that needing to trust the server to deliver the encryption mechanism greatly reduces the benefits of clientside encryption.

I agree with that analysis, but it's not clear to me why we don't have similar levels of skepticism about auto-updating desktop apps. Signal in particular uses a third-party software repository, so if it wanted to push a malicious update, it wouldn't even need to sneak it past package maintainers.

Package signing protects you against developers with bad personal security practices, because it makes it harder for a third-party to MITM their apps. But it doesn't do anything I can see to protect you from a developer that turns malicious in the future.

Sure, but the Signal source code is open source. I've been compiling it myself for years on Android (and signing the binary with my own key). They can't silently push a binary from the Play Store and overwrite my binary.

They still can for everyone else, probably including everyone you talk to over Signal, so it's not much of a protection. Pretty much the only way to avoid that is for Signal to give up that control: stop supplying Signal directly and let others build it from source and publish it in package repositories independent of Signal, i.e. F-Droid, OS package managers on Linux, etc. Even if they don't federate, that would at least make their end-to-end encryption sound, not useless as it is now when everyone uses Signal-supplied binaries.

Presumably, for most of us here, even if we compiled it ourselves, most of the people we'll communicate with won't.

Dude, I can only imagine what you are doing that you need to be so secretive lol

I think "trust, but verify" and "verifiable security" are good axioms.

Unfortunately, unless things have changed since the last time I tried Telegram, it doesn't encrypt chats end-to-end by default, which is a huge problem for any movement. OpSec is really hard, even for those who know what to do. (The operators of Silk Road 1 & 2 were both taken down by a failure in OpSec somewhere.) Security researchers have cast aspersions on MTProto, the protocol Telegram uses. The truly paranoid I know, for whom Signal and Telegram don't go far enough, use Wickr, which chooses to make more sacrifices in usability in favor of security, but they are few and far between.

I used to defend Telegram, but they are technically UK-based AFAIK.

They have a legal entity registered in the UK which they can drop any time.

I thought it was Berlin.

> Among all the popular messaging apps only Telegram is in a position to not cooperate with five eyes.

Telegram is subject to US coercion as much as any US company: both major app stores are US-based, and without app store distribution, a product might as well be dead as far as the masses are concerned.

Maybe @moxie could enlighten us.

Use signal, use tor.

> Use signal

I would, except it requires a mobile phone number.

Nothing. Whoever is interested has to get hold of the phones to access the messages.

Historical messages, yes. However an app maker can be coerced into adding a hidden user that can participate in chats and receive all future messages decrypted. It doesn't require any special crypto hacks for that to work.

Signal code is open. You can build it yourself and compare that build with whatever you download from the app store. So it's not that simple.

I don’t think the iOS App Store allows for that sort of checksum comparison.

I didn't read that - are you sure about this? I am reading that backdoors are being asked to be created on WhatsApp. Anyway the messages aren't stored.

From the article:

Priti Patel, the U.K.’s home secretary, has previously warned that Facebook’s plan to enable users to send end-to-end encrypted messages would benefit criminals, and called on social media firms to develop “back doors” to give intelligence agencies access to their messaging platforms.

> has previously warned

That's the UK's position but it's not clear from the article that some kind of forced backdoor made it into the treaty, just that WhatsApp will be forced to share users’ encrypted messages. But they have already been sharing encrypted messages through other legal means.

More speculation here: https://www.justsecurity.org/24145/u-s-u-k-data-sharing-trea...

> "I didn't read that"

Why comment if you didn't read the article? It's in the first paragraph: "Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty"

He is saying that he didn't infer that from the article, not that he didn't read the article.

The title is editorialized - it's not just Whatsapp, it's all social media platforms.

And it's misleading -- the article and the article's headline say nothing about adding a backdoor. There's not enough detail in the article to say exactly how this decision will affect WhatsApp.

(The headline on Bloomberg, "Facebook, WhatsApp Will Have to Share Messages With U.K. Police" is more restrained.)

On re-reading the short article, I realize it only says they will be compelled to share "encrypted messages" and "information to support investigations". It never says anything about decrypting the messages. If that is literally all it is, then the headline is quite misleading.

What use would ciphertext be to GCHQ? I’m fairly certain that “encrypted messages” describes which messages they will have to provide in plaintext, not the format they will come in.

Your interpretation is likely right. But it's interesting to consider the other. Maybe they can get metadata from the ciphertext, such as size. Or maybe they are just interested in other metadata such as time. Or maybe they have some cryptographic attacks, maybe involving extracting a shared key from the peer's phone, or key injection via sim spoofing.

If messages are encrypted end-to-end, and laws force the ability to intercept those messages, the end-game (the backdoor) is fairly easy to tease out.

> And it's misleading

And worrying that so many readers' interpretations of the article were influenced by a headline. We really have to do better and be vigilantly aware of how media and power influence public opinion, especially during times such as these.

> it's not just Whatsapp, it's all social media platforms.

...that are US-based. You may want to check out Threema for a valid WhatsApp replacement.

from whatsapps official homepage, respectively (https://faq.whatsapp.com/en/android/28030015, https://faq.whatsapp.com/en/general/26000050)

>WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device. Before a message ever leaves your device, it's secured with a cryptographic lock, and only the recipient has the keys.[...]

>A search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include "about" information, profile photos, group information and address book, if available. In the ordinary course of providing our service, WhatsApp does not store messages once they are delivered or transaction logs of such delivered messages, and undelivered messages are deleted from our servers after 30 days. WhatsApp offers end-to-end encryption for our services, which is always activated

Very interested to see what their response is and if their promise holds that they do not have technical access to content but merely to account information.

I’m much more interested in the wording of “in the ordinary course,” which I don’t think is just a quirk of the language and I believe it is an indication that certain differing measures can take place after being compelled to provide information.

The case law references records kept “in the ordinary course of business”. Such records are admissible, other records are not (being hearsay).


At minimum, it seems the kind of thing a competent lawyer would require.

Even if FB / WhatsApp doesn't currently have such a capability, there's no legal guarantee they couldn't be compelled to add it, thereby putting them in conflict with language that didn't include that clause.

Afaik, case law is murky over whether creating new code qualifies as speech (illegal to compel) or facilitating legally-approved wiretapping (legal and required).

Did you notice the article doesn't say anything about decrypting the messages? It just says they have to turn over encrypted messages.

The question should be: why on Earth would they ask for access to encrypted messages if they couldn't already decrypt them, or be able to do so soon?

You can infer a lot of information from the metadata, like the volume and timing of messages.

They're after the metadata. Who contacted who, and how often.
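As a toy illustration of the point above, here is a minimal sketch (the names, log format, and queries are all invented, nothing WhatsApp-specific) of how much an unencrypted delivery log reveals even when every message body is end-to-end encrypted:

```python
from collections import Counter
from datetime import datetime

# Hypothetical delivery log: (timestamp, sender, recipient) tuples.
# Even with message bodies encrypted, a server that retains this
# metadata can reconstruct a social graph and activity patterns.
log = [
    ("2019-09-28T09:01", "alice", "bob"),
    ("2019-09-28T09:02", "bob", "alice"),
    ("2019-09-28T23:40", "alice", "carol"),
    ("2019-09-28T23:41", "alice", "carol"),
    ("2019-09-29T23:45", "alice", "carol"),
]

# Who talks to whom, and how often (ignoring direction).
pairs = Counter(frozenset((s, r)) for _, s, r in log)
for pair, n in pairs.most_common():
    print(sorted(pair), n)

# When they talk: the timing alone can be revealing.
late_night = sum(1 for ts, _, _ in log
                 if datetime.fromisoformat(ts).hour >= 22)
print("late-night messages:", late_night)
```

Five log rows are enough to see that alice and carol exchange late-night messages far more often than anyone else; scale that to a billion users and "just metadata" becomes a detailed behavioral record.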

The phrase "In the ordinary course of providing our service" seems to imply that in some circumstances they do/can store messages?

Probably not deliberately, but something can always go wrong, which is probably why they added the additional clause that undelivered messages are deleted from their servers after 30 days.

More interesting is what happens if someone gets a new phone. In that case (if I understood correctly) they might ask for the sender to resend the messages with a different key. If they really are forced to add a backdoor then that's where I would fit in a MITM attack, as it is limited in scale and detectable when used excessively.
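The detectability mentioned above is roughly what Signal-style "safety numbers" provide. A minimal sketch, using made-up toy keys rather than any real protocol detail: the client pins a fingerprint of the peer's long-term key and warns when it changes, though it cannot by itself distinguish a legitimate new phone from a MITM — that requires out-of-band verification.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short human-comparable digest of a peer's long-term public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# Pinned fingerprint from when we first verified Bob in person.
pinned = fingerprint(b"bob-original-public-key")

def on_key_change(new_key: bytes) -> str:
    # A legitimate new phone and a MITM look identical at this layer:
    # both present a key whose fingerprint no longer matches the pin.
    # The only remedy is re-verifying the fingerprint out of band.
    if fingerprint(new_key) != pinned:
        return "WARNING: security code changed, verify out of band"
    return "key unchanged"

print(on_key_change(b"bob-original-public-key"))   # key unchanged
print(on_key_change(b"attacker-or-new-phone-key"))  # WARNING ...
```

This is why a backdoor implemented as a silent key swap is "detectable when used excessively": every interception trips this warning on the peers' devices.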

WhatsApp is under an obligation to comply with lawful-intercept regulation wherever it operates, just the same as any other communications platform out there. And no, WhatsApp is not some 'small indie company that just happens to not have been noticed'.

Pretending and using obfuscated language to lead the reader to believe otherwise is disingenuous.

I doubt they would have to change a single word to allow a third party (yet another recipient) to decrypt any message ever sent over this service.

I doubt the politicians who signed this treaty actually understand this technology.

The idea that moves like this will "keep us safe" is utterly preposterous; there are a multitude of other ways in which terrorists (or the bogeyman du jour) could communicate. Are the UK and US governments going to insist on backdooring IRC, Slack, and face-to-face conversations? Are they going to outlaw encryption libraries?

I truly fear for the future that western governments, in particular the 5 Eyes members, are hell-bent on creating. They denounce China and Russia for their human rights records in one breath, and seek to strip us of privacy and personal rights in the next. The hypocrisy is simply staggering.

What's perhaps even more frightening is that so many people believe that moves like this will actually keep us safe.

This cannot end well...

The keeping-us-safe argument from government is indeed preposterous. As if that were their mission, to keep us safe. Why are they allowing our nature to be destroyed in favor of money and the economy? This year's heatwaves killed many thousands of people in Europe alone; some estimates are in the tens of thousands. These are real deaths in one year, not from terrorist attacks, and no fucking backdoor will stop them. And what are they doing? Measures that won't make any difference. The cars will keep driving, the factories will continue to produce poisons. Pesticides will still be allowed for the sake of the economy, at the cost of entire insect species being wiped from this planet in just a few decades. The list goes on and on. It is so sad that I don't really want to listen to politicians anymore. They are the only ones who can change things by law, not me. And what law are they coming up with for my safety? A backdoor for WhatsApp and Facebook. We had better ignore all this shit and try to enjoy our little lives for as long as they last.

I cannot find any source saying that many thousands, if not tens of thousands, died in Europe from the 2019 heat wave. Can you point me to your information source? I believe you are confusing the 2003 heatwave, from over a decade ago, with the one in 2019.

The 2019 heat was directly implicated in the deaths of at least 15 people. Five died in France, four in Germany, three in the United Kingdom, two in Spain, and one in Italy. Nine of these were drownings, attributed to people cooling down, and another involved an exhausted farm worker who went unconscious after diving into a pool. The three who died in hot air were aged 72, 80 and 93. Approximately 321 million people were otherwise affected by similar temperatures in the same countries.[1]


Edited for clarity

The difficulty of attributing deaths to heatwaves is something that comes up in books like Freakonomics, if I am not mistaken. At any rate, from that Wikipedia entry:

> Netherlands reported 400 excess deaths in the week of the heat wave, a figure comparable to those recorded during the 2006 European heat wave.

The Netherlands is considerably smaller than the other nations. Most analyses attribute these spikes partly to people* who would have died in the following days, weeks, or months anyway (lowering statistics later in the year), and partly to people who would not have.

(Edit) A paper putting the 2003 heat-wave death toll at 70,000: https://www.sciencedirect.com/science/article/pii/S163106910...

*The term from the 2003 paper is 'harvesting', and there were little to no signs of it.

> The 2019 heat was directly implicated in the deaths of at least 15 people. Five died in France,

That is not correct, whoever wrote it. Just do a Google search: heatwave 2019 total deaths.

CNN already reports 1,500 deaths in France alone. This is a link to a news site with a higher estimate, although I of course cannot verify the correctness of the source: https://www.vox.com/world/2019/6/26/18744518/heat-wave-2019-...

The goal of national security isn't to protect the citizens, but to protect the established order and whoever leads it.

Backdooring a communication is only done to safeguard a government's power under the pretense of security.

This becomes more true the further a country moves away from democracy: in a vibrant popular democracy, people will honestly work for their nation's good and corruption is viewed as evil.

Further away from democracy, leaders are purely self serving, successful corruption is seen as a sign of intelligence and whistle-blowers, far from being heroes for pointing out maleficence, are threatened with execution.

And indeed, institutions eventually shift to working solely for the benefit of themselves and the people in charge. Mostly because any that don't are undermined until they do.

>They are the only ones who can change things by law, not me.

I had this thought yesterday as I read about several prominent politicians in Canada, including the prime minister, actively participating in the climate 'protests' that occurred yesterday. Who were they out there protesting? Themselves? They're the ones with the power to change things. Why were they outside with signs instead of in their offices doing something about it?

Because politicians in a democracy cannot do things unilaterally.

Not all politicians have equal power. In the US, nominations to important positions, and laws and policies, are determined by 20 GOP senators.

Those 20 senators (from Alaska, Missouri, Arkansas) are answerable to the demands of their constituents, and those constituents are not asking for climate-change policies. It has nothing to do with money, but with the values and culture of the constituents.

It is incredible how HN and the majority of the supposedly 'smart' crowd completely fail to understand the dynamics of how policies and laws are passed in this country or anywhere else.

It is also incredibly stupid to paint all politicians with the same brush, because if you are an immoral, evil politician, that is exactly the situation you'd want. "All politicians are the same" and "all media is the same" are foundational strategies of bad actors.

It has nothing to do with money? How are the values and culture of those constituents determined, if not through vast sums of money spent on propaganda? Protecting the earth (not shitting where you eat) is an inherently sensible idea. The only way people can align themselves against such an idea is when they are manipulated into believing it is part of a broader conspiracy to ruin their lives. It takes a lot of money.

Also, so what if the bad actors want us to believe that all politicians are the same? What if it were true that they're all the same and evil? Would it still be "incredibly stupid" to accurately assess the state of affairs?

That can presumably be explained by the fact that there is a federal election coming up in less than a month.

Right? I mean why can't politicians unilaterally change things without convincing the electorate and just fix climate change, the way Kathleen Wynne so successfully managed to in Ontario?

Well we know these actions look good from a public relations point of view. We only need ask if they had any other motivations or if it was only for PR.

Cold weather causes more deaths than warm weather.

See for example: https://www.theguardian.com/society/2018/nov/30/excess-winte...

Sounds like NHS underfunding is what killed most of them.

> As if that was their mission to keep us safe.

I think it is. The problem is, denying us access to safe communication is not going to make us more safe, but less.

> The keeping safe argument from government is indeed preposterous. As if that was their mission to keep us safe. Why are they allowing our nature to be destroyed in favor of money/economy?

I don't think it is quite as simple as this. (I'll preface this by saying I don't think we should have backdoors, and that I wish we had STRONG encryption everywhere.) I think the problem is that different departments have different goals. It is very clear that the CIA's and NSA's jobs would be easier if there were a magical tool that let them, and no one else, backdoor in. The police and FBI would have an easier time doing their jobs if encryption weren't a thing. That's definitely true! The issue is: who is watching the watchmen?

That's why we need checks and balances (specifically by people who understand the tech). These departments are so focused on their goals that they lose track of the fact that introducing backdoors actually creates more work for them (and thus actually makes their lives harder). But as humans we're always focused more on the task at hand and less on the overarching tasks (we're notoriously bad at dealing with large-scale, multifaceted problems). It all really comes down to these departments thinking "if we had this tool, it's possible we could have stopped this"; "possible" is the key word, because we've seen that they can't. There's just too much data. You're just adding more hay to the haystack.

The failure really is at the checks-and-balances stage: those watching the watchmen understand neither the motivations nor the consequences, and thus let them do as they please. Agencies running the checks and balances are supposed to be suspicious and critical, not friendly. But these agencies aren't getting the funding, nor can they attract those who are tech literate, so there's a feedback loop that is only getting stronger. What I'm trying to say is that there's a long chain, things are broken at many stages, and if any single stage were fixed there would be significant improvement. Solving any single stage will help stop the feedback loop.

tldr: The intelligence agencies should be smart enough to know that backdoors will backfire on them. But they clearly aren't. There's also a huge failure at the checks-and-balances stage, where these agencies are getting approval, which creates a feedback loop; without solving this, the problem will continue to grow.

Thanks for this nuanced answer. Unfortunately, "Nuance" will always lose to "Pitchfork", even in supposedly smart and intelligent communities like HN.

There are ZERO people from the pitchfork community who understand the pressure of working to keep a community, region, or country safe. If there is a terrorist attack, the pitchfork people have to answer ZERO questions, while the CIA/NSA/police will have to answer "Why didn't you do something?"

It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.

I mostly agree, but I wouldn't call HN a pitchfork community. It can definitely get that way at times, but I think nuance is welcomed and generally encouraged here. Definitely the only way to keep it that way is to keep promoting it, so don't get disheartened. There's still hope.

> It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.

And I definitely agree with this. But that's also why I made a big point of noting that a lack of encryption actually gives these agencies more work (if we look at history). The problem is exactly what you note, though: there will always be failures, and we ask why they couldn't stop things that were nearly impossible to stop. Post hoc analysis is always easier than in situ.

Was the Europe heatwave in 2019 caused by government actions?

No, not by their actions, of course. It happened because of the lack of appropriate action by them. Governments should anticipate future events pointed out by science, and that is what they did not do. All I see is business as usual.

Can you cite the scientific evidence?

Rather, it was contributed to by government inaction.

Where is the scientific evidence of this?

Maybe the definition of politician should change to:

“One who is elected by the common populace to facilitate business at the cost of logical reasoning, human rights and the natural world.”?

While I’m sure one or two exist, I can’t actually think of a politician in the US, UK or Australia who doesn’t fit into the definition somehow. Again, there would be a few good ones, just not enough.

I still remember how one German Islamist terror group just used their webmail provider's draft feature. They never 'sent' anything. There are often quite simple ways to circumvent this kind of thing.

Good, but your e-mail provider can still see it, and can be forced to eavesdrop.

Back in the '90s I used a nym e-mail address to receive e-mails anonymously and without traces.

In a few words - e-mails are encrypted with your PGP key and posted to Usenet groups, where you scan all messages and extract only those signed&encrypted with your key.
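The scanning step can be sketched as follows. This toy uses an HMAC tag under a shared secret to mark which broadcast messages are "yours"; real nym systems used PGP trial decryption instead, and all names and keys here are invented:

```python
import hmac
import hashlib

def tag(key: bytes, ciphertext: bytes) -> bytes:
    """Unlinkable marker: only the holder of `key` can recognize it."""
    return hmac.new(key, ciphertext, hashlib.sha256).digest()

my_key = b"my-secret-nym-key"     # toy shared secret
other_key = b"someone-elses-key"  # another user of the same pool

# A public pool (e.g. a Usenet group): everyone posts (tag, ciphertext)
# pairs with no addressing information at all.
pool = [
    (tag(other_key, b"noise-1"), b"noise-1"),
    (tag(my_key, b"ciphertext-for-me"), b"ciphertext-for-me"),
    (tag(other_key, b"noise-2"), b"noise-2"),
]

# The recipient downloads *everything* and keeps only messages whose
# tag verifies under their own key; an observer who watches the pool
# learns nothing about who any given post is for.
mine = [ct for t, ct in pool
        if hmac.compare_digest(t, tag(my_key, ct))]
print(mine)  # [b'ciphertext-for-me']
```

The key property is that the cost of recognition falls on the recipient, who scans the whole pool, so the sender never addresses anyone and the network sees only a broadcast.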


Yep, there are a thousand and one methods of communicating securely. Governments are just using this as an excuse to wiretap popular messaging services for general surveillance.

Unless they'll get away with making everyone dumber, they shall fail.

> Unless they'll get away with making everyone dumber, they shall fail.

This is not how it works. As long as enough people are not bothered, they will succeed.

Protonmail has (or claims to have) client-side encryption.

And deletes your account for 'fraud' if you log in from an IP someone else used for malicious activity.

Great, so basically you are a criminal for using a VPN to protect your privacy, since criminals also use VPNs :facepalm:

My Twitter account has been suspended for suspicious activity. Someone with whom I disagreed probably tried logging into my account a few times, which triggered it, and now I cannot regain access to my account unless I provide my phone number. When you sign up, a phone number is optional, but when someone fucks around with you, it becomes mandatory. The fact that the system is designed this way is absurd. This is not limited to Twitter, by the way.

Yeah, only a stupid person could think that backdooring WhatsApp will actually prevent the next 9/11. And that, in my opinion, is the core issue with most politicians: stupidity/tech illiteracy.

I'd love to hear about either a possible alternative government structure in which there are no politicians or a way to attract the smartest people in governments.

> will prevent the next 9/11

When you consider the societal fallout and everything that has transpired since, the most insane part to me is that by its very occurrence, 9/11 itself already prevented the next 9/11. The "next 9/11" was to crash the fourth hijacked plane into a high-value target; the plane on which the passengers fought back, which was crashed in Pennsylvania, WAS the next 9/11. This was a tactic that was apprehended and adapted to before the day was out. It worked three times, on one day, once. An update to the mental calculus of common folks was all it really took. If we had successfully prevented it from re-shaping our society, it would have never, ever worked again. This newfound understanding of the rules, coupled with a straightforward countermeasure like reinforced cockpit doors, would have closed off that vector of attack entirely.

19 malevolent people acting in 2001 have colored nearly two decades of policy for America. I remember two particular circulating ideas from the time: "they're attacking our way of life" and "they hate us because we are free." The latter was much more divisive and so people spent much more time arguing with each other about it. Meanwhile, whether intentionally on the part of the attackers or not, the first was very effectively accomplished.

Maybe if there had been no 9/11, the agencies charged with protecting Americans would have still been seduced by the ease with which modern technology enables broad surveillance. Maybe hoovering up all the data is too good an opportunity to pass up. Regardless, I yearn for, and still miss the end of history. Our hubris has been rewarded with interesting times.

On the politicians side I agree, tech illiteracy seems to be largely correct. As far as the intelligence agencies are concerned, well I think they know exactly what they are doing. So they know that a WhatsApp backdoor doesn't help against the next 911, it still allows a lot of general surveillance so. And that is what the agencies are after. Terrorists are just an excuse IMHO.

Politicians are not at all stupid in general. The problem we have is in their selective listening after we elect them.

If a legislator is not technically up to speed, considerable taxpayer money goes towards hiring people in government to do the research and the explaining. Some high-level advisers may come from organizations with a private agenda, and after a few years of working within the government these experts pop right back into their industry jobs and we don't hear of them anymore.

Ultimately it's the same need in whatever form of government we want: we need people we can trust.

This is why people have advocated for bottom up governance, where local groups make decisions and select rotating representatives to take those decisions as made by the group to larger regional councils, etc. In this way no individual has any real power. This is called democratic confederalism and is in progress in Rojava now but could be done in the US. https://www.youtube.com/watch?v=LcndZ0nZ0mo

Along with the vote-in process there should be a vote-out process. The public should be able to force a vote on their representative at any time during their term, and equally vote in their replacement.

The intention being that politicians are aware that they must be consistent with their words and actions throughout their term otherwise they will lose their seat.

Hope we find a way to vote some of them out fast.

Some months before the last big bail out legislations, in my state we had a US Senate candidate who was new in our political scene, appeared as a local man with a law degree from our state university, and he spoke about how the working people struggled with unemployment, delinquencies etc, as he knew middle class issues and can change the ways of Washington. PBS featured him and I still remember the interview they aired from his kitchen in a middle class home. I am among the people who voted him in.

A few years after this election I checked his voting records by chance and I realized that he had voted almost always in favor of the bailout system. I wouldn’t have known this by just reading the news or watching cable. In the next election cycle he won handily, this time supported by organizations with cash to blanket our news with favorable lines for him. That’s how life works I guess.

Everyone bitched at me for not voting in the last election. To every person who asked, I asked who they had voted for as our state's railroad commissioner. They all said, "Oh, I don't know, I just voted all blue." I asked them if they knew what our railroad commissioner did; they did not. So I proceeded to enlighten them on how the RR commissioner controls everything around our state's oil fund, and that they had all just voted for the equivalent of Donald Trump to manage our oil-rich state.

Of course he did not win, because the other candidate was much better qualified and rural voters knew him en masse (he's the one who signs checks for all the citizen stewards of oil fields), but I found it hilarious that people would flame me for not voting for a figurehead (president) but be a-ok with voting in some no-name hack to manage our schools because of big-money advertising in elections.

> I'd love to hear about either a possible alternative government structure in which there are no politicians or a way to attract the smartest people in governments.

In all seriousness this is the aim of all historical anarchist movements. Despite the propagandization of the term "anarchism", that philosophy has a long history of attempts and writers and thinkers, and it has more often than not "failed" when a powerful state entity violently disbanded the efforts or killed prominent leaders. In other cases anarchism has not failed at all but has existed in tribal communities in different ways long before european thinkers wrote on the subject.

In particular anarcho-communism or the more detailed Communalism of Murray Bookchin (The Next Revolution, 2015 https://www.penguinrandomhouse.com/books/239261/the-next-rev...) looks totally plausible to me.

The biggest issue with state-less society is the conflict with the existing state powers this philosophy creates. However the method of governance has found success in present day Mexico with the Zapatistas: https://www.youtube.com/watch?v=Ww46lxIc6-w As well as Rojava in Syria: https://www.youtube.com/watch?v=LcndZ0nZ0mo

> I'd love to hear about either a possible alternative government structure in which there are no politicians

So the core problem is that humans don't make great rulers. Nor are we good at selecting them.

Individuals are the flaw in the system. And a government without stupidity, means a government without humans.

Thus... http://bit.ly/2nsTLdE

Blockchain? (I didn’t click your link)

But in that system, how do you allow leniency for the 40 year old single mother of 5 who accidentally switched lanes without using a turn signal?

Individuals also represent humanity in the government. We are not a nation of robots.

That structure is called a Republic, as opposed to a Democracy.

https://youtu.be/rgUs5wtXgL4 (the video is a bit dated unfortunately)

It's the same tactic that former CIA Director David Petraeus used to send messages to his mistress. It's been around for a while, so investigators look for it.


People can communicate by shooting holes into CS 1.6 walls. There are literally no borders for the imaginative mind.

Or the Paris attacks, which were orchestrated on the PlayStation Network and via unencrypted SMS, by people who were on a watch list already. The problem here is not encryption.

The idea that using a webmail provider's draft feature provides security might have been true a long time ago (I don't know), but it's really stupid to think it does so nowadays.

It was security through obscurity. Not very reliable against an attack, but very unlikely to attract an attack.

> Are they going to outlaw encryption libraries?

Funny story about that, those used to be considered controlled munitions [0].

[0] http://www.treachery.net/~jdyson/crypto/tattoo.html

When you submit apps to Apple, you _still_ have to declare if your app uses encryption.


TBH, I thought encryption was still subject to ITAR rules and considered "munitions" in the US?

The current situation looks like it still has a whole ton of potential legal tripping points, from Wikipedia:

As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce's Bureau of Industry and Security. Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" and terrorist organizations.

Militarized encryption equipment, TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license.

Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75 FR 36494). In addition, other items require a one-time review by, or notification to, BIS prior to export to most countries. For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required. Export regulations have been relaxed from pre-1996 standards, but are still complex. Other countries, notably those participating in the Wassenaar Arrangement, have similar restrictions.


The fight for power has this unavoidable conclusion. The saying that “knowledge is power” is more true than I think most people realize. He who knows the most holds all the power. It doesn’t matter who is in power or what their beliefs are - they would eventually have to resort to these tactics or risk losing their foothold. Those who don’t or can’t afford to will be overrun.

We dislike it even though we would be forced to do the same thing in their position. We dislike it because we aren’t the ones in power. Again, as it goes in the jungle, life isn’t always fair.

From what I can tell, it seems the US is not forcing a backdoor. Is that correct? This sounds like the UK is forcing the issue.

However, I don't care if it "keeps us safe". It's just wrong in so many ways

No matter. Under 5 eyes if one has it they all have it.

> The idea that moves like this will "keep us safe" is utterly preposterous

The "terrorism" rationalization provided for this surveillance has nothing to do with reality. The United States government is not in any way involved in fighting the source of jihadist terrorism, and the state responsible for 9/11, Saudi Arabia. They are allies, and Trump's cabinet is now attempting to arm Saudi Arabia with nuclear weapons. The actual intent of this law is to surveil whistle-blowers and journalists. WhatsApp messages have already been used in the case against IRS whistle-blower John Fry, who exposed the Donald Trump-Michael Cohen hush payments.





Ah but you see the government is also mandating that terrorists use these backdoored apps. So it will work

It is reasonable for governments to want to be able to wiretap communications. They can wiretap telephones, including mobile/cell phones, for example as this is built into those systems. There is no problem with that and it is actually welcome.

Those apps that provide end-to-end encryption are a problem and they are working on solving it.

Claiming that they should not be able to wiretap lawfully (or even otherwise) is a rather naive view of the world and lawful wiretapping does not imply dystopia.

> Those apps that provide end-to-end encryption are a problem

Are they though? There will always be a means to communicate without surveillance. Backdooring encryption is fundamentally incompatible with privacy, and will be abused - at scale.

For any real cases, the security services have a multitude of other ways of getting the information they need regardless: for example, they could gain physical access to the target's phone and backdoor/bug it, or swap it out for a backdoored/bugged one.

The real problem with backdooring or outlawing encryption is that it lowers the bar to entry for governments - it makes it too easy, and such sweeping powers will be abused.

> Backdooring encryption is fundamentally incompatible with privacy,

That's a very bold claim.

Lawful wiretapping exists because it is accepted (and reasonable) that privacy should stop in specific circumstances, e.g. a criminal investigation.

As I wrote before seeing this in black or white is naive. Wiretapping is useful for society. The key is to have proper oversight.

By the way, to lawfully wiretap a phone the authorities only need the phone number: The operator will then duplicate all traffic transparently and on the fly. Bugging a phone really means illegal/covert operations these days...

You don't understand. Encryption is necessary because on the internet, there is no distance. Any web service today must be built with the knowledge that everyone, everywhere is attacking it all the time. The only defense against that kind of attack is encryption that nobody can break.

When a weakness is introduced into that defense, even if it's a secret key held only by trusted governments and only (they promise) used in emergencies, that key is a method of attack that anyone, anywhere can use against everyone. The only defense is that the secret key remain secret, and there are an endless number of ways it could be compromised, ranging from human error, to phishing, to plain old brute forcing. A single key for all WhatsApp conversations is a valuable enough prize for criminals all over the world to invest serious money in custom hardware to crack it.

Currently, encryption schemes like Whatsapp's rely on generation of new keys for every message, making the value of attacking any one message limited. Without this defense, Whatsapp will be cracked, sooner or later. And once someone publicly reveals that key, every single message sent before that point becomes compromised. Maybe even publicly exposed. Would you be okay with that? Exposing your private message history to the internet because governments wanted to snoop?
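The per-message key generation described above can be illustrated with a toy hash ratchet. This is a deliberate simplification, not WhatsApp's actual protocol (which uses the Signal double ratchet), but it shows the core property: each message gets its own key, and there is no single master key to steal.

```python
import hmac
import hashlib

def ratchet_step(chain_key: bytes):
    """Derive a fresh message key and advance the chain (toy ratchet).

    Deriving the message key and the next chain key with distinct
    constants means old message keys cannot be recomputed from the
    current chain state: compromising one key exposes one message,
    not the history.
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain

# Both endpoints start from a shared secret (toy value here) and
# advance the chain once per message, in lockstep.
chain = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet_step(chain)
    keys.append(mk)

# Three messages, three distinct keys.
print(len(set(keys)))  # 3
```

Contrast this with a backdoor design: a single escrowed key that decrypts everything reintroduces exactly the one prize worth stealing that the ratchet was designed to eliminate.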

I do understand. This is an issue with Whatsapp's architecture, not with my comment or governments' concerns.

No, you don't. This is a fundamental rule of how encryption works. You can't design around it.

If there is a key which decrypts everything, then there is a risk it can be stolen or guessed and used to decrypt anyone's messages. This risk does not exist in current, properly designed systems.

Any government which wishes to add a backdoor to an E2E encrypted messaging system must understand that they will be HEAVILY undermining the security and privacy rights for all users of the service which has the backdoor.

I think you are the one not understanding at this point. This has nothing to do with "how encryption works".

Whatsapp decided to go for E2E encryption for commercial and marketing purposes following all those privacy scandals.

They did not have to. They could have gone with P2P encryption, i.e. that their databases could have stored messages in cleartext. That way authorities would not have needed to ask for any backdoor. As already mentioned, this is how cellular operators work (that's obviously something governments wanted).

I think we might see legislation brought in in the future in order to force this.

Oh. I didn't realise you were arguing against encryption in its entirety.

Okay. Let's just get rid of E2E entirely and let Facebook mine not only our metadata but also the content of our messages. Nice plan.

I am not arguing against encryption in its entirety...

> let Facebook mine not only our metadata...

Don't use Facebook, then.

> The key is to have proper oversight.

Yes, but if you make it too easy, oversight fails or is worked around with legalese, and the powers are abused at scale.

> Bugging a phone really means illegal/covert operations these days

My view here is that it should be difficult: the security services should only be able to intercept someone's communications with a legally obtained warrant to do so (the oversight you mentioned); then the powers are far more likely to be used only where there is a credible threat.

Lawful wiretapping does not imply dystopia, but the actual practice sure seems to. This is very likely to be abused by enforcement agencies and politicians.

In the UK there is some oversight by the parliamentary committee and also the courts.

Here's a case where the courts say that police surveillance was not correctly authorised: https://www.bailii.org/uk/cases/UKIPTrib/2018/IPT_17_93_H.ht...

> Claiming that they should not be able to wiretap lawfully (or even otherwise) is a rather naive view of the world and lawful wiretapping does not imply dystopia.

I hold the opposite view: that the idea that the evolutionary trajectory of the internet seems wont to continue to allow "wiretapping" (and, for that matter, the existence of a capricious state tout court) is the naive view.

The state is looking increasingly obsolete every day, and moves like this are significant leaps forward in solidifying that conclusion.

Outlawing AES will be difficult since it is embedded into most modern CPUs.

That's assuming AES is safe.

AES is far and away the most heavily scrutinized encryption algorithm in history. Of course that doesn't make it flawless, but the level of genius that the authors would need to hide a backdoor in it for all these years boggles the mind.

The implementation of AES in popular CPUs? Yeah, who knows.

Yeah, I would not even go as far as AES in your CPU. Just... any part of your CPU, or your motherboard, or your GPU. RISC-V is still not suitable for desktop, or at least not commercially available AFAIK, and POWER9 is too expensive. I want open software and open hardware!

AES implemented on CPUs and AES being safe are two different things.

> They denounce China and Russia for their human rights records in one breath, and seek to strip us of privacy and personal rights in the next - the hypocrisy is simply staggering.

Politicians, generally, are not hypocrites. That implies the guise of good faith. Do you really think the biggest beneficiaries of terrorism are likely to be honest with you? Humans just like being told what to do and what to fear and whom to be mad at, and politicians make eager use of this role.

> moves like this are to keep us safe

From the government's perspective, they are to keep "us" safe. It's easier to do that if no one's safe from us. :)

Granted, that's a little over-ominous because the government's mission statement is to keep its people safe, and it's also elected by its people. Either of these two facts changing is the way bigger danger; backdooring centralized services is stuff that happens in the meantime either way.

If you look at the way elections work at a micro scale in the US, you will begin to lose confidence in the assumption that they are elected by the people. Political machines have huge influence in controlling who it's possible to vote for, and swaying low-information voters. That makes sure they have a lockdown on decision-making, even if they allow a few mavericks through the cracks for the sake of plausible deniability.

Even if this or that individual politician gets voted out, or even ten of them, it won't stop the machine's influence. They're still the ones who decide who the replacements can be chosen from.

They don't have to backdoor slack. Slack is able to respond to subpoenas.

> What's perhaps even more frightening is that so many people believe that moves like this are to keep us safe...

Strangely, HN is also very much in favor of the administration’s trade protectionism.

Ironically, the US and UK governments use the same exact encryption libraries (with the caveat that some use FIPS mode)....

> Are they going to outlaw encryption libraries?

Yes. Absolutely. That's the only logical endgame.

I've said this before, but you can't outlaw the maths. There's nothing stopping anyone from rolling their own encryption, using well documented algorithms. You'd need to literally outlaw Wikipedia. Hell, even one-time-pads could still be used by truly motivated bad actors that wanted to communicate securely.
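For illustration, a one-time pad really is just XOR with a random pad - a minimal sketch (assuming the pad is exchanged out of band, is at least as long as the message, and is never reused):

```python
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding pad byte.

    If the pad is truly random, message-length, and used only once,
    the scheme is information-theoretically unbreakable."""
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

pad = os.urandom(32)                 # shared in advance, e.g. on paper
ct = otp_encrypt(b"meet at dawn", pad)
assert otp_decrypt(ct, pad) == b"meet at dawn"
```

The hard part, of course, is not the maths but distributing and protecting the pads - which is exactly why a motivated actor can do this with pen and paper if software encryption were outlawed.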

They can make it so if they spot you using it (for other than communication with approved banks and retailers and such, maybe), that's instantly something they can charge you with. Then go after a few people who're spotted using it on the Internet by traffic sniffing, meaning the only folks left using it are cranks and actual bad guys.

A total tangent, but the words chosen in this comment and how they relate to the logic behind the US War on Drugs are eerie - down to "sniffing" out paraphernalia, "trafficking" illegal goods, end users as "cranks" and bad guys...

This is why I am hesitant to use Tor, efforts to monitor it are probably several folds stronger than normal clearnet.

I can encode hidden meanings in poetry. If you outlaw poetry, our US experiment is over.

That some may get around it isn’t the point. Most won’t risk charges to share recipes with aunt Edna or family photos with Grandpa. Some will keep using it and stay under the radar but any time someone’s caught using it, even if it wasn’t part of a crime, it’ll be added to the list of charges against them. It’ll be dead for common use, and risky and annoying to use for those who keep to it.

You also can't outlaw physics or chemistry. Let me know how non-approved rocketry enterprise goes.

Pretty well? People make small rockets as a hobby all the time. The barrier to entry to cryptography is many times lower too.

> Pretty well? People make small rockets as a hobby all the time.

That's pretty far detached from real rocketry.

A similar metaphor for computing would be the allowed use of a LeapFrog system, rather than a computer.

'My cryptography hobby is going pretty well, I just practiced a Caesar cipher on my LeapFrog.'

(neither a LeapFrog nor a model rocket being a practical equivalent to its bigger relative.)

But if you want to communicate with somebody else in an encrypted way, then you can likely do so with effort that doesn't even approach a hobby. Setting up a system requires more effort, but these singular cases are hard enough to detect and crack that the law enforcement agencies would never be able to do it. You might not even need encryption, because you could communicate in ways that are just obfuscated enough that nobody's going to check. E.g., spell it out with blocks on the ground in Minecraft or something.

If the goal is to catch malicious people that are trying to hide their communications, then outlawing encryption won't work. But it will give the government a good excuse to spy on people.

Knowledge of encryption isn't what would be illegal, _using_ it to communicate would be illegal.

"I've said this before, but you can't outlaw the maths."

I don't think this is the right analogy - instead, I think a better statement would be "you can't outlaw random numbers".

A random number and the ciphertext output of a secure encryption algorithm should be indistinguishable.

I don't think I am being naive to think that even in our wildest dystopian nightmares there is no real path from (current jurisprudence in five-eyes jurisdictions) to (random numbers being illegal).
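To make the "indistinguishable from random numbers" point concrete, here's a toy sketch - a SHA-256 counter-mode stream cipher, illustrative only and not a vetted construction. Even a maximally repetitive plaintext encrypts to bytes with a roughly flat histogram, like os.urandom output:

```python
import hashlib
import os
from collections import Counter

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 over (key || counter), truncated to n bytes.
    For real work, use a vetted library (e.g. libsodium/NaCl) instead."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

key = os.urandom(32)
msg = b"A" * 4096                 # highly non-random plaintext
ct = encrypt(key, msg)

assert encrypt(key, ct) == msg    # XOR stream cipher is its own inverse
# The ciphertext hits nearly all 256 byte values (~16 each), with
# nothing obvious distinguishing it from 4096 bytes of os.urandom().
assert len(Counter(ct)) > 200
```

A law against "ciphertext" would therefore have to distinguish it from compressed files, random padding, and noise - which a good cipher makes infeasible by design.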

Not possible in the US unless the Supreme Court stops giving a crap about the first amendment. Incredibly unlikely no matter how unpopular free speech might become.

In the name of the children, everything is possible.

Oh, as well as the Clipper debacle, in the 90s I seem to recall there was a big push towards forced key escrow, on both sides of the pond.

The US government gave up the Clipper chip laws before even trying to take it to the Supreme Court as it was such an obvious loser.

If congress and the president decide they want to increase the SC to 99 justices and add an additional 90 of their choosing, then that's when the US Constitution officially dies - but it can be done according to the law.

The attack on the legitimacy of the courts will fail this time around, just as it always has. The justices rule mostly with integrity and they’re mostly respected and protected.

What’s the difference between 9 of their choosing and 99 of their choosing?

It's just hyperbole for packing the court: allowing one president to appoint >50% of the justices so that the court always rules in their favor, instead of waiting for seats to be vacated and re-filled.

You have to kill some of them to put the others in place. If you add 90 slots you can just fill the new 90 slots with your guys. That’s the theory of his statement.

It's easier to have public scrutiny on 9 justices than 99. There isn't enough time for the media or for people's memories to constantly remember to be angry at a few dozen people.

What indication is there of this?

There is no other endgame in which this is pointful.

You're assuming the people making these rules actually understand encryption, which is doubtful at best.

In many cases, (In my country for sure, and I bet this is common elsewhere) members of the legislative branch are approached by people from the intelligence community, who hand them the drafts for stuff they 'need'.

And you can bet those people do understand encryption.

They might understand encryption, but that does not mean they are trustworthy or truthful...

Related to that, Bernstein vs. USG is a nice story.

IRC clients are largely open source. How do we know they haven't been backdoored?

I think you answered the second part of your post with the first, didn't you? Get the source and compile it yourself.

There have been backdoors in Open Source code. I have only read a tiny insignificant fraction of the Open Source code I run.

I am about 90% certain that IRC and Slack are already well backdoored.

IRC? If that were the case, and open source servers like UnrealIRCd were somehow backdoored in a way the community couldn't detect, you're still free to implement your own backdoor-free server and client if you want. The spec is freely and openly available as RFC1459.

That, and there is OTR. You can create your own IRC client in a few lines of code in Python, and you can pull OTR on top of it quite easily.
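On the "few lines of Python" point: the IRC wire format (RFC 1459) is just CRLF-terminated text, so a client sketch is genuinely tiny. The server hostname below is a placeholder, and OTR would be layered on top of the PRIVMSG text:

```python
import socket  # only needed for the (commented-out) network step below

def irc_lines(nick: str, channel: str, text: str) -> list:
    """Build the raw RFC 1459 lines to register, join a channel, and
    send a message. Pure string formatting - no network I/O here."""
    crlf = b"\r\n"
    return [
        f"NICK {nick}".encode() + crlf,
        f"USER {nick} 0 * :{nick}".encode() + crlf,
        f"JOIN {channel}".encode() + crlf,
        f"PRIVMSG {channel} :{text}".encode() + crlf,
    ]

# To actually talk to a server (hostname is a placeholder):
# s = socket.create_connection(("irc.example.net", 6667))
# for line in irc_lines("mynick", "#test", "hello"):
#     s.sendall(line)
```

A real client also needs to answer the server's PING with PONG and parse incoming lines, but that's a handful of additional lines - the point stands that the protocol is simple enough to reimplement from the public spec.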

As far as Slack goes... I trust them as much as I trust Discord: I do not.

Disagree on hypocrisy. US still affords significant freedoms and largely respects human rights. Whether your communications can be decrypted or intercepted on networks that are government regulated anyway is not hypocritical.

Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTPs across physical borders is trivial, and not technically illegal, if I'm not mistaken. Strong encryption is open source, as you've pointed out. There's no law against using those open source libraries, nor any discussion to try to censor/outlaw them, AFAIK.

Policing the airwaves and internet pipes hardly qualifies as some major abuse of human rights, particularly when the best that the Intercept/Snowden crowd can come up with regarding things like "Parallel Reconstruction" is "abuse" of "surveillance power" to catch, e.g., methamphetamine traffickers [1].

[1] https://theintercept.com/2018/01/09/dark-side-fbi-dea-illega...

edit: Downvote due to disagreement? This seems to be the mantra of HN in recent years.

> Policing the airwaves and internet pipes hardly qualifies as some major abuse of human rights

The leaders of today are not the same as those of tomorrow - sweeping powers to invade anyone's privacy and communications could easily be used for nefarious purposes. I don't trust our current leaders with such powers, much less potentially worse ones.

> Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTP's across physical borders is trivial, and not technically illegally if not mistaken. Strong encryption is open source as you've pointed out. There's no law against using those open source libraries, nor any discussion to try to censor/outlaw them, AFAIK.

Do you really think things will stay this way?

It seems to me that TFA is just the next step on a slow, but steady, march towards an authoritarian nightmare - once they've worn us down some more, there will be serious moves against encryption (it's happened before, and politicians have been bringing it up a lot in the past 10 years or so).

While I don't agree that your argument was high-quality:

Paul Graham:

I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.

It only becomes abuse when people resort to karma bombing: downvoting a lot of comments by one user without reading them in order to subtract maximum karma. Fortunately we now have several levels of software to protect against that.


News Guidelines:

Please don't comment about the voting on comments. It never does any good, and it makes boring reading.


Thanks for clarifying and stating your opinion about the quality of my comment. However, seems a bit too broad-stroke to use downvoting for both the (lack of) quality of the comment and to express disagreement.

I thought it was a comment that deserved an upvote.

But I always always downvote complaints about downvotes, such as your edit.

I do not personally have downmod capabilities, but I don't think it is necessarily too broad: If you interpret it as "People shouldn't read this", it seems reasonable.

I think equating "I disagree" with "people shouldn't read this" is problematic for a forum that wants to encourage discourse.

HN doesn't want to encourage discourse, it wants to encourage worthwhile discourse, and the distinction is significant. Consider "people shouldn't read this" as short for "Having made the mistake of wasting my time reading this, I will flag it to help others to not make the same mistake."

Revise that slightly to 'it is a waste of time to read this', maybe.

I downvote quite rarely on HN over disagreeing with someone. Usually it is when I don't feel the reply adds any value, and is actually negative for the discourse.

That is, it e.g. doesn't teach me anything about the opposing position, or is argumentative without any substance, distracting from comments that are more constructive.

Of course other people use different judgement. At the same time, HN doesn't hide comments to a great extent. Even 'dead' comments are optionally visible (with the 'showdead' setting) and quite a few of us read HN with that on. It's very rare for downvotes to silence people here who aren't actively disruptive.

Couple that with first enabling downvotes when people hit a certain karma threshold, and various other limitations, and HN is free of a lot of the downvote problems of other places.

That to me makes it less of an issue if people downvote to signal disapproval here.

Often initial downvotes will be countered when people feel a comment has been downvoted too much as well.

Assuming that US citizens are safe, that doesn't apply to citizens of other countries. So even if the US and the UK respect their own citizens' rights (Snowden showed they don't), they won't respect other people's rights. And then surveillance becomes a tool against countries and policies the US and UK don't agree with, regardless of whether these are a genuine threat or not. So yeah, it is kind of a big problem.

> Disagree on hypocrisy. US still affords significant freedoms and largely respects human rights.

If you are a US citizen, maybe; for the rest of the world, definitely not.

Without being a lawyer, I'm pretty sure random drone strikes on civilians in Pakistan, torture in Guantanamo, or intercepting the entire world's communications are not examples of "respect for human rights".

> Without being a lawyer, I'm pretty sure random drone strikes on civilians in Pakistan, torture in Guantanamo, or intercepting the entire world's communications are not examples of "respect for human rights"

This is a good point, I think. The US has an appalling record on human rights (aside from your examples, arming terrorists and overthrowing democratically elected governments spring to mind) - as long as we're talking about the rights of non-Americans.

Take a gander at https://en.wikipedia.org/wiki/Five_Eyes#Notable_individuals

Some of those individuals were guilty of little more than political activism but experienced real harm (e.g. deportation) thanks to surveillance overreach.

Disagree. The end goal has always been to make civilian use of encryption in a such a way as to prevent government from being able to intercept communication illegal. That’s where we will end up.

Four lights. Nice try, Madred.

The US, by their own admission, is "killing people based on metadata" [0].

Which in practice is done by using machine learning [1] on huge data sets gathered with the global surveillance enabled through Five Eyes.

Because the army of humans that could manually sort through those zettabytes of data has yet to be cloned. All of that ends up in the fancy-sounding "disposition matrix" [2], aka the USG's kill-list. It's just systems upon systems doing their thing, and nobody is directly responsible or accountable for anything that ends up happening, like when yet another 30 Afghani farmers get "splatted" by accident [3].

Considering how this has been going on for close to two decades, and the US has a very convenient way of going about the casualty statistics [4], I guess these Afghani farmers are just another rounding error in the "war on terror". Figures, because before that they were mostly considered biometric cattle [5] and lab-rats for fantasies about "full-spectrum surveillance" [6].

Note: Under Trump, the USG now even stopped releasing their shined up drone statistics. So it's pretty much impossible to know the full scale about what's still going on to this day.

[0] https://www.nybooks.com/daily/2014/05/10/we-kill-people-base...

[1] https://arstechnica.com/information-technology/2016/02/the-n...

[2] https://en.wikipedia.org/wiki/Disposition_Matrix

[3] https://www.theguardian.com/world/2019/sep/19/us-drone-strik...

[4] https://www.theatlantic.com/politics/archive/2012/05/under-o...

[5] https://www.wired.com/2010/09/afghan-biometric-dragnet-could...

[6] https://we-make-money-not-art.com/big-eye-kabul-surveillance...

How do people post such comments? Do you have these lists of source links just saved somewhere or do you just do it on the spot?

It's just stuff that sticks in my mind as noteworthy over the years and decades of surfing the web, following those particular topics.

I mostly remember the headline, Google does the rest of leading me back to the article.

It's impressive, thanks for taking the time.
