
With the spread of misinformation and rage through messaging apps, which has literally resulted in people getting killed by mobs (see, for example, https://www.nytimes.com/interactive/2018/07/18/technology/wh...), maybe we should re-evaluate our belief that making it impossible for governments to see what is spreading through messaging apps is an unmitigated good?



The problem is that this situation is all or nothing. You can't break the encryption only for criminals; it's either secure for everyone or it's broken for everyone, and not just in a way the government can exploit: hackers will find a way to exploit it as well. Can you imagine the damage if the world's IM history were leaked? This is probably the most sensitive data in the world.

Allowing governments access to IM history gives them far more power than they have ever had; this is not just restoring lost powers. Never has a government been able to see your entire history of every conversation going back for years. I think most of us would be OK if it were actually possible to create a system where only the government, after going through proper court process, was able to intercept messages from that time forward, but currently there is no good proposed solution, and very dangerous laws like those in Australia are being approved.


How people use a tool is not the fault of the tool - there is an underlying issue that drives that behavior. It would be like mandating that hammers be soft enough that they can't damage a skull because people use them to bash in people's heads, which, yes, would prevent hammers from being used as weapons but would render them ineffective at their original purpose.


I don't think that's entirely true; sometimes tools have only one purpose. It would not be ethical to manufacture nukes and sell them to people, for example.

Even with a messaging app, imagine that you created a new one, and then found that for some reason 90% of your user base is hitmen communicating with their clients. Maybe that's not your fault, but I think you would be ethically obligated to shut it down, or significantly modify it to stop enabling hitmen.

Obviously these are contrived examples, and often in real life it's impossible to make a tool that can't be used for evil. But I don't think you're devoid of responsibility just because you didn't intend for your creation to be abused. If you accidentally created something dangerous, you have an obligation to take reasonable measures to mitigate the danger.


I think a large factor in this is the range of intended uses - in the example of a nuke, it can only be used for one thing, which is evil, so there is no downside to banning it or mandating changes to the properties inherent to its existence. But tools like private messaging and hammers have huge potential for being used for good (due to the same properties that make them useful for evil), and targeting those properties to reduce their viability for evil also reduces the amount of good they can do.

All that being said, I do agree that in some cases there is a definite ethical burden on a creator to consider the impact of his creation - I just think that in many cases the best solution is not to change the tool to avoid misuse, but to figure out why the misuse occurs or would occur in the first place and try to solve that. I would conjecture that the misuse more often than not points to a deeper social issue that is for some reason not being properly dealt with, but which is actually a really big deal that no one wants to confront. I can think of a few examples, but I think that level of exploration may be better suited to a blog post than a comment.


>With the spread of misinformation and rage through messaging apps, which has literally resulted in people getting killed by mobs (see, for example, https://www.nytimes.com/interactive/2018/07/18/technology/wh...), maybe we should re-evaluate our belief that making it impossible for governments to see what is spreading through messaging apps is an unmitigated good?

People get killed by mobs in China[1] as well, a country that backdoors all major social networks (Weibo etc.) as well as the network edge (the Great Firewall).

https://en.wikipedia.org/wiki/Human_flesh_search_engine


What makes you think these "mobs" used WhatsApp for its security properties? I'd expect it's far more likely they use WhatsApp because it's the most popular messenger in that part of the world.


People aren’t dumb.

Remember Nextel Direct Connect? It was known that those communications initially weren't tappable, and for a time every street-level drug salesman had one.


The way WhatsApp was used in e.g. Myanmar included group chats involving dozens of random acquaintances - basically the equivalent of gathering in a town square to gossip. Needless to say, there isn't any security to speak of in such an arrangement, nor did anyone ever complain that WhatsApp's encryption was the obstacle. It's the part where everybody can gossip to everybody, with transparent scaling, that enables flash mobs - and it could just as well be plain-text SMS otherwise.


How would the ability to intercept everyone’s communications have mitigated that incident?


And how many people need to be killed by authoritarian states, or by rogue elements in governments with totalitarian powers, before we consider the occasional death due to a lack of surveillance acceptable?

I mean, yeah, sure it's something to consider. But it's not exactly like too much surveillance hasn't ever killed anyone.


So we should just go ahead and put Orwell's Telescreens in everyone's house so the governments can see what we're plot^H^Hanning all the time?

(Looks around the office and sees the Echo and Google Home, remembers how many friends have those and/or Samsung "Smart TVs" in their homes, or who have their phones constantly listening for "OK Google" or "Hey Siri". Right. As you were...)


No.

The state should be able to get a warrant to intercept communications for reasonable cause, and the accused should be able to litigate the validity of the search.


Unfortunately, in the world of crypto, if the state can intercept communications, then it's equally possible that a determined attacker can. Encryption either works for everyone, or it doesn't.

Governments are more powerful actors on this playing field, but they are far from the only ones on the field.

A warrant doesn't expose what you're saying only to the government; it exposes what you're saying to anyone who might be listening.

We're not talking about the equivalent of handing some documents over to the police; we're saying the police are ordering us to stick the documents to the window of our house so that they can see them.
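
To make the "window" analogy concrete, here's a minimal sketch in Python (using the `cryptography` package; the escrow scheme itself is hypothetical, not any real proposal) of why a "lawful access" key is just another key - it decrypts exactly as well in an attacker's hands as in the state's:

    # Hypothetical key-escrow scheme: every message is encrypted twice,
    # once for the recipient and once under a "warrants only" escrow key.
    from cryptography.fernet import Fernet

    recipient_key = Fernet.generate_key()  # held by the intended recipient
    escrow_key = Fernet.generate_key()     # held "for lawful access only"

    message = b"private conversation"
    for_recipient = Fernet(recipient_key).encrypt(message)
    for_escrow = Fernet(escrow_key).encrypt(message)

    # The math cannot check for a warrant: anyone who obtains the escrow
    # key - court order, insider leak, or breach - reads the same plaintext.
    attacker_copy = escrow_key
    assert Fernet(attacker_copy).decrypt(for_escrow) == message

Real proposals are more elaborate than this toy, but the structure is the same: the escrow key is a single point of failure sitting in front of everyone's traffic.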


That's a little hard to do in a world where this kind of thing is done by National Security Letter.


You may notice that I didn’t include any advocacy for NSL in my comment.

Pretending that technology is infallible is a defect just as bad as NSLs with respect to the rule of law.

Obviously no complex IT system is infallible, and now we have a situation in which, with no legal remedy, the police and intelligence services are compromising systems or allowing latent defects to remain in order to fulfill their missions.

You never hear about law enforcement concerns with respect to iMessage or Signal, so it is likely that whatever security you think you have from the state is not meaningfully there.


I'm pretty sure NSLs are only enforceable in the US.


Plenty of other countries have similar mechanisms, where the user is also not informed (because of a court order or similar). The US is certainly not the only culprit here, although they may or may not be the worst.


Australian resident with a UK passport checking in here. Australians are fucked too. And we pretty much copy/pasted our new laws from the UK ones, so UK residents are as well. If Canada/New Zealand have not already passed equivalent laws or are not in the process of doing so, my paranoia about Five Eyes might be a little miscalibrated. But realistically, I suspect it's more likely that I'm not paranoid enough, rather than too paranoid...


How will the accused know about the search when they aren’t told about it?


I agree. There's often an extreme point of view here on this subject.


Meta, but this parent comment is a good example of the misuse of downvoting on HN.

If you don't agree with the poster then engage in debate, don't just click the little arrow to try to grey it into oblivion.

We're all here to learn. Why not learn how to challenge this fairly widely held view? Imagine you're talking with a 'normal' at a Christmas party and they say that; do you just stomp away singing LALALA?


It's possible this got downvoted because it is seen not as a misunderstanding of reality to be corrected, but as a harmful meme that the original poster doesn't actually believe themselves. Viewed through that lens, it certainly does deserve to be downvoted and does not deserve to be interacted with. Not that I advocate either action.


Maybe we should consider that goods can still be worthwhile even when they aren't unmitigated.


I agree, but when you do that, you need to actually make an accounting of the costs and benefits. If you look among programmers and security specialists on, say, HN, there is not even a debate or discussion about this, but rather an absolutist position that this is good and that the only reason to think it is bad is that you are a totalitarian government wanting to oppress your people.


I think you're conflating two different positions, which do admittedly co-occur in many people: 1. the technical, that any such "lawful access" mechanism is necessarily a backdoor, with all that implies, and thus to be eschewed on a "fundamental principles of good security" basis; and 2. the moral, that any such backdoor is a crime against humanity, or whatever, because some of the people who have the technical capability will be leveraging it in order to oppress, and all of them will be doing so in order to act in a manner contrary to the user's interests.

Who do our tools serve? Is it just that they should be made to serve someone else, against us? Where, exactly, is the line on one side of which it's justified, but on the other it's abuse? How do you build a system that prevents abusive uses, but allows appropriate ones?

Decrying absolutist positions is all well and good, but it is a nigh-on tautology-level truth that a system with a flaw or backdoor will be exploited - usually in multiple ways, and well beyond any intended ones.


Yeah yeah, but this all comes down to whether you think government interception of private messages is ultimately better than the messages staying private. People here obviously don't feel the same way you do about that.

Myself, I'm not quite so convinced as you seem to be of the harm of "absolutist" positions. Maybe the truth isn't always somewhere in the middle.



