Why privacy is important, and having “nothing to hide” is irrelevant (robindoherty.com)
697 points by synesso 535 days ago | 316 comments



I think the tech crowd is in denial about their role in surveillance.

We expect professionals to behave ethically. Doctors, and companies working on genetics and cloning, for instance, are expected to behave ethically and to have constraints placed on their work, with consequences for those who behave unethically.

Yet we have millions of software engineers working on building a surveillance society with no sense of ethics, constraints or consequences.

What we have instead are anachronistic discussions of things like privacy, oddly disconnected from 300 years of accumulated wisdom on surveillance, privacy, free speech and liberty, that pretend the obvious is not obvious and delay the need for ethical behavior and introspection. And this from a group of people who have routinely postured extreme zeal for freedom and liberty since the early '90s and produced exactly one Snowden.

That's a pretty bad record by any standard, and it indicates the urgent need for self-reflection, industry bodies, standards, whistleblower protection, and a wider discussion to insert context, ethics and history into the debate.

The point about privacy is not you; no one cares what you are doing, so an individual perspective here has little value. The point is building the infrastructure and ability to track what everyone in a society is doing, and to preempt any threat to entrenched interests and the status quo. An individual may not need or value privacy, but a healthy society definitely needs it.


"Yet we have millions of software engineers working on building a surveillance society with no sense of ethics, constraints or consequences."

No, we don't.

We have probably a few hundred doing hard-core surveillance. We have another few thousand functioning as enablers by making social media and ad networks really attractive. We have a whole lot of non-engineers insisting on placing ads and tracking on their websites.

And then there's the vast bulk of software engineers who have nothing to do with it, and nothing they do will stop it.

If 50% of doctors decide to stop doing something, it gets noticed. If 99% of software engineers decide to take enormously strong stands against surveillance, even at great personal cost, surveillance continues as if nothing happened, except maybe those who work on it get paid a bit more to make up for decreased supply.

It may, in that weird 20th/21st century fashionable-self-loathing way, feel really good to blame the group you're a part of, but basically what you're proposing won't do anything at all. You're imputing to "software engineers" in general abilities they don't collectively have. You've got to attack it at the demand level, you will never be able to control the supply. This also matters because if you waste your energy with that approach, you might decide you've done something about the problem and stop trying when in fact you've done nothing.


You're really low-balling.

I work in the greater D.C. area. Within a 150-200 mile radius, there are literally tens of thousands of developers working directly on surveillance. Probably even more. How do I know this? From random sampling: go to any tech event, talk to any program manager at any government contractor. The work and money are in surveillance.

And, that's just government surveillance. All that tech is then spilling over into corporate surveillance. Location and behavioral tracking is big money. How do I know this? Because, sadly, that's how I have to make my money. The problem is that there's always another grunt like me willing to create the systems that enable this.

The solution: Use all of this surveillance tech and data to expose all of the VIPs. Publicly post where they are and where they've been, who they've been with, what they read, and what they buy. You do this and laws will be created pretty quickly.


I don't know what dreamland you live in. Everyone knows what happened during the 2008 meltdown; what laws have been created?

Everyone knows how the invasion of Iraq was a complete mistake. Has someone gone to jail?

The public is not going to shut down anything they are wholly complicit in and benefit from. Which is why empires eventually fall.


"The solution: Use all of this surveillance tech and data to expose all of the VIPs. Publicly post where they are and where they've been, who they've been with, what they read, and what they buy. You do this and laws will be created pretty quickly."

I'd like to think that'd be the case, but consider one of the more-recent privacy intrusions with "The Fappening" ... very little became of that, despite the wealthy, high-profile individuals involved. I realize they weren't the politically connected, but they were certainly what society considers "VIPs".


> The solution: Use all of this surveillance tech and data to expose all of the VIPs. Publicly post where they are and where they've been, who they've been with, what they read, and what they buy. You do this and laws will be created pretty quickly.

This has happened in the past, and the reaction from the individuals involved has been to do a complete 180 on their opinion of surveillance (there was a recent post with sources, but I don't have it handy). This could work.


I fear that those new laws would target classes. "No surveillance of elected federal officials." "No surveillance of government contracting corporate executives."


> The solution: Use all of this surveillance tech and data to expose all of the VIPs. Publicly post where they are and where they've been, who they've been with, what they read, and what they buy. You do this and laws will be created pretty quickly.

Laws will be created pretty quickly, but only to protect VIPs.


> 99% of software engineers decide to take enormously strong stands against surveillance even at great personnel cost, and surveillance continues on as if nothing happened, except maybe those who work on it get paid a bit more to make up for decreased supply.

Yup. And the magic of digital content, software included, is that it's infinitely copyable. It takes one guy to write a surveillance package and open-source it, or have his company sell it, and everyone can now use it.

It's not engineers who make the decision to use surveillance technology. Hell, for most of the work a software engineer does, most of the data coming from surveillance tech doesn't even matter.


Every time one of millions of developers commits code to a centralized service, they have a hand in exposing individuals' data to the surveillance society we live in today. Data can be exposed by being publicly available on the site itself, aggregated through the service's APIs, leaked through security holes in the service's implementation, stolen by hackers looking for high-value targets, or misappropriated by the company itself or its upstream providers. The idea that customers can somehow understand the scope of their exposure to privacy violations is laughable.

One thing that became apparent to me when I began to focus on this issue is that there are countless services that provide services to other services, all of which have some degree of access to upstream customer data. For example, if you log to a hosted logging service, some of your customers' data is sent to them. If that service uses AWS, the data is also sent to Amazon. And so on.

http://www.stackgeek.com/blog/kordless/post/a-code-of-trust

Arguing that efforts to make things better are pointless is a very dangerous thing to do, assuming we actually want things to be better. Cognitive dissonance is a powerful force, especially when there are startups to be built!


That is not what's typically meant by surveillance.

Sure, it turns out using centralized web services has helped the government with things such as PRISM, but that doesn't mean we should blame people for those development practices rather than the government.

Prior to PRISM, pretty much any reasonable person would assume that the blobs you store in S3 aren't going to be looked at by anyone or, worst case, that metadata will be seen by AWS employees for debugging purposes.

What we have done is make things a ton better for developers; we can build things more quickly and easily, which empowers society and humanity. The fact that this has incidentally contributed to a surveillance society, through no intent of the developers and in a way no one would reasonably expect, does not make the developers culpable.

It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?

The right thing to do here is to call for better use of encryption where possible and, for surveillance issues, to rein in the unreasonable government programs that make this practice result in such problems.


> Prior to PRISM, pretty much any reasonable person would assume that the blobs you store in S3 aren't going to be looked at by anyone or, worst case, that metadata will be seen by AWS employees for debugging purposes.

I beg to differ. Where you concentrate power, you have to expect abuse.

> It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?

That's a false dichotomy. Yes, I want to go back to people running their own hardware with their own databases. And to have that work as easily as your favourite cloud service. There isn't anything inherent in running your own hardware that requires that to be a major burden.


> There isn't anything inherent in running your own hardware that requires that to be a major burden.

Sure there is. Climate control, redundancy/backups, and power consumption/reliability, to name a few, are all concerns that we get to delegate to "the cloud," that are 100% "inherent in running your own hardware."

I applaud your usability argument, but there are most certainly inherent burdens to running your own hardware that don't exist for cloud services.


> Climate control

A 10 watt server doesn't need climate control.

> redundancy

Is mostly a matter of software.

> backups

Is also mostly a matter of software. With some simple peering mechanism, you can store backups on your friends' servers (and they on yours). Though a standardized pure backup storage API for cloud storage of encrypted backups at one (or more) of a multitude of providers might be a useful option to have.

> power consumption

Is a matter of plugging a plug into a socket in the wall.

> reliability

Is also mostly a matter of software.

Now, I am not saying that running your own datacenter is no work, but running a server or two for your personal needs or for the needs of a small company should be possible to make almost a no-brainer.

There is no technical reason why you shouldn't be able to buy a bunch of off-the-shelf mini-servers for a hundred bucks or so apiece, peer them by connecting them with an ethernet or USB cable (or whatever might be appropriate), connect them to the internet wherever you like, and have them automatically replicate their data among each other. They could allow easy installation of additional services via a web interface, with automatic software upgrades, and let you rebuild the state of a broken server by connecting a new one and clicking a few buttons. There are many ways to solve the details, but my point is: cloud providers don't employ one admin per machine either; they automate the management of their machines to make things efficient. There isn't really any reason why much of the same automation (which is mostly software, after all) shouldn't be usable on decentralized servers in people's homes.
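The peer-backup idea above only works if your friends never see your plaintext. A minimal sketch of that, using the third-party Python `cryptography` package (the variable names and the `backup` payload are illustrative, not a real protocol):

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it only on your own machines;
# the friend hosting your backup never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the backup before it leaves your machine.
backup = b"contents of family-photos.tar"
ciphertext = cipher.encrypt(backup)

# The peer stores only opaque ciphertext; on restore, you
# fetch it back and decrypt locally with your key.
restored = Fernet(key).decrypt(ciphertext)
assert restored == backup
```

With that shape, the "standardized pure backup storage API" mentioned above only ever needs to move ciphertext around, so it doesn't matter much whether the peer is a friend's mini-server or a commercial provider.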


> It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?

This statement is logically in conflict with itself. It argues that diminished trust in how data is handled is justified by savings in the time and cost of running the application's infrastructure. The conflict arises when you start assuming the data's trust requirements are acceptable for a given customer. The fact is, you can't speak for my trust levels, which is exactly what the link discusses.

I get to say what trust levels I want for my software and data. Not being able to use the software because I can't trust it is an unacceptable proposition, so I challenge our abilities to build something better than what we have today, and do so without rationalizing why we aren't building it.


> must run their own hardware with their own databases and so on

I have hardware in my pocket that is hundreds of times more powerful than the first web servers I ever worked on. There is no technological reason why that same hardware couldn't be used. I'd love to have a PAN based around my phone (which is way more local to me than much of the "hardware with their own databases" that I've ever worked on). Federation to Facebook/Google/Instagram/whatever the next big thing is would be amazing. And the reason it hasn't happened, even though powerful hardware is everywhere, isn't lack of technology.


I'm working on a product that federates software deployments to any target cloud using immutable data structures provided by the blockchain. It's called Wisdom.


Doctors are actual professionals. They have rigorous certifications, a professional board that administers examinations and issues licenses, and a defined structure for reporting ethics violations, with a code of ethics dating back centuries.

Programmers are just a loosely-defined group of tinkerers, labourers, and the odd scientist or engineer. How do you expect to impose a structure on that? A teenager can tinker around with software in his bedroom and nobody gives a damn. If he were to conduct medical experiments on his little sister, on the other hand, he'd go to jail. That is the difference.


Doctors (as a profession) have a professional regulatory board and are educated in ethics, but doctors (as individuals) can be just as corrupt as anyone else. The difference is accountability within the profession. If a doctor starts doling out tons of prescription opiates, an auditing system is in place (many levels, in fact: within the state, nationally via the DEA, via someone arrested who will rat the doctor out in exchange for lenient terms, or via a pharmacist who has seen one too many "Oxycodone 30, take as needed" scripts pass through his shop).

Programmers (as individuals) can't be ethically audited, but what we can do is regulate the data that is allowed to be collected. You regulate it like any other industry. Sigma-Aldrich is a company that sells pharmaceutical-grade precursors. I was dating a girl doing a post-doc in o-chem; while waiting in her office for her to finish something, I flipped through their catalog and saw a precursor, heavily flagged by the DEA, that could be used to synthesize massive amounts of a recreational drug. Curious, I asked her the procedure for procurement, and she delineated it. In short, she could get it fairly easily with a sign-off from the PI and a few other things [she would never do that, she's far too ethical, but her PI was famous enough that a request on his letterhead with "Veritas" on it would have been enough]. Still, there's a chain of custody and an auditing system, just like there is with doctors who are issued DEA numbers. If I called up S-A and asked for the same chemical, not only would I be laughed off the phone, but they'd likely submit my information to the DEA to flag me for further investigation.

What am I getting at? You can't regulate people, but you can regulate systems. If that precursor were ordered and that drug happened to pop up, the DEA could easily call any of the suppliers of those precursors and figure out when it was dispensed. We need to regulate any institution that collects data in the same way. When an institution is large enough to collect information at that level, issue compliance terms. In the same way that publicly traded companies have to release financial information to the SEC and comply with numerous reporting terms (look at EDGAR to see how extensive it is), open up another branch of the government in charge of regulating the companies that collect data. That way, your engineer with loosely defined morals who is capable of doing whatever will be prosecuted just like amoral doctors.


> We need to regulate any institution that collects data in the same way.

I feel like this is too wide. Everyone collects data. I don't mean all tech companies collect data, I mean, for example, your friends have copies of the emails you've sent them. They have photos with you in them of places you've been with timestamps and GPS coordinates. Your coworkers have access to your calendar. Your mechanic has the service history on your car. Your librarian knows which books you have checked out.

These aren't problematic situations, because they each hold only a little piece of your data, you trust each of those people with that little piece, and if you don't, you don't have to give it to them.

The problem is when you don't have that choice, which is what happens when you're dealing with a government or a monopoly (or some other concentrated market where you can't trust any of the players). You can't reasonably choose not to have your location collected by your mobile carrier, or by the traffic cameras in front of your home. And if all your friends use Facebook, then it's Facebook or nothing.

But we don't really want to regulate Facebook. I mean holy cow, what is that even supposed to look like?

I think we can separate the problem into two pieces. The first is collection by, let's call it, unavoidable monopolies. Telecommunications carriers and other utility companies. This is where we know exactly what to do, because these entities should not be collecting any information about people at all. There is no reason Verizon needs to know anything about you other than whether you've paid your bill. So regulation here can be useful, e.g. make it unlawful for carriers to triangulate a cellphone's location without a warrant, or collect anything whatsoever about the contents of IP packets. But we also have a strong technical solution here. Encrypt all the things. Fully deprecate HTTP in favor of HTTPS. We need to build, for example, DNS query privacy. Things like that.

The other part of the problem is what you might call avoidable monopolies. There is no fundamental reason why Facebook has to be as centralized as it is. You have a phone which has all your photos on it and is connected to the internet 24/7. Why is there a copy of your photos on Facebook's servers? If one of your friends wants to see one of your photos, why are they not getting it directly from you? Then you don't have to trust Facebook with a copy of it. So the solution for this half of the problem is, disintermediate the avoidable monopolies.


An interesting point, and I generally agree. But RE "avoidable monopolies":

> Why is there a copy of your photos on Facebook's servers? If one of your friends wants to see one of your photos, why are they not getting it directly from you? Then you don't have to trust Facebook with a copy of it. So the solution for this half of the problem is, disintermediate the avoidable monopolies.

It's because decentralization like that is stupidly, stupidly inefficient. Not to mention that the assumption that your phone is actually on-line 24/7 is unrealistic, and that's before we notice we're not on IPv6 yet, or that people also use cameras, or that they change their phones, go out of service range or simply want to free up space on SD card for something else.

So the fundamental reasons are a) efficiency, and b) availability. That's not to say things couldn't be improved wrt. privacy. I don't know that much about crypto yet (that's about to change, for work-related reasons), but I vaguely recall that there are encryption schemes that would let only you and your friends access the data stored on third party servers, and that would make the data unreadable for said third party.


> It's because decentralization like that is stupidly, stupidly inefficient.

Disagree. If you're Netflix wanting to distribute Jessica Jones then you want something like a CDN (although in that context BitTorrent is also "something like a CDN").

But think about wanting to share photos with your friends. There are only thirty people who actually want to see the photos. Twenty-five of them live in the same city as you, which makes direct connections to you about as efficient as a local CDN node, and the other five live in four different cities, so in all but one case there is nothing to be gained from caching in any of those places, because only one copy will ever be requested. In that last case the CDN would conserve just one long-distance copy, and that's assuming we can't make P2P software smart enough to have the second person in Timbuktu get the photos from the first person there.

> we're not on IPv6 yet

This one is probably the main reason why this hasn't actually happened yet, but it's not like we don't know what to do -- how about we get on IPv6 already?

> or that people also use cameras

You seem to be implying there is some reason why a photo taken with a camera couldn't still be distributed using a mobile device (or plug server or PC or whatever you like).

> or that they change their phones

And then they can copy the stuff from one to the other.

> Not to mention that the assumption that your phone is actually on-line 24/7 is unrealistic

Availability is a different tack. OK, your phone doesn't have twelve nines of uptime, but it probably is actually online upwards of 90% of the time. And we know how to build reliable systems out of mostly-reliable pieces.

We're assuming that there is a piece of software on your device which already knows who your friends are. So now it just needs a check box that says "cache things for my friends if they cache things for me" and now your friends can get your photos from your other friends (or from their own device) even when your device is occasionally incommunicado.

> or simply want to free up space on SD card for something else.

I think there's a law of physics that says your photos, to exist, have to exist somewhere. I suppose "I would rather give my private data to Facebook than buy an SD card big enough to hold it" is the sort of thing you have to decide for yourself.


It's only inefficient because NAT ruined the ability to publish.


Not really. Even if you could expect everyone to maintain their own servers (because a phone is not a device suitable for the task) - and remember, we're talking about the general population, not just techies - connecting like this would still be inefficient, compared to a bunch of central CDNs mirroring the data. Also, I can imagine it would be a logistics hell, unless you're willing to add more layers of indirection (e.g. trackers, the torrent kind).


In this particular scenario you may do better encrypting the content and sharing keys between you and your friend, but not with Facebook.


Another important bit that contributes to your excellent comment: software and data are very hard substances to control. Unlike physical goods such as precursors, they can be transported through wires all over the globe in less time than it would take you to fill out that sign-off form.


Also, when data is stolen, it is simply copied; the "original" remains on your computer, unlike physical items, which disappear when stolen. So theft is really hard to notice.


Very informative reply, thanks! How do we regulate data-collecting institutions internationally? Look at the EU's Data Protection Directive[0]. As extensive as it is, it's struggling in the wake of the failure of the Safe Harbour Decision[1].

[0] https://en.wikipedia.org/wiki/Data_Protection_Directive

[1] https://en.wikipedia.org/wiki/International_Safe_Harbor_Priv...


I'm not informed enough in law, much less international law w/r/t intangible assets [and, maybe more importantly, the political infrastructure surrounding them], to make an informed response, but I'll try based on my (limited) historical knowledge. (This is a pundit response, not an informed one.) Even if we constrained your request to a purely domestic domain, it would be challenging because of the corporate interests that would actively fight against it. Google et al. would stomp on any bill that even remotely infringes on their ability to aggregate data, as targeted ads are (or were as of circa 2011, when I last looked at a cash-flow report of theirs) ~95% of their revenue.

Magically, should a bill/resolution be introduced to the floor and not be stomped on immediately, enforcing it internationally would be about as difficult as, say, enforcing international oil embargoes or a ruling by the ICC (i.e., nearly impossible; you don't see any proceedings against Cheney or Rumsfeld for war crimes in the Hague, now do you?). Domestically, however, the US has (or had, historically, from roughly 1930 until the mid-90s) the economic and political influence to enforce its agenda fairly effectively. The new US government entity would have to have the intent to limit data collection and then exhibit the willingness to penalize institutions for violating those data-collection policies (e.g., similar to an FDA fine issued to a multinational drug company with a presence in the US).

Again, there are too many opposed financial interests to see this happening, though refusal to adhere to the legislation would mean (in theory) loss of US business, which would be catastrophic for most industries. HN user grellas is (or was; I haven't seen him post in a couple of years) an attorney specializing in tech affairs who could give a better answer, but from a strictly political point of view, even domestic legislation limiting data collection would never occur.


This web site is for you; it's insulting to programmers to call them hacks.

( http://acm.org/about-acm/acm-code-of-ethics-and-professional... )


One thing people seem particularly blind to is that private companies holding data for their own purposes are the biggest point of failure for privacy.

The government can get your gmail, facebook, verizon, amazon data because those companies keep that data about you. The NSA doesn't need to spy on you, google already does. I don't think the NSA is reading my email, but I know Google is.

Not to mention that when all these tech companies are spying on you for profit, your privacy is already destroyed.


A lot of people are not blind about this at all.

They understand fully that their data is collected and they expect nothing less than the top result of their Google, Amazon, and Facebook queries to match exactly what they are looking for.


A lot of people have an abstract idea that information is being collected, yes. I suspect that few people realize how much information it is, who has access to it, or what purposes it's used for.

Does anyone remember that angry email they sent 5 years ago where they were criticizing their boss? Google does. What kind of profile can you build from thousands and thousands of such emails, messages and queries, and location data and pictures, videos, actions on social networks?

I think some companies have a better idea about who some people are than those people themselves.


And thanks to this technology, people can understand themselves better, if they choose to. It's truly bizarre that some people instead conclude that the right way to "correct" this discrepancy is for companies to know less.


Why do you find that bizarre? Given the asymmetry of resources and conflicting interests between people and the companies who know so much about them, it seems perfectly reasonable to want companies to know less. The vast majority of that knowledge is used to take from the consumer, not to give.


It's not a zero sum game.

Good companies use information they collect to provide better services. Bad companies use it to rip people off. The problem of bad companies doing bad things is independent of companies having information about people.


The arguments for information control are similar to those for anything else -- less [x] floating around, less potential for abuse.


And that argument by itself is not enough to restrict anything. Less hats floating around, less potential for abuse. Less televisions floating around, less potential for abuse.


The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation

https://www.asc.upenn.edu/news-events/publications/tradeoff-...

    ... the survey reveals most Americans do not believe that ‘data for discounts’
    is a square deal.

    ... Rather than feeling able to make choices, Americans believe it is futile to
    manage what companies can learn about them. The study reveals that more than half
    do not want to lose control over their information but also believe this loss of
    control has already happened.


Define "a lot" and "fully".

But more importantly: sure, there are people who think they deserve to be mistreated, there are people who are drug addicts to the point of barely being anything else and would still fight anyone who gets between them and their dealer, and of course there are plenty of people who have no problem with all sorts of messed-up things, up to murder, as long as they themselves are not on the receiving end. Yet even if 99% of all people regressed to that station, that wouldn't do one bit to diminish my own human rights. That some, or even a lot of, people are fine with certain things, whether they understand them "fully" or, which I find more likely, "not in the least", is the problem, not the solution.

Driven to the extreme: the right of people to do what the White Rose did will always outweigh the right of people to not be part of the White Rose. It's dissidents and persecuted minorities who define the boundaries of these things, not the people who live in comfort in exchange for not standing up for anything or against anyone. They exist, and their opinion matters as a problem to be solved or worked around, but that's the extent of it. Some things cannot be justified by anyone agreeing to them; people do not have that power even when numbering billions.


That's a big part of why restrictions on governmental collection and retention of data will never suffice. (The other big part is terrorism fears. Whether or not you or I agree with the plan -- as a practical political matter, a lot of surveillance will be done to try to identify in-country baddies.)

http://www.dbms2.com/2010/07/04/fair-data-use/


Have you seen the Idle Words talk, "Haunted by Data"? http://idlewords.com/talks/haunted_by_data.htm

Pretty compelling talk, culminating in:

    I believe there should be a law that limits behavioral data
    collection to 90 days, not because I want to ruin Christmas
    for your children, but because I think it will give us all
    better data while clawing back some semblance of privacy.


I left the NSA a few months ago largely for these reasons. While I knew I personally couldn't do much (if anything) to stem the onslaught against human rights, I knew at the very least I didn't want my life's work contributing to it. So I left, leaving behind a very secure sysadmin position with good salary, six weeks of paid vacation and incredible health benefits. Now I'm a 1099 independent contractor for a financial services company with zero job security. But the freedom of burden on my conscience has been worth it.


You probably had some serious inner battle about this. I applaud you for taking a stand; these kinds of decisions aren't easy. It gives me hope that there are still people out there who care more about doing the right thing than about tapping a source of money. Tangentially, I read somewhere months ago that a group of activists built a tent near the NSA building and invited the people working there to come and discuss their job and its issues. I'm wondering how successful that effort was. How was it received by the employees?


I didn't interact with any of the people you're referring to, so I can't comment on that situation.

Thanks, it was a difficult decision and took me a while to come to, but I knew I couldn't continue working there in good conscience.


I applaud your courage! This is so great to hear. It's only when people stand up for what they believe, and nearly always at great personal cost that meaningful change begins to look possible.

It's so important to have activism and structures in place to protect whistle blowers and others not comfortable with our current direction to take this difficult path. Respect!


Large parts of the tech crowd are making big money from advertising and tracking people. Unsurprisingly, it seems these developers and entrepreneurs have a hard time understanding why their source of income has negative effects on society.


I can't help but be reminded of a sentence by Theodor Adorno:

    Keineswegs weiß man bestimmt, wie die Fetischisierung
    der Technik in der individuellen Psychologie der 
    einzelnen Menschen sich durchsetzt, wo die Schwelle ist
    zwischen einem rationalen Verhältnis zu ihr und jener
    Überwertung, die schließlich dazu führt, daß einer, der
    ein Zugsystem ausklügelt, das die Opfer möglichst
    schnell und reibungslos nach Auschwitz bringt, darüber
    vergißt, was in Auschwitz mit ihnen geschieht.[0]
    
In short: the fetishization of technology makes its creators forget the purposes to which their wonderfully efficient tools will finally be put.

[0] Erziehung zur Mündigkeit, S. 91


I think you missed an important bit in your translation.

"We do not know how to determine how the technology fetish in individual people leads to the point at which a rational relationship changes into one of over-valuing, which eventually leads to someone who designs a train system to get the victims to their destination in Auschwitz as quickly and smoothly as possible, but who forgets what happens to them once they arrive there"


This is a good time to remind readers of this Upton Sinclair quote, too: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it".


Well, I actually feel it concerns much more than simply money. The avant-garde - be it technological, artistic or intellectual - has always shown a tendency to join forces with the darkest of regimes during the early stages.


I wonder if it isn't because the darkest regimes, when they're just starting, show the most promise for progress and positive changes. Didn't Nazis offer the Germans their wealth and their honor back, in the times they were most desperately in need of both?

EDIT:

But then again, fascination with "the other guys" is also a thing. See: the intellectual world of the West being in love with the Soviet Union well into the Cold War.

http://slatestarcodex.com/2015/08/11/book-review-chronicles-...


Populists will claim to give you anything you want as long as you vote for them.


True, but from what I remember from my history lessons, the NSDAP was actually making good on those promises. Which is something that rarely happens in politics.


That was more of a temporary thing right up until the point that the Nazi party abolished large chunks of the civil rights granted by the German Constitution in 1933.

They made lots of promises that they never delivered on (or even planned to deliver on).

One of the more interesting ones:

http://www.bytwerk.com/gpa/vw.htm


That was a really interesting story, thanks!

Also, the very positive tone of the historical article was refreshing. I know it's pure propaganda, but still, we could use some positive articles in the news every once in a while.


Salary is a lot more than simply money for most people; it's food on the table for their family.


> What we have instead are anachronistic discussions on things like privacy that seem oddly disconnected from 300 years of accumulated wisdom on surveillance, privacy, free speech and liberty to pretend the obvious is not obvious, and delay the need for ethical behavior and introspection.

You know, this is not coming from software developers. There's a group of people out there whose living is made by manipulating the public perception and speech. This group is not the software developers.


I'm inclined to agree. I'd like to see more active ostracisation against software developers that have worked for organisations like the NSA and GCHQ. If we can make it a career ending job, it will dramatically reduce their ability to recruit.

However, I'd also like to see general software development think more closely about the role it has in normalising these things. Next time you start to create an account system for your project, ask yourself whether you really need it. Could you engineer around it, perhaps by letting the user store their data, or using a stored key to identify them? Let's go beyond don't store what you can't protect, and aim for don't store what you don't strictly need.


How is Google or Facebook any better? They are spy agencies for advertising purposes instead of national security.

Or companies that deploy ad sense or otherwise depend on companies like Google or Facebook.


Exactly. Between themselves, Google & Facebook have access to a disturbingly large amount of the private information and the thoughts of billions of persons. Pictures, emails, messages, social graphs, you name it.

And now Microsoft decided they also want a piece of the pie.

The kicker? I see people still defend Google all the time, nowadays with bullshit arguments like "I am tired of this you are the product meme", and still excuse Facebook because they need it to keep in touch with others. And they found startups based on advertising and tracking, work for them, and generally support analytics as an inalienable right of software development.


People have a choice (at least to some extent) about the data they share with Google and Facebook. There is no such choice about the information that governments scoop up.


Because one problem at a time. Instead of throwing up ones hands because some other thing is bad, too.


>Because one problem at a time.

Pretty much. I don't use Facebook at all, and give in to Google only on technical searches (which DDG still isn't good at), mapping, and when forced to by work (GCE etc.), so you're preaching to the choir, but let's not try for an overnight coup here.


But...

NSA, GCHQ, BSI/BND, etc. aren't the "bad guy" in theory.

It's within a nation's interest to, within the extent of law and respect for human rights, try as thoroughly as it can to know what's going on in the world. Electronic intelligence is part of that, and a growing part.

In practice, the permissive reactions many/most/all governments have to allegations (or proof) that a comms intel agency has broken the law, that's what the trouble is. That these groups have been allowed to break the law or ghostwrite laws that allow them to violate what would generally not be approved by a citizenry, that needs to be addressed.

I'm not sure how ruining the careers of software developers and computer scientists who've worked for these organizations does anything other than remove from circulation some brilliant members of our community.

Ostracize the middle managers, bureaucrats, politicians that allow the trampling of our rights.

But don't arrest the guy designing the home theater system for El Chapo's vacation house and tell me you've taken down the Mexican drug cartel.


In reality, the software engineers that work for the above are the goons of the bad guys, and we cannot allow them to continue their work as it is detrimental to our interests. Any amount of perspective informs them that what they are doing is not supportive of a democratic state, yet they continue anyway.

Why must we accommodate their subservience? Following orders is no excuse.


>NSA, GCHQ, BSI/BND, etc. aren't the "bad guy" in theory.

Theory is not really relevant when the practical reality is monstrous. The five eyes are not redeemable.


Theory is of course relevant when the implementation of it, and the environment that it's implemented in, is what is poisoned.

It's easy for us to sit at our desks and churn out our work and be mad. And there are things to be mad about, for sure. The wanton disregard for civil liberty and protection is simply irredeemable. And to be sure, I've been a fan of your country's very public responses over the last few years on personal privacy. I hope the US legislature can learn a thing or two.

But it's not the "five eyes". It's the entire world. Any country with an interest in protecting its sovereignty also has some form of information gathering operation.

When that operation gets big and exposes itself, folks get upset because, yeah, being spied on isn't a comfortable thing. Do some countries go about gathering this information more morally than others? Something tells me we'd have to be in the secret inner sanctums of the biggest opponents to really know, and I think the answer would be "a spade is a spade."

Does it help our countries protect themselves? I honestly don't know.

But I do know that "grey hatting" in the general development community doesn't garner this sort of bile and venom. I don't know why being a grey hat for a government should be treated differently.


[dupe]


>Are you giving the green light to guilt via association? Just because someone worked at GCHQ that does not mean they have done something unethical.

If the NSA would like to declassify its employees' job histories in a credible and verifiable fashion, I'd be keen to take this on a case by case basis.

It is possible that the NSA is hiring engineers for benign reasons. If that's the case, I assume they are not classifying that work. After all, the government should not be withholding information from the public without cause.

As it stands, it is reasonable, in light of the revelations of the last several years, to suppose that any engineer who cannot reveal their work history with an intelligence agency has performed unethical work. This is what's known as "preponderance of evidence".

>Who are you associated with that I (or society) doesn't like?

What a fascinating question. So many insights into your character. I'm entitled to associate with whom I like, and your seedy, toad-like implication that, for even suggesting such a thing as personal accountability, I must be a state enemy has the delicious tang of a low-quality KGB-centric TV serial. You know, the kind in which the plucky all-American hero fights back against the oppression of a totalitarian Russian government. It would be funny if global law enforcement wasn't looking towards digitally predicted thoughtcrime as a genuine goal.

So speculate freely, because the real answer is terribly dull: I'm a normal citizen, posting my opinions on an internet forum and associating with you.


That question provides zero insight into my character but it does shed some light on your lack of intelligence.

I don't actually care who you associate with - it was a thought game - you are saying that you are for guilt via association (or at least I assumed you were), which is why I tried to throw that principle back in your face. Jeez, do you need jokes explained to you too?

The fact you think my inability to talk about some secret work I have done means that I've done something unethical is ridiculous.


>The fact you think my inability to talk about some secret work I have done means that I've done something unethical is ridiculous.

The evidence we've seen so far says no, it's not ridiculous. Prove otherwise any time :)


Isn't history full of examples of people doing secret government work, that wasn't unethical, that they could not share outside of their immediate team?

Alan Turing helping to crack the enigma code comes to mind.

You're now assuming all intelligence work is unethical? Your childlike arrogance and ignorance are tiresome.


Doctors have classes that teach them ethics early on in school/training because the vast majority of professional doctors agree on what is ethical. On the other hand, you can't even get many university professors to agree that something like collecting metadata is unethical.

The idea that we could get the majority of the industry to agree on ethics is pretty far-fetched when a large portion think surveillance is making their country safe.


I have found that as an individual software worker, with no union or professional association with the muscle to back me up, any ethical objections I may have are meaningless, as the only thing I can do to object is lose all my household income and be replaced by someone more pliant.

For instance, I find a user control that prevents the user from changing focus whenever the input is invalid to be unethical, or at least severely impolite. It's the equivalent of grabbing someone's face while you're talking to them. Me: "The control you propose is hostile to the user." Customer: "Do it the way we want, or your company loses the contract."

As it turns out, the customer would love to grab someone's face, not just while they talk, but also as they yell, with a light rain of spittle falling gently onto the target's visage. That's because they assume everyone is a complete idiot, whose only salvation is absolute obedience to those officially certified as more capable. They fervently believe that you can order someone to not make mistakes. So it should be no surprise that my ethical objection was meaningless to them.

The people paying for software and hardware enabling Panopticon-style universal surveillance have a completely alien system of ethics, and more than enough money to ignore your personal morality. There will always be someone around in desperate enough financial straits that they will quash their own opinions and take the paycheck.

A cartel enforcer for software workers is the only way to significantly slow down technologies (you can't actually stop progress) that the majority of those workers find to be unethical. That enforcer has to be able to tell its members that they cannot do such work, no matter how well it pays, because otherwise, the buyers, for whom budget size is no obstacle, simply pay the higher price to those who need cash now more than self-respect later.

As long as there are mouths to feed and rent to be paid, the guy with deep pockets will be able to pay another to do his dirty work.

It isn't the ethical training that makes the difference in medicine, but the ethical enforcement. Doctors and lawyers can be decertified by their peers and elders, such that they cannot be rehired as a member of that profession. That means that an employer cannot demand unethical behavior, unless it is willing to compensate to the tune of all the money those people could theoretically make over all the remaining years of their careers.

I would hope that enough software workers could agree that it is unethical to casually collect and retain information from anyone without their fully informed consent, which is diligently confirmed, and revocable on demand. I further hope that we could agree that it is unethical to gather information to support any criminal investigation without reasonable suspicion that the target has actually committed a crime. Those people who believe that adding more hay to the stack makes the needles easier to find can form their own cartel.

I happen to believe that ethically-limited surveillance is more efficient and effective than the heavy-handed dragnet approach. I also think it is unethical to use an O(N^3) brute-force algorithm when an O(N log N) alternative is available. But most customers only care whether something works, and is delivered on time and under budget. They won't ever care about our opinions regarding quality, ethics, or best practices until after we are capable of making them pay dearly for not caring.
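To make that brute-force-versus-efficient contrast concrete, here is a sketch of my own (not anything the original commenter wrote) using the classic maximum-subarray problem, which happens to admit exactly an O(N^3) brute-force solution and an O(N log N) divide-and-conquer one:

```python
def max_subarray_brute(a):
    """O(N^3): re-sum every (i, j) slice from scratch."""
    best = a[0]
    n = len(a)
    for i in range(n):
        for j in range(i, n):
            best = max(best, sum(a[i:j + 1]))  # O(N) sum inside O(N^2) loops
    return best

def max_subarray_dc(a, lo=0, hi=None):
    """O(N log N): divide and conquer. The best subarray lies entirely in
    the left half, entirely in the right half, or crosses the midpoint."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:
        return a[lo]
    mid = (lo + hi) // 2
    left = max_subarray_dc(a, lo, mid)
    right = max_subarray_dc(a, mid + 1, hi)
    # Best crossing sum: extend leftward from mid and rightward from mid+1.
    s, best_left = 0, float('-inf')
    for i in range(mid, lo - 1, -1):
        s += a[i]
        best_left = max(best_left, s)
    s, best_right = 0, float('-inf')
    for j in range(mid + 1, hi + 1):
        s += a[j]
        best_right = max(best_right, s)
    return max(left, right, best_left + best_right)

data = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
assert max_subarray_brute(data) == max_subarray_dc(data) == 43
```

Both give the same answer; only the cost differs, which is the point: the customer sees no difference in output, so the choice between them is left entirely to the engineer's professional conscience.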


> […] any ethical objections I may have are meaningless, as the only thing I can do to object is lose all my household income and be replaced by someone more pliant.

How do you know the chosen ones? ‘No greater love hath a man than he lay down his life for his brother.’ Not for millions, not for glory, not for fame. For one person. In the dark, where no one will ever know, or see.

— Babylon 5, Season 2, episode 21, Comes the Inquisitor, 1995


Nice quote, but...

"The Earth belongs in usufruct to the living; the dead have neither powers nor rights over it." --Thomas Jefferson, to James Madison, Sep 6, 1789

For all the influence exerted by people like Mahatma Gandhi and Martin Luther King, Jr., they effected change with their lives, and not their deaths. One should not choose to die for a cause, or against one. Rather, live for your own principles, and teach them to those others who wish to learn. Those who sacrifice themselves, expecting no reward, grow no greater in my eyes. They become memory, and immediately begin to fade, except to the extent that they are renewed by those who still live.

What manner of scoundrel would I be to suggest that another sacrifice for my benefit, that I may treasure the memory of it? What sort of fool would assent? That is the mentality of the beehive, where the workers die to protect their queen. In a society of equals, for anyone to die unnecessarily is a tragedy. For someone to choose to die, it is a horror.

Attributing some nobility to self-sacrifice is an ethic for hierarchies, meant to convince the lesser people, against their own interests, to hurtle headlong into situations where they may be killed. It makes pawns of people who might otherwise be greater. It is not fitting to convince anyone that they are so unworthy that the best way they might serve others is by throwing themselves into fires that need never have been lit.


> For all the influence exerted by people like Mahatma Gandhi and Martin Luther King, Jr., they effected change with their lives, and not their deaths.

I would argue the most important changes they effected were for themselves. You don't risk your health by helping someone who gets attacked to earn their gratitude, but to be able to look in the mirror. That's the only thing that gives enough energy to sustain certain things for years and decades. And Rosa Parks, for example, didn't plan to end segregation; she was sick of putting up with it. Nothing more, nothing less. How great other people are in your eyes does not matter for what value their own acts of moral hygiene have to them, and people don't need to "expect" a reward for such things because the deed itself IS the reward. They already have it. And since you brought up MLK:

I say to you this morning, that if you have never found something so dear and so precious to you that you aren't willing to die for it then you aren't fit to live.

[..]

You may be 38 years old, as I happen to be. And one day, some great opportunity stands before you and calls you to stand up for some great principle, some great issue, some great cause. And you refuse to do it because you are afraid... You refuse to do it because you want to live longer... You're afraid that you will lose your job, or you are afraid that you will be criticized or that you will lose your popularity, or you're afraid someone will stab you, or shoot at you or bomb your house; so you refuse to take the stand.

Well, you may go on and live until you are 90, but you're just as dead at 38 as you would be at 90. And the cessation of breathing in your life is but the belated announcement of an earlier death of the spirit.

http://www.youtube.com/watch?v=pOjpaIO2seY&t=18m26s

> In a society of equals, for anyone to die unnecessarily is a tragedy. For someone to choose to die, it is a horror.

Here's a secret: everybody dies, either way. The only choice you have is how you live. From John J. Chapman's commencement address to the graduating class of Hobart College, 1900:

If you wish to be useful, never take a course that will silence you. Refuse to learn anything that implies collusion, whether it be a clerkship or a curacy, a legal fee or a post in a university. Retain the power of speech no matter what other power you may lose. If you can take this course, and in so far as you take it, you will bless this country. In so far as you depart from this course, you become dampers, mutes, and hooded executioners.

> It is not fitting to convince anyone to believe they are so unworthy that the best way they might serve others is by throwing themselves into fires that need never have been lit.

People who are great don't need to be convinced of anything. People who aren't are impossible to convince. And it's not "fitting" to justify stoking fires because otherwise others would do it, either. Then let those others do it? And hey, for all you know, they all might be doing it because otherwise you would do it.

And who is actually sacrificing? People who aren't sacrificing their ideals and their morals, or people who sacrifice them for some food and a few decades more?


"If you wish to examine a granfalloon, just remove the skin of a toy balloon." — Bokonon (aka Kurt Vonnegut, Cat's Cradle)

Just because I mentioned specific individuals does not mean that I agree with them. I only acknowledge that they produced an effect that propagated beyond their own deaths through the actions of the devotees they acquired while living. I might also have mentioned prophets of various religions, though I may not follow any of them.

Skilled as I am at seeing the fnords, in the MLK address you quoted, under the obvious text, lies this subtext: Is my cause not great enough that you might be willing to die for it? If you are not, and have no greater cause to hold your loyalty, then you are more a walking corpse than a living man, and unworthy of my regard. It is very similar to "Crouch down and lick the hands which feed you. May your chains set lightly upon you, and may posterity forget that ye were our countrymen." It is a recruiting speech. And every time a young black person gets "the talk", it is contradicted. According to MLK, every time black kids submerge their will in a police encounter, and come away from it alive, but humiliated, they will be dead inside until their bodies finally catch up. According to me, they will live long enough to either vote in comprehensive reform or to organize and rebel from a dearth of it.

Nonviolent resistance depends in whole upon the oppressors' general unwillingness to murder nonviolent protesters. Willingness to die only works insofar as the opposition is unwilling to kill. Gandhi's protests worked only because British forces in India were unwilling to massacre Indians wholesale. MLK's protests worked only because the segregationists were unwilling to kill in public, before the typewriters and cameras of nationally-published journalists.

If you are willing to die, and the other is willing to kill you, you would be prudent to arrange your affairs in advance, such that other people are positioned to impose meaningful consequences as a result. Otherwise, you are gifting your enemy with a tiny victory.

If you quit a job in the military-industrial complex for which you have some ethical concerns, such as one which enables dragnet surveillance, what is the meaningful consequence? Every failing of the project in recent months is scapegoated to you. The contractor hires a replacement butt-in-seat. The work goes on. Your sacrifice yields nothing. No one rises in gratitude to pay your bills. When you mention in job interviews that you left due to ethical conflicts with the former employer, you never seem to be a good "cultural fit".

Why then would anyone choose to do that?

I'll take the food and the decades. I won't go willingly to my grave, if doing so wouldn't be more meaningful than what I believe I could accomplish with the entire remainder of my natural life. Sometimes, you can't avoid it, but you should always try to not die as you work towards your goals. Don't fear death, but don't ask it out on romantic dates, either.


I quoted them because I agree with them, not because I think you would.

> According to MLK, every time black kids submerge their will in a police encounter, and come away from it alive, but humiliated, they will be dead inside until their bodies finally catch up. According to me, they will live long enough to either vote in comprehensive reform or to organize and rebel from a dearth of it.

Right, so when does the rebellion come? Why would you ever rebel when "someone will do it anyway", like that is some law of nature? According to you, the hypothetical black kid should snitch on others when threatened with being beaten or arrested, and why wouldn't they -- if they don't snitch, someone else will, and the only difference would be their own life being worse. Leaflet #3 of the White Rose comes to mind: "Do not hide your cowardice under the cloak of cleverness!" And I think we'll have to agree to disagree.

> If you quit a job in the military-industrial complex for which you have some ethical concerns, such as one which enables dragnet surveillance, what is the meaningful consequence?

I already said what it is for me and in my opinion: personal moral hygiene. The consequence is that you are no longer part of that. That is plenty meaningful to me. As Frankenstein said in Death Race (paraphrasing), "You can't save the world, you can maybe save a part of it, yourself". Well, I don't remember the exact quote, but that's how I feel about it. I don't even believe in something like a soul, but still, I would say saving your soul, retaining what little remains of our innocence, is the best anyone can achieve.

And as many found out, death doesn't always immediately follow making a stand. George Carlin found himself entertaining people he didn't like, the establishment, with cute things, and he pivoted. Had a long career, had a family, was heard, never sold out, never compromised. Noam Chomsky also has plenty of haters, and I'm sure plenty who would love to see him hurt, but he is still rocking on.

> When you mention in job interviews that you left due to ethical conflicts with the former employer, you never seem to be a good "cultural fit".

Then either don't mention it, or don't interview for jobs with assholes. Get another job, and help take the assholes down. Do whatever you want, of course, but I don't see the dilemma here. It's not that black and white, i.e. either you go along or you're screwed. Actually, plenty of people get screwed even though they're very obedient and have no flavour and no stance of their own. And as Lily Tomlin said, "The trouble with the rat race is, even if you win, you're still a rat." And you know, I don't quote this to put anyone else down; it's how I feel inside. Man, it's not just a feeling, it's a pretty solid thing. I had a lot of shit broken for me for trying to do the right thing, and had a lot of frustration and sadness for not just "popping soma" and going along, for questioning things. Yet I would not do it differently, given the chance to do it again. I might be smarter or more patient about some things, but in general, I feel I got way more out of it than I lost. It's not just what it does to how I feel inside, it's also what it does to my perception, which is muddled, but less muddled than it would otherwise be. I see and speak with people who made and are making different decisions every day, and I don't envy a single one of them.

> Sometimes, you can't avoid it, but you should always try to not die as you work towards your goals.

Nobody (or hardly anybody) just keels over dead and thinks that advances any cause or does any good. It's usually "doing something or saying something, and then not stopping to do or say it even though others threaten you". You can hardly say "don't fear death" after arguing it's fine to fear quitting a job over ethical concerns, which is so much less than death.


Not everyone agrees with you that the tech sector is contributing to the building of a surveillance society or police state. There are a lot of people who have carefully considered the issue and come to the conclusion that facebook knowing what posts you liked or ad networks knowing which pages your IP address has visited is not a Bad Thing. It's clear that you don't agree and all debate is welcome, but I caution you not to trip in your rush to claim the moral high ground.

I don't think there's any need to rehash the debate here. Simply, I and many others do not believe that any western government is going to use information gathered by tech companies to preempt threats to entrenched interests and the status quo. I've seen the same arguments made here for years, and none of it is convincing.

It's admirable that you are so certain in your beliefs. If you don't like what the tech sector is doing, please by all means continue to advocate. Shout it from the mountain tops, go to work for the EFF. But don't discount people that legitimately disagree with you as being irresponsible. At least some of us have made the effort to understand your point of view. The least you could do is to try to understand ours.


Your whole post is written in bad faith and frankly revolting.

Which sector is building startup after startup for data mining, tracking, and profile building? This is in addition to the already established companies. Then you're trying to downplay the issue to trivial actions such as Facebook likes or tracking of IP addresses, a toy version of the state of the art. Finally, the sarcasm, showing how reasonable you are and putting the OP in a bad light for not being "more understanding".

It's quite simple: the topic of privacy is central to a free society and it's enshrined in the Universal Declaration of Human Rights. In the past, we have seen a rich history of abuses, lies and deceit from huge organizations with massive resources at their disposal. Private or not.

The majority of people go on with their lives without caring, as long as they have their basic needs met. The very few that take a stand, pay the price. Otherwise, some criticism of the behavior of these organizations can be found online, but not much because of:

1) Chilling effects. Funny how I had to think before posting this message, living comfortably in a democratic country, with freedom of thought and freedom of speech.

2) "Helpful people", quick to jump to the defense of said organizations, explaining away abuses, making up excuses, muddying the waters, asking for fairness and understanding their point of view.

So thanks for keeping the balance karmacondon. They might have mountains of money, lawyers, shills, PR people and most resources imaginable really, BUT we wouldn't want to unfairly hurt their feelings. I do apologize for that.


> Which sector is building startup after startup for data mining, tracking, building profiles?

You talk about it like it's necessarily a bad thing, by default, for everyone. Why?


Most people are not persons of interest and nothing particularly bad will happen to them if various entities have access to their private info. Still, they might have their identity stolen, get scammed (e.g. Dell) or pranked (e.g. swatting, disconnecting utilities) or have their house broken into if they have bad luck. They might pay a premium on insurance for having the wrong friends on FB or get fired for holding certain opinions. Might get mobbed by the internet, get harassed by salesmen or silly ads for herpes.

Let's assume for the sake of argument that the above events are unlikely though. When a few actors have access to the information of tens of thousands to billions of people, this has an impact on a societal level. As jaquesm said, information is power, and when one has so much information and lots of money to boot, they can begin to covertly influence policy and behavior and harass and marginalize their opponents. And they can do that directly, or by using the information of a third party, like a doctor, lawyer, religious leader, or even someone insignificant who happens to be a relative, etc. Moreover, companies can be sold, together with their databases, they can be forced to hand them over or they can be hacked. A treasure trove of data held by an otherwise principled company might end up in the hands of an unsavory party.

Why is this a bad thing? History has shown again and again how such imbalances of power are abused. Here's a rather harmless example of data mining a mobile device + social network combined with social engineering to scam people out of money: http://toucharcade.com/2015/09/16/we-own-you-confessions-of-... If a game producer can do this, what are the pros doing?


It's: information on billions + authority with a huge and complex set of laws which are selectively applied = problem.

Re: regulation of software engineers, it's impossible. For any piece of software, its PURPOSE and AUTHORS are matters of subjective interpretation. It is much, much harder to get common consensus on whether the software is surveillance, malware, etc. So any regulation would do nothing but add to the already-so-complex-and-huge set of laws.


OK, so in your comment you reviewed only the bad possible outcomes of some thing X, and came to a conclusion that thing X is bad.

Don't you see any logical problems with this line of reasoning?


The good outcomes should be achievable without the bad side effects of the implementation (centralization and surveillance) that's being criticized here, at least as far as technology is concerned.

There is only one positive outcome of concentrations of power, and that is efficiency in execution. Which is extremely scary when combined with huge power.

This is really just the democracy discussion with different terms. It is well known that dictatorships are much more efficient at executing their plans. The inefficiency we voluntarily introduce when establishing and maintaining a democracy (and if you have ever been involved in democratic decision-making, the inefficiency can be really frustrating) is the price we pay to insure us against the efficient abuse of power as we have witnessed it countless times in human history.


Because information is power and power tends to be abused over the longer term, all fig-leaves about 'improving the world' to the contrary.


The right to bear arms is partly about the people having the same powers or greater, collectively, than the government.

In the modern era it is information asymmetry that we should worry about. How to prevent such a thing pragmatically is tricky.


> The right to bear arms is partly about the people having the same powers or greater, collectively, than the government.

This only works in the US, and even there I have no illusions at all about the ability of a present-day militia to fight off a trained army; it's a pacifier for overgrown toddlers. The only people that have to fear from citizens with guns are other citizens (with or without guns); the military would have absolutely no problem whatsoever dispatching those if it was decided that their lives and the resulting PR fall-out are less important than whatever objectives they were given.

> In the modern era it is information asymmetry that we should worry about.

Note that there are always provisions in the law to protect the lawmakers from having the laws applied to them.

> How to prevent such a thing pragmatically is tricky.

I think it can't be done unless you simply outlaw it wholesale and are prepared to follow up on it. And from a practical point of view this is now a rear-guard action, fall-back bit by bit and try to push back the point in time where we will have to conclude the battle was lost. This is not a problem that will simply go away, it has already gone way too far for that.


> the military would have absolutely no problem whatsoever dispatching those

I'm less pessimistic about that. I'm a big fan of gun control laws but I also think that the one positive thing that has come from the ongoing middle-east conflicts is that a determined militia can be genuinely problematic.

> Note that there are always provisions in the law to protect the lawmakers from having the laws applied to them.

To my original point about asymmetry, this is what we should be devoting our energy fighting.

> simply outlaw it wholesale

Outlaw what wholesale? I'm personally of the opinion that the long term end state will fall more on the side of honesty (combined with increased acceptance) than secrecy.


> what

Any kind of abuse of power. The penalties for that should be severe. It's one of the few cases where I think that the penal system should be used as a means of discouragement rather than as one of education and rehabilitation.


> Because information is power and power tends to be abused over the longer term, all fig-leaves about 'improving the world' to the contrary.

This sounds more like an uncompromising proclamation than a thorough analysis.


It's simply an observation made over history, it's not a proclamation and there is no analysis involved. Anybody that has been following the applications of information technology from the earliest of times would most likely come to that same conclusion.

The ancients had it as 'power corrupts'; the abuses are plentiful, and it is very well known and advertised that every company that engages in these practices (and the government agencies as well) does this to ostensibly make our lives easier or keep us 'safe'. If you have evidence to the contrary feel free to share it, but that's where we currently stand.


> The ancients had it as 'power corrupts'

Well, then logical thing would be not to give anyone any power, ever.

My point is, if you take general principles and blindly apply them with "no analysis involved", you're likely to get to a pretty ridiculous state.


You can take general principles and apply them with analysis; it does not take much in terms of analysis to extract a useful lesson from history, because the analysis has already been done for you.

Just like any other tool such insights can be (and are) abused but it need not be like that.

The conclusion to reach is not to give anyone any power ever, clearly that's not feasible. The conclusion you're supposed to reach is that you can give power to people but you'll need oversight in place. Effectively you'll end up with checks and balances, pretty much the way most governments are set up.

And what history tells us - again - is that this isn't always sufficient to prevent abuses and our newspapers and other media seem to tell us that our current set of checks and balances have outlived their usefulness in the information age.

This flows from 'power corrupts' because it appears that those placed in power have - surprise - again abused their privileges.

Think of it as a warning beamed down from historical times to our present day that does not need more embellishment and is all the more powerful for its brevity, it is something so inherent in human nature that we need to be vigilant of it at all times, no matter who we end up placing trust in.


> Simply, I and many others do not believe that any western government is going to use information gathered by tech companies to preempt threats to entrenched interests and the status quo.

It's simply hard to take your stance as one made in good faith.

The US government has a long history of using its national police, the FBI, to infiltrate and subvert domestic political movements that the powers that be found unpleasant -- including using their police powers against modern groups such as the Occupy movement.

Further, we know that the US government has used records held by tech companies to create massive cross-referenced databases of people, including domestic activities. The recent leaks about surveillance programs has made that abundantly clear.

Your position is literally that an organization with a history of doing this kind of activity won't use the technology we already know the government possesses to keep doing the same thing.

So I think there is a need for you to rehash the debate here, because it's not clear how you sincerely hold that position.

Because rather than a rational view, what you describe sounds like irrational denial.


> Because rather than a rational view, what you describe sounds like irrational denial.

The people that I know in real life that hold views like these are best described as scared, rather than ignorant. They feel that the price they pay is a small one as long as it gives them an un-specified increase in perceived security in return.

Fear is a very powerful tool when it comes to getting people to choose against their self-interest.


You're saying, "We should not trust the government because they did things that I didn't agree with in the past". This seems like an unfair standard to hold any person or group of people to. I would be unhappy if people said "I think that karmacondon has made mistakes in the past, so he shouldn't be trusted to do his job ever again."

I understand what you're saying, and I think I get where you're coming from. But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.

I don't think there's anything inherently wrong with the government monitoring potentially criminal groups or building databases. That's what we pay them to do. If they get out of hand then we, the people, will deal with it.


> You're saying, "We should not trust the government because they did things that I didn't agree with in the past". This seems like an unfair standard to hold any person or group of people to.

A better paraphrase would be "We should suspect that the US government will act in a way similar to how it has acted repeatedly over the span of decades."

I think this is a perfectly fair standard, and actually am held to that standard all the time, including professionally. If I had a continual, systemic habit of flaws in my work, for instance, I would be fired.

Your phrasing suggests that these are things that "just happen", instead of a pattern of decades of intentional programs with the same kinds of aims and behaviors.

> But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.

I actually think you're insincere because you're minimizing and denying a pattern of sustained behavior as a few mistakes, rather than an intentional, continuing program.

That insincerity can be directly seen above when you switch from "did things I didn't agree with" to "made mistakes". No one is talking about the US government making mistakes, and decades of intentional programs operated with similar strategies is hardly "making mistakes".

Your entire analogy was insincere and meant to elicit an emotional response.

> If they get out of hand then we, the people, will deal with it.

Will we?

I'm actually very skeptical that we'll deal with it in any meaningful way, and find it much more likely that we'll surrender a great deal of control over the country to an autocratic government with a good social control program, precisely because people like you don't want to sincerely discuss the likelihood of that happening by stages.


> I would be unhappy if people said "I think that karmacondon has made mistakes in the past, so he shouldn't be trusted to do his job ever again."

It's not about mistakes. Mistakes are - usually - a sign that someone needed to learn. They do not as a rule include wanton intent.

And if a person were to make too many mistakes then they probably should not be trusted.

> I understand what you're saying, and I think I get where you're coming from. But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.

No, that's the opposite. You have beliefs that you state are so correct that they stand on their own, in spite of a bunch of historical evidence to the contrary, starting roughly at the time that we invented writing going all the way into the present. That's a pretty gullible position.

> I don't think there's anything inherently wrong with the government monitoring potentially criminal groups or building databases. That's what we pay them to do. If they get out of hand then we, the people, will deal with it.

Potentially criminal groups: everybody.

You're apparently one of the people where the 'fear' button has been pressed, don't let your fear get the better of you.

Btw, I note that you write all these 'reasonable disagreement' things from the position of an anonymous coward which makes me think that maybe you do realize the value of your privacy after all.


> Potentially criminal groups: everybody.

While this may be true, certain crimes are seen as worse than others. And, as un-PC as it may sound, certain demographics are many times more likely to commit certain crimes.

Homicide: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6227a1.htm

Also, some government monitoring can be "for your own good":

Youth Risk Behavior Surveillance: http://www.cdc.gov/mmwr/preview/mmwrhtml/ss5704a1.htm

But, maybe the CDC is different than the NSA.


Government monitoring "for your own good" is pretty scary. We already have situations where people are attacked by the government or its agents because they did something that only harms themselves. For example [1]. Mass surveillance, if left unchecked, will eventually expand for whatever purposes the government wishes. Power is easy to incrementally grow (or in the case of the NSA, they simply ignore the laws) and very difficult to shrink again. We shouldn't think that this wouldn't be used against us sometime in the future and who can truly say that they never did anything harmless-but-illegal (take drugs? gambling? copyright infringement?) and as [1] shows, people have died for these "crimes".

[1] https://www.google.ie/search?q=sal+colusi&oq=sal+colusi&aqs=...


"Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience." -- C.S. Lewis

That's one of my favorite author quotes. The greatest evil in this world is done by those who can see their own work and tell themselves that it is good.


That is a fantastic quote. Thanks for posting it.


We both made the mistake of discussing the US government like it's a single entity. We're talking about hundreds or thousands of individuals spanning multiple generations. I'm not going to worry about government metadata collection because of something that happened during the Eisenhower administration. Each person and group of people should be evaluated based on their own behavior and merits, not the reputation of the organization that they are affiliated with.

It looks to me like the US agencies, and the Five Eyes in general, are capable people who are just doing their jobs. They aren't bothering me and I'm not bothering them. The past actions of the US government or hypothetical scenarios based on historical examples just aren't very convincing. Anything could happen. But I'm not going to concern myself with it until I see some evidence.


> Anything could happen. But I'm not going to concern myself with it until I see some evidence.

So, you are drinking battery acid until you see evidence that it's not good for you?

Or do you maybe take the evidence of other people's experience into account?

If so, how about you take into account the evidence of hundreds of societies that have dealt with massive surveillance (where "massive" still was "almost none" in comparison to today's and tomorrow's technical possibilities) and with oppression (those two empirically tend to go hand in hand).

If those are your sincere beliefs, I really would recommend you pick up a few books about recent German history. How Hitler came to power, how the state functioned once he was in power, how people tried to get rid of him but failed, and what it took to finally remove him. And then continue with the history of the GDR, how surveillance by the Stasi influenced everyday life, how people tried to reform the political system but failed, and what it took to finally reunite Germany.

The history of other countries might teach you similar things, but Germany is a good example because it is culturally a rather "western country", so it's easier to recognize similarities.


Don't bring up the Eisenhower administration, it isn't relevant.

It's also quite foolish to try to evaluate people in a vacuum... would you extend the same privilege to a member of a criminal gang or jihadi group? No.

https://en.wikipedia.org/wiki/Joint_Threat_Research_Intellig...

https://theintercept.com/2014/02/24/jtrig-manipulation/

"Campaigns operated by JTRIG have broadly fallen into two categories; cyber attacks and propaganda efforts. The propaganda efforts (named "Online Covert Action"[4]) utilize "mass messaging" and the “pushing [of] stories” via the medium of Twitter, Flickr, Facebook and YouTube.[2] Online “false flag” operations are also used by JTRIG against targets.[2] JTRIG have also changed photographs on social media sites, as well as emailing and texting work colleagues and neighbours with "unsavory information" about the targeted individual.[2]"

https://en.wikipedia.org/wiki/COINTELPRO

https://en.wikipedia.org/wiki/SEXINT

https://en.wikipedia.org/wiki/Optic_Nerve_%28GCHQ%29

https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%...

There's your evidence-- it's been here all along. These programs are targeted at US citizens, some with the explicit aim of discrediting them, blackmailing them, or propagandizing them. These are not the actions of a friendly nanny state but rather a malevolent surveillance state.


> We both made the mistake of discussing the US government like it's a single entity.

I'm not making that mistake. I fully realize that the US government is comprised of many arms that even though some of those arms might have our collective best interests at heart this may not be the case for all of it.

> We're talking about hundreds or thousands of individuals spanning multiple generations.

So what. That only increases the chances of abuse, it does not diminish them at all. Just like in Nazi Germany there were plenty of people still fighting the good fight and at the same time employed by government. No government will ever be 100% rotten. But it does not have to be like that to do damage.

> I'm not going to worry about government metadata collection because of something that happened during the Eisenhower administration.

Because, let me guess that was too long ago and now it's different?

> Each person and group of people should be evaluated based on their own behavior and merits, not the reputation of the organization that they are affiliated with.

This is where you're flat-out wrong. Governments (and big corporations) have a life-span much longer than that of the individuals that are making it up, and as such we should look at them as entities rather than as collections of individuals.

If you'd be right then North Korea would not exist today as we know it (and neither would China, Iran and a bunch of other countries). The way these things work is that the general course will be slightly affected by the individuals but the momentum in the whole machinery is enormous. Think of it as a cable in which individual strands are replaced but the identity and purpose of the cable remains. Eventually you have a completely new cable and yet, nothing has changed. And in this case the entity has a huge influence on which parts of it will be replaced by who.

> It looks to me like the US agencies, and the Five Eyes in general, are capable people who are just doing their jobs.

That's a very very scary thing to say. "Just doing my job" has been used time and again historically to distance oneself from the responsibility taken when performing certain actions. Just doing your job is not the standard that needs to be met.

> They aren't bothering me and I'm not bothering them.

And most likely they never will.

"The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing"

> The past actions of the US government or hypothetical scenarios based on historical examples just aren't very convincing.

Of course they aren't. After all, it's not you that is personally inconvenienced in any way.

> Anything could happen. But I'm not going to concern myself with it until I see some evidence.

And none that will convince you will ever come. Because if it did it would be too late for you to change your stance anyway.


> You're saying, "We should not trust the government because they did things that I didn't agree with in the past". This seems like an unfair standard to hold any person or group of people to. I would be unhappy if people said "I think that karmacondon has made mistakes in the past, so he shouldn't be trusted to do his job ever again."

Yes, you would be unhappy. But this is not about whether you are unhappy, but whether you should have control over military, police, our tax money, and thus everyone's lives.

It simply is a very well established fact that concentrations of power are extremely dangerous, and that they are extremely hard to break up once you recognize they are heading in the wrong direction. Just look at what the problem is in countries where people are doing badly, both historically and right now, and why things are so extremely hard to improve once they have gone bad. Which is why we have built structures that try to prevent such concentrations of power from forming. That is essentially the whole point of democracy and the separation of powers: To build distrust into the system. Dictatorships are the opposite of that (only one power, and no mechanism to remove the person in office). Yes, democratically elected officials certainly are unhappy when they are voted out - but that is the price we pay to prevent concentrations of power from forming.

And surveillance is undermining democratic decision-making. Having a democracy now does not guarantee you a democracy tomorrow if you aren't careful in who and what you vote for.

> If they get out of hand then we, the people, will deal with it.

Yes, "we" will. If history can teach us something, we can expect that it will take about a decade at least, with many unhappy lives, maybe millions of deaths, until foreign military gets into it to "deal with it".

Sure, maybe that won't happen. But given the prospects, wouldn't it be wise to use our experience from history, to try and make predictions where things will lead, and to then try and prevent things from happening in the first place?

You are aware, for example, that Hitler was democratically elected into office, and all his powers were given to him democratically? And you are aware what it took to remove him from office afterwards?


Not trusting the government because they have a perpetually anti-freedom mindset is completely fair. Are we supposed to take their every action as piecemeal and then be constantly surprised when they do the wrong thing?

They don't just monitor criminals-- that's why the anti-surveillance folks are anti surveillance! They monitor everyone, and create criminals as needed, and nobody can question them for fear of ending up on the chopping block.

They are currently very far out of hand, and "we the people" are doing somewhere between jack and shit because of how little the people understand the problem.


What about Palantir, then?

Very hard to suggest they aren't supporting the police state.

It's unquestionable that the tech sector is directly culpable for supporting the cops and the politicians to spy on us... to affirm otherwise is counterfactual. The moral high ground belongs to the people who don't collaborate with those who would rather have us dumb and controlled.

It's pretty hard to respect the pro-surveillance view because it seems flatly head-in-sand ignorant of reality time and time again. We have evidence of surveillance state wrongdoing in hand, and no successes to point to while simultaneously experiencing multiple terror attacks, and yet the pro-surveillance types are steadfast in their position, as though it's a religion.

The Snowden files showed us explicitly that disrupting political groups is actually done via GCHQ! This is very far from protecting the citizens, and is instead stifling them purposefully.

https://en.wikipedia.org/wiki/Joint_Threat_Research_Intellig...

I am actively discounting the opinion of people that do not understand this realized, currently unfolding threat to our democracy. An informed opinion doesn't sound like one passed via the government through the media.


> But don't discount people that legitimately disagree with you as being irresponsible.

Why not? You may disagree, that doesn't mean you can't be flat-out wrong. Having an opinion does not automatically give that opinion equal weight when history has proven to us again and again that that particular opinion ends up with making society either dangerous or at a minimum uncomfortable.

I'm sure there were border guards in former East Germany that were entirely convinced that their state was the greatest and that's why they had to keep people in at all costs, including shooting them if they persisted in believing otherwise and tried to simply leave. After all, that was best for them. But that particular opinion turned out to be very wrong in the long term.

People can rationalize the most absurd stuff to themselves and to others, especially when their pay-check depends on it, but that's not a requirement.

All those that try to pretend that there is some kind of 'reasonable disagreement' possible about the erosion of privacy, and that directly and indirectly help to rush in the surveillance state, have quite possibly not thought as carefully, or considered these things with the degree of gravity required, as they claim they have. Having a mortgage to pay may factor in there somewhere too.

Usually this is a combination of being too young, too optimistic and in general living too sheltered a life to know what can happen to you, your family and your friends when the tide turns. And the tide always turns, nothing is forever.

> Simply, I and many others do not believe that any western government is going to use information gathered by tech companies to preempt threats to entrenched interests and the status quo.

I hope you're right but history is not on your side in this case.

> I've seen the same arguments made here for years, and none of it is convincing.

Yes, it isn't going to convince you any more than that border guard would be convinced that his job is a net negative to society. Every stream, no matter how reprehensible, will always have its fans and cheerleaders. And later on they will never remember that they had agency all along and were perfectly capable of making a different decision. Responsibility is weird that way.

> It's admirable that you are so certain in your beliefs.

It is not admirable that you are so certain in yours. May I suggest a couple of talks with some holocaust survivors to get a better feel for what the true power of information can get you?

Or maybe the family members of some people that were killed while trying to flee the former SovBlock?

Or maybe some first generation immigrants to the US or Canada or wherever you live to give you some eye witness accounts on what it was like to live in those countries before the wall fell down?

'It can't happen here' is an extremely naive point of view.

http://jacquesmattheij.com/if-you-have-nothing-to-hide

Agreed with your advocacy advice.

> The least you could do is to try to understand ours.

That's 'mine' not 'ours', you speak for yourself.


The problem with drawing historical parallels is that they never apply exactly. Saying "A thing happened in the past" can be instructive, especially to people who didn't even realize that the thing was possible. But what's much more practically useful is to say "I think what happened before will happen again in the current context and for these reasons". An example from the past is only useful if it can be tangibly connected to the current situation, right now in the present.

I can't think of a case where a stable and mature democratic bureaucracy has ever used surveillance to influence the majority of its populace. Germany in the early 20th century was a very unstable government in a bad economic situation. Soviet East Germany was communist, which isn't quite the kind of democracy that I meant. It's true that any government could turn bad, in the same way that anything is possible. But there's very little evidence for that in the current context.

So my position is this: Given that I live in the United States in 2016, I'm not worried about the government randomly deciding to screw with me by looking at my electronic communications and acting on them. It just doesn't make sense. I'm not significant relative to the scale of the US government, the government itself just doesn't work that way and all of the negative scenarios I've heard seem to be very contrived.

If you really think that it's possible that the government of a modern western nation could turn into communist East Germany, then it seems like your problem might be with governance, not privacy. If it's possible for the government to go all Walter White and just turn evil overnight, then no amount of personal privacy is going to save any of us. And until it seems like that's a thing that's actually possible, I'm going to make practical decisions about my own privacy.


What stable and mature democratic bureaucracy would start a war of aggression, or a bunch? Of course you're not worried, you don't have your limbs torn off for geostrategic horse shit, and you're making no moves to put a stop to it either. If all you care about is your own well being, then why should I care about your opinion? Why should I care that you don't care? I know there are plenty of people who don't care, or we wouldn't be in this pit to begin with.


Let me ask you one question: What would be circumstances that would change your mind? As in: What would need to happen to make you think that surveillance is going too far?

So far, it seems pretty much like your belief that surveillance is not a problem for you is unfalsifiable, that you will believe that it is a problem for you only when the secret police is actually coming for you or maybe your family.


Something bad would have to happen to random citizens as the result of government surveillance. Something like "Private Citizen X criticized the government and embarrassing information about his life was revealed as a consequence."

There are lots of bad things that the government could do. But it just hasn't happened. They've had mass surveillance technology in place for over a decade now. The world hasn't fallen apart, Hitler hasn't risen from the dead and everything is pretty much the same as it was before.

I guess we can check back in another ten years to see if your apocalyptic visions have come to pass yet.


> Something bad would have to happen to random citizens as the result of government surveillance.

Define 'random'...

http://www.huffingtonpost.com/peter-van-buren/parallel-const...

http://www.reuters.com/article/us-dea-sod-idUSBRE97409R20130...

> Something like "Private Citizen X criticized the government and embarrassing information about his life was revealed as a consequence."

https://theintercept.com/2014/02/18/snowden-docs-reveal-cove...

You mean like that?

> There are lots of bad things that the government could do.

Does, not could do.

> But it just hasn't happened.

It happens, but it just does not manage to cross your threshold for worry because you personally are not inconvenienced.

> They've had mass surveillance technology in place for over a decade now.

For longer than that, and it has been abused for longer than that too.

> The world hasn't fallen apart

It will not 'fall apart' because of this. But it will change because of this, and not for the better.

> Hitler hasn't risen from the dead and everything is pretty much the same as it was before.

Yes, we still have willfully blind people who would require things to get so bad that they could no longer avert their eyes before considering that maybe things have gone too far. But by then things would indeed have gone too far.

> I guess we can check back in another ten years to see if your apocalyptic visions have come to pass yet.

It will never be a moment in time; we will simply keep creeping up on it, just like the frog in the pot of water.

What fascinates me is that there are obviously reasonably intelligent people who manage to actually see the pot, the stove and all that it implies, and still tell the other frogs to jump in, the water is fine.


What is your definition of a "private citizen"? Does it, say, exclude anyone who the government (or parts of it) would consider worthy of retaliation by definition? If one has enough of an audience to embarrass the government, it might just so happen that one doesn't fall under your definition of "private citizen" anymore?

(Also: "random citizen" or "private citizen"? A citizen who criticizes the government is barely a "random citizen".)


> The problem with drawing historical parallels is that they never apply exactly.

They don't have to. History only happens once; if you refuse to learn from it because it is not an exact repetition of the past, then you can never make any progress.

> Saying "A thing happened in the past" can be instructive, especially to people who didn't even realize that the thing was possible.

You seem to think it isn't possible because of some "insert magical reason why everything is different now", not because there is any concrete reason it can't happen again. That's an impossible position to argue with. All the weight of history would not be able to sway you from that position because nothing can counter magic.

> But what's much more practically useful is to say "I think what happened before will happen again in the current context and for these reasons". An example from the past is only useful if it can be tangibly connected to the current situation, right now in the present.

No practical examples will counter your magic. You will either say 'that's not the same exactly' or 'that's too long ago to be relevant' and so on.

The only thing that will convince you is when you're lifted out of your bed at 3 am and we never hear from you again. By then it will be a bit too late, but you too will be a believer in government abuse if and when that happens.

Until then you're going to head straight for the last stanza of Martin Niemoeller's most quoted lines. The vast majority of the people living in the former DDR were never lifted from their beds at 3am for interrogation. To them life was just a-ok.

> I can't think of a case where stable and mature democratic bureaucracy has ever used surveillance to influence the majority of its populace.

That's not a bad thing per se. Meanwhile, you're trying hard to change that number from '0' into '1' by allowing the present level of abuse to spread unfettered, which invariably leads to an escalation. Each and every click that you hear is one of a ratchet; it will not voluntarily click back again, it can only go forward, until on that scale between '0' and 'police state' you've gotten close enough to 'police state' that there is no relevant difference.

'It can't happen here' is a very dangerous line of thought. See the movie 'The Wave' for some more poignant illustrations of how that thought is a dangerous thing all by itself. It can happen here, it might happen here, and it likely will happen here unless we're vigilant.

> Germany in the early 20th century was a very unstable government in a bad economic situation.

ok

> Soviet East Germany was communist, which isn't quite the kind of democratic that I meant.

Yes, and like that there will always be one last thing that is not quite the same which will allow you to look the other way.

> It's true that any government could turn bad, in the same way that anything is possible.

I would consider that progress, hold that thought.

> But there's very little evidence for that in the current context.

That depends on where you are looking. There is plenty of evidence that pressure is being applied, but the pressure is applied subtly enough and in places far enough away from the focal points where change is effected that you'd be hard pressed to connect the dots. That's the beauty of having a lot of information at your disposal.

A nice example is the Iraq war: the run-up to it saw massive worldwide resistance in the populations of the countries of the 'coalition of the willing', which was later described as the coalition of 'the gullible, the bribed and the coerced'.

> So my position is this: Given that I live in the United States in 2016

The United States does not hold a privileged position in the world, and it does not matter whether it is 2016, 1938 or 1912. Everybody living in the past in places where these experiments went wrong could have written "given that I live in X in Y" and they'd have been accurate about that.

> I'm not worried about the government randomly deciding to screw with me by looking at my electronic communications

They might have substituted 'electronic' with 'written'.

> and acting on them. It just doesn't make sense. I'm not significant relative to the scale of the US government, the government itself just doesn't work that way and all of the negative scenarios I've heard seem to be very contrived.

Again, they would not have written 'US government' but whatever government they lived under. And they would have been dead wrong; in some cases, when the fog lifted, they'd have simply been dead.

What seems contrived to you, living in a country that has never seen actual war on its own soil (sorry, your civil war does not count), that exports war on an ongoing basis, that uses IT to kill people by remote control, that used telephone taps, burglary and threats to affect the inner workings of its own government, to me seems to be willful blindness.

For some reason it is more convenient for you to re-write all of history up to and including the present rather than to see that maybe your government is not all that benign, neither on the world stage (where they are a bit more overt about their intent) nor internally (where they are out of necessity a lot more cautious). Have the Snowden revelations really not managed to at least peg your evidence meter, to suggest that maybe not all is as it should be? That your constitutional rights were trampled and that the protections afforded you appeared to be of no value whatsoever?

> If you really think that it's possible that the government of a modern western nation could turn into communist East Germany, then it seems like your problem might be with governance, not privacy.

No, I think that we may be reaching a stage where influence can be wielded subtly enough that someone like you could convince themselves that there is none of it at all. And that's the true prize, to wield that power but in such a way that it can be applied selectively enough that as long as the bread is on the table and the games keep going nobody will notice how rotten the core has become.

> If it's possible for the government to go all Walter White and just turn evil overnight, then no amount of personal privacy is going to save any of us.

It will never be that overt. It will be more along the lines of parallel construction and other nice little legal tricks such as selective enforcement. Never enough for you to cross that threshold.

> And until it seems like that's a thing that's actually possible, I'm going to make practical decisions about my own privacy.

You're more than free to do that. Unfortunately, those of us living outside of your beautiful country don't even get a vote there. Your personal well-being trumps the rights of everybody who is not you, and like that we race ahead down the hole.


The argument between the two of you is baffling. It is incontrovertible that the U.S. is using electronic data gathered right now by itself and by private technology companies to screw with people. Literally billions of dollars will be spent on that purpose this year alone. There are several government agencies devoted to doing it. There are also the government agencies of a dozen or two other countries which the U.S. government agencies work with and share data with to a greater or lesser extent. Literally thousands of newspaper articles have been written about this.

The U.S. government has several orders of magnitude more information about the private lives and communications and beliefs and activities of its citizens than East Germany ever had. This is also incontrovertible and undeniable.

How can either of you talk about abuses that happened in the past as if those were the only abuses? Why would you need to?


> It is incontrovertible that the U.S. is using electronic data gathered right now by itself and by private technology companies to screw with people.

Yes, it is. But these are not the people that the argument is about and that's precisely the problem here. They don't feel that it concerns them at all, it is always others who need to worry about what is done with that data, they have nothing to hide and absolutely nothing to fear.

> Literally billions of dollars will be spent on that purpose this year alone.

Tens to hundreds of billions of dollars.

> There are also the government agencies of a dozen or two other countries which the U.S. government agencies work with and share data with to a greater or lesser extent.

Yes.

> Literally thousands of newspaper articles have been written about this.

Indeed. But since this has not yet resulted in mass arrests on US soil, this evidence amounts to nothing in the eyes of those who see it as a 'good thing': these people are keeping us all safe and are merely doing their jobs. Incredible to you, to me and lots of others, but still that's a position that quite a few people hold, and not much that you say or do will dissuade them from that point of view.

So, I don't need to use the past as a reference. But it is strange to see a person refuse to learn from history and apply its lessons to today's environment. I'm working on a second part of that blog post about 'if you've got nothing to hide' that concentrates on the present (I think the past has been dealt with), but I still feel that those are such enormously important reminders that they serve as a good backgrounder for why all this stuff matters.

So this is a simple choice grounded in the 'those that refuse to learn from history are bound to repeat it' line.

> The U.S. government has several orders of magnitude more information about the private lives and communications and beliefs and activities of its citizens than East Germany ever had.

This is true. But mere possession is not enough to sway a die-hard denier of danger and supporter of the surveillance state. All that data, by their reckoning, is in good hands; it is there merely to protect them from unseen dangers.

Obviously I disagree strongly with that position, but that's probably because (1) I've lived for a bit in a country that was a police state by most definitions and (2) I've seen how the various layers of that society dealt with this: the majority were just like karmacondon here, only a very small minority dared to take a stance, and the rest saw the whole thing as essentially beneficial, which in retrospect may seem very hard to understand. In fact, even today there are still those who yearn for the communist days, when life was orderly, everybody had a job and everybody had a pension waiting for them at the end of the line.


The question is, 'not a bad thing' for whom? That phrase comes across as a bit of doublespeak. Can self-serving advocacy by those who financially benefit from surveillance be termed a 'debate', when they are the only ones who make that point? I don't know of anyone clamoring for surveillance because 'it's a good thing'. Is it a social good?

Protection from the ability of power or authority to lock you up, take your property or, worse, your life comes from the rule of law and due process. Having a debate on the rule of law or due process is similar to having a debate on privacy or a surveillance state: the consequences are negative for the individual and for society as a whole. They may benefit some stakeholders in the short term, who will of course advocate for them, but on the whole it's not a social good.

The only thing we have to come to this conclusion is history, a wide body of knowledge and reason.

We can thus say with some degree of confidence that a society without the rule of law or due process is not a good thing, just as a society under surveillance is not a good thing. We don’t use the ‘moral high ground’ but reason and historical experience to reach these conclusions. This is not a moral issue but a practical one with consequences for our societies. The ethical issue is the social good for the people who build these systems.

Since we are discussing the social good, the alternative view needs to be backed by reasons for how surveillance can be good for society as a whole. Naive presumptions that people are good and will not abuse the power, or arguments that knowing the details of everyone’s activities may benefit an individual or company, do not address that; even where true, they say nothing about the social good.

And the only thing we use in these discussions is reason, let's not make it personal.


I'm more worried about a lack of government transparency than I am about personal privacy.


¿Por qué no los dos?

I'm worried about both, and I can't say which I'm worried about more. What are the reasons you are concerned about one more than the other?


Governments are exceptionally powerful. In theory they have control of 50% of my income, as well as the police, legal system, and military. Genuinely corrupt government is one of the scariest things I can imagine.

If the government is not corrupt, I have optimism in being able to get through a personal attack.


Privacy is similar to property rights in this regard. Facebook is like your backyard or even your home for many; you don't mind letting people you invite in see the pictures on the mantle, the wall, or whatnot. You would mind if they started rummaging through the house though.

So in regards to privacy we treat it like property. Governments which don't support good property rights are not going to care one whit about privacy and those who come after privacy will eventually tread on property rights.

People need to understand it as something to be protected just as you would your physical stuff and work towards having it treated similarly in government.


I left the intelligence community because of exactly this. Took a 40% pay cut (probably more like 50 if you count benefits) but I don't regret it. I couldn't in good conscience believe in freedom and privacy while working for a government actively working against those things.


Great words.

But I think it's already too late. The genie is out of the bottle and it's already doing its darker things in many parts of the world.

One of the great things about technology is that it knows no boundaries. But that is also its biggest danger. Because powerful technology in the hands of weak people can lead to disaster.

Intelligence is the ability to create something from nothing, while Wisdom is the ability to choose (wisely) how to apply intelligence to reality - what to create and what not to create. If intelligence is the engine, then wisdom is the driver.

We have trained lots of engines but very few drivers. And that is what worries me most.


> We expect professionals to behave ethically. Doctors and companies working on genetics and cloning for instance are expected to behave ethically and have constraints placed on their work.

Yes, I believe we should have a "Hippocratic Oath" ([1]) for technology workers.

[1] https://en.wikipedia.org/wiki/Hippocratic_Oath


Second this. The Dell customer base "hack" we heard about yesterday is an example of this. It was not as if some extraordinarily unethical programmers hacked their database; rather, inside technicians who were supposed to guard that data turned out to be unethical. It's high time we give the same importance to the ethics of tech personnel that we extend to doctors, lawyers, etc.


Perhaps large corporations like Dell should face charges for leaking personal data.


The issue with the 'nothing to hide' argument is that it puts the burden of proof and determination of whether something is 'hide-worthy' on the target of the inquiry.

If you subscribe to the 'presumed innocent' premise of the law (https://en.wikipedia.org/wiki/Presumption_of_innocence) then the burden of proof is on the inquisitor.

Either you believe in presumed innocence or you don't. Pick one.


My response to the 'nothing to hide' argument is: "Why are you so scared of things that are hidden?"

That puts them on the defensive, since now they have to justify their fear of the unknown. And then we're onto the real topic: fear.


Someone who pushes the "nothing to hide" argument probably wouldn't understand the implication of your response. I suggest a different approach: say "Okay, in that case what prescriptions are you taking, how much money do you have in your bank account, and how often do you have sex? After all, you have nothing to hide right?"

That also puts them on the defensive side of the argument.


Are those really sensitive questions for most people?


For some people, no. I tried to cover several different bases, but you get the idea: ask sensitive personal questions. Maybe the money one should be about credit card debt.


Privacy is less important than the ability to trust that your government won't do blatantly illegal things like putting innocent people behind bars or stealing their money or property when they actually know those people are innocent. The biggest problem with America is that the government cannot be trusted to follow the rules, their own rules.

I don't live in the US, but the stuff the government, whether local, state, or federal, gets away with is very scary to me. What scares me even more is how the United States encroaches on everyone else's legal system. That's the underlying problem. Governments that are actually out to get people, at times without much cause, breaking all sorts of rules along the way, that's what's scary.

The type of soft totalitarianism that exists and passes as commonplace is very scary. And that's really the people you should be scared of, and that's who you really want to protect your information from. Your run-of-the-mill government that's actually trying to do a good job and not break its own rules, a government like my government, scares me a lot less. Despite the fact that they encroach on my privacy, I know heads are going to roll if it comes out that they do things that are blatantly wrong or abusive with the information that they are collecting.

Not so in the US. They always have a half-ass lie that still somehow passes muster.


>...The biggest problem with America is that the government cannot be trusted to follow the rules, their own rules.

Isn't that the problem with all governments? Like they say, "Eternal vigilance is the price of liberty."


Governments are also made up of individuals.


You can never completely trust your government won't abuse its power.

Privacy is a necessary tool for opposing those abuses of power and organizing a change.


Whenever I read one of these articles I am wondering why the examples of WHY mass surveillance affects ME as average Joe negatively have to be so weak / contrived:

For example, imagine someone convinced by the argument "nothing to hide nothing to fear". Would this example convince them that in fact they do have to fear something? "You might think twice about contacting or meeting people (exercising your freedom of association) who you think might become “persons of interest” to the state". I do not think so, after all, average Joe does not know such people.

The solution, in my experience when talking to sceptical people not convinced of the risks is talking about money. Imagine someone with the kind of knowledge we are talking about with mass surveillance. And imagine this person could inform your insurance companies. Do you still think that you have nothing to hide? One then must only show that data is never "safe" and could always be "leaked" to make a very simple, everyday example of why it is not in my (average Joe's) interest to be continuously monitored.


Still not convincing, at least not to me and many of the people that I know. I'm pretty sure my insurance company has already calculated my premiums based on my demographics and claim history. I may not have told them "I have more than five drinks almost every weekend", but they know. They were making inferences from Big Data back when it was just "data". And practically, I don't think most insurance companies have an infrastructure set up for handling snitching, especially not the smaller ones.

I'm really still waiting to hear a convincing argument as to why I have something to hide, ideally something practical as opposed to hypothetical or philosophical.


Imagine some type of insurance that typically costs 100 USD/month in premiums per person and where the insurance company typically pays out on average 90 USD/month for damages per person. Now, imagine that one insurance company has figured out a way to detect a certain group of people who will only have damages of on average 50 USD/month, who make up about 5% of the potential customers, and detecting which of their (potential) customers is in that group costs on average 0.10 USD/month per customer. Do you agree that they will start offering lower premiums for that group of people, because that will attract new customers, and thus increase income?

And what if one company does this - what will other companies do? Will they keep the same price for that group as for everyone else? Then that group will leave for the company that's cheaper for them, right? Leaving the other companies with the higher-risk customers, right? So, they will have to pay out more for damages, right? Now, will they just go bankrupt? Or will they increase premiums to cover the costs?

No one says that insurance companies aren't making inferences from data. It's just that the more data there is available to them and the more powerful computers and algorithms get, the better they will be able to model risks. And individual companies won't be able to ignore that, even if they want to. And it's the exact opposite of what insurance is intended to do: It's intended to distribute risk. The more exact insurance companies are able to model risks, the more insurance will become unaffordable to those who need it, and the cheaper it will become for those who don't need it.
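The adverse-selection arithmetic above can be worked through in a short, hypothetical Python sketch. The figures come from the comment; the 60 USD discounted premium is an assumption picked purely for illustration:

```python
# Hypothetical adverse-selection sketch; all figures are per customer per month.

N = 100_000            # total insured population (illustrative)
LOW_RISK_SHARE = 0.05  # 5% of customers are identifiably low-risk
AVG_PAYOUT = 90.0      # average damages across the whole pool
LOW_RISK_PAYOUT = 50.0 # average damages for the low-risk group
PREMIUM = 100.0        # uniform premium before anyone discriminates
DETECTION_COST = 0.10  # cost of classifying one customer

low = int(N * LOW_RISK_SHARE)
high = N - low

# Average payout of the remaining (high-risk) pool, derived from the overall
# average: AVG_PAYOUT * N = LOW_RISK_PAYOUT * low + high_risk_payout * high
high_risk_payout = (AVG_PAYOUT * N - LOW_RISK_PAYOUT * low) / high

# Before discrimination, every insurer earns this margin per customer:
margin_before = PREMIUM - AVG_PAYOUT

# A discriminating insurer can undercut the uniform premium for the low-risk
# group (here: charge 60 USD) and still profit nicely on those customers:
discounted_premium = 60.0
margin_on_low = discounted_premium - LOW_RISK_PAYOUT - DETECTION_COST

# An insurer that loses all its low-risk customers must now cover the higher
# average payout of the pool that remains:
margin_after = PREMIUM - high_risk_payout

print(f"high-risk average payout:  {high_risk_payout:.2f}")
print(f"margin before:             {margin_before:.2f}")
print(f"margin on low-risk group:  {margin_on_low:.2f}")
print(f"margin after losing group: {margin_after:.2f}")
```

The point of the sketch is the ratchet: once one insurer can cheaply identify the low-risk group, every competitor is pushed to do the same, because a uniform premium leaves them holding the higher-risk remainder of the pool.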


The insurance industry is fairly heavily regulated (in the US). They can only charge different rates based on specific factors. This means that your entirely imaginary (and therefore entirely unconvincing) scenario will remain entirely imaginary (and unconvincing).

Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.


> This means that your entirely imaginary (and therefore entirely unconvincing) scenario will remain entirely imaginary (and unconvincing).

Well, that very much depends on how you look at it. It might be imaginary in so far as regulation in the US possibly prevents those consequences, which is great. One might also see it as evidence that collection of personal data has risks--and that regulation might be one way to deal with those risks, at least in some cases. After all, this kind of regulation is in effect a prohibition on collecting certain kinds of personal data, even if the collection in itself is permissible, as companies won't collect data when they can't use it for anything anyway.

> Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.

Are you sure that it would? Remember that for the insurer, it doesn't matter whether they calculate your risk (and thus your premium) correctly; what matters for them is that they aren't worse at calculating risks than the competition (i.e., the competition can't outcompete them on price or cause them to be left with a non-representative sample of their risk model), and that on average, their criteria match reality (i.e., they don't take on risks that are actually larger than they can pay for). Even if you in fact do drive more safely than the average driver (as in: at the end of your life, you will have had fewer/less severe accidents), it might happen that their predictive models group you in a different category, because characteristics of your driving behaviour that they use to categorize you are correlated with high-risk drivers. If insurers don't know how to (economically) measure why your driving behaviour is safe, it doesn't matter whether it actually is.

Also, the incentive can actually be a problem, exactly because risk models employed by insurers tend not to be an exact representation of reality. If you have an incentive structure that does not align with reality, the incentive can end up promoting harmful behaviour. For example, one obvious proxy for safe driving habits could be lack of sudden deceleration. It's easy to measure, and generally, if you pay attention to traffic and drive with foresight, you usually will not need to brake suddenly as much as a reckless driver. So, it's probably true both that incentivising people to not brake suddenly would have as one consequence people driving with more foresight, which should reduce accidents, and that people who don't brake suddenly generally are a lower risk for the insurer than those who do. However, this proxy cannot distinguish whether you brake suddenly because you didn't pay attention--or because someone else didn't pay attention and surprised you. In the latter case, though, the thing to do to avoid an accident might be to brake as hard as you can. But that will be seen by your insurance as risky driving behaviour (which it most of the time is) that comes with a higher premium, so you have created an incentive for the driver to let an avoidable accident happen. Note that the driver in question won't think about this for an hour before deciding what to do; it's a gut reaction that might well be influenced by having internalized "braking hard costs money".

And also: What if you actually are a really good driver but you enjoy braking hard? What if you brake hard just for the fun of it, in situations where it's completely harmless? Is it fair if you have to pay higher premiums for that? Such incentives that work via proxy measurements of the actual risks tend to force adherence to a standard of behaviour. I find the idea frightening that insurance companies might get to dictate "safe behaviour", where the specific behaviour is not actually necessary for safety, it just happens to be easily distinguishable from risky behaviour, so behaving differently costs you money, simply because it's difficult to figure out that your behaviour is not actually risky.


I think your question was answered at a link provided elsewhere in this thread[0].

What you might or might not need to hide cannot be reliably determined in advance. It is not a constant, it is a variable and you don't get to pick which way it goes. Consider the plight of the gay Russian blogger using LiveJournal, which was later sold to a Russian company.

[0]https://news.ycombinator.com/item?id=10849248


I'm sure there are more direct examples, but I watched a video about Sesame Credit recently and it paints a rather dystopian picture of what's possible when you mix surveillance and gamification. https://www.youtube.com/watch?v=lHcTKWiZ8sI


Because nobody, as far as I know, has brought together the issues of knowledge and control yet.

In short, my argument goes something like this:

1) It is possible to manipulate or influence people. This extends to their memories, perceptions, emotions, actions, even complex beliefs. And it can be more or less direct. Humans think of themselves as special snowflakes, but we are actually quite simple.

2) The degree to which one can manipulate or influence other people depends on

___a) the effort and intelligence one expends on it,

___b) the degree to which one has knowledge about the target, and on

___c) how close one is to the target (i.e. are you "under their skin", in their house, or 10 km away; what are your intervention options).

3) As social animals, we have always been subject to the influence of other people. Usually, this influence has been local, fuzzy, costly, relatively obvious to the target and "controllable" (in the sense that knowledge about target was strongly negatively correlated to (generalized) distance; if target became suspicious of you, target could simply cut you off or increase distance).

4) Technology and science currently change the rules of the game, and in profound, basic ways:

___a) We learn more and more about how to influence people, both by physical means (regulating temperature, lighting or noise conditions; psychotropic substances or, you know, food; changing the color of a button or playing with the timeline of events; etc.) and psychological means (e.g. using the right words or framing to elicit a certain response or evoke a certain emotion; exploiting properties of the social graph; etc.). This knowledge is, of course, still very imperfect but it is also cumulative.

___b) More and more of our interaction with the external or social world becomes mediated by technology ⇒ the options to intervene in the life of someone multiply as technology becomes a more integral part of life. As a result, it becomes very cheap to make targeted interventions in someone's life. Example: Today, ranking of search results or filtering of news; tomorrow, entire articles machine-written for you (personally). Automated homes. The mind boggles with the possibilities of augmented reality and/or immersive experiences. Farther out: Optogenetics.

___c) Deep and very detailed information about people can be collected in real-time and stored cheaply (no memory decay). The more ingrained the tech, the more detailed the data. For example, real-time monitoring of blood sugar and, in the future, perhaps even stress hormone levels.

___d) Physical distance becomes meaningless.

5) Due to the tendency towards natural monopolies in the sector, all this information and power accrues in very few hands ⇒ strong and unprecedented centralization of both fine-grained knowledge about individuals as well as the means to intervene in their world without regard to distance or cost.

It is not hard to see that, to indulge in some hyperbole, "mind control" of a large population undermines traditional means of checking power. Who cares about elections if I can control whom people like? Why bother with competitive markets if I can make people want whatever I have to give (and make them pay reservation prices)? No more need for violent suppression of dissent because I can detect and change inconvenient ideas surgically.

To be clear, I am not saying we are already living in a mind-controlled society.

What I am saying is that collecting data (or rather letting someone collect data) about us is an integral part of this scenario. If data became more compartmentalized and limited, say, this whole thing wouldn't work (or be far less effective).

In fact, because technology and science progress anyway, how we handle our data may be the only way we can influence the course of events in this respect.

At least to my mind, this is the real issue of privacy. Alas, I seem to be alone so far. It's really hard for me to see why this is not totally obvious to everybody. I should finally write that essay I've been meaning to for the longest time. Then someone can at least attack my argument. Sometimes, in your weaker moments, the fact that nobody seems to see what you see can make you question your own sanity...

_______________________

PS: I am also not claiming that Larry Page or Mark Zuckerberg are rubbing their hands gleefully ("hihihi") at the prospect of world domination. I think concrete persons are incidental to this scenario. Heck, the one who ends up controlling, in this scenario, might not even be human. It just doesn't matter who. If it is technologically possible, it will be done. Loss or neglect of privacy makes it possible.


> I should finally write that essay that I've been meaning to for the longest time.

No need. Orwell already wrote it.

I just don't get why you think you are alone.


> I just don't get why you think you are alone.

I would hope I'm not but it sure feels that way.

Maybe I'm not reading the right stuff or talking to the wrong people.

Can you point me to some (contemporary) arguments along the line I propose?


Richard Stallman, Lawrence Lessig, Cory Doctorow?


RMS, yes! How could I forget? Lessig, different story. Never read Cory Doctorow.

I was thinking more about something along the lines of Chomsky, McLuhan, and Orwell updated to the current day. But we'll see…


Ask to read through someone's bank statement, look at their photo library, access their medical history or read all their messages. I doubt anyone would be comfortable with someone doing that.

Another good example is the use of shredders at home, perhaps you should suggest people leave their sensitive data in a box on the pavement outside rather than shredding it.


I generally tell such people that I'll come round to their house next week to install the webcams in their bedroom. No need to wait for me; I'll just let myself in.


Flip it around; don't accept the questioner's premise that you need to defend your position. Challenge them to explain why they think a third party should be granted access to your info. And be ready to tell people 'no thanks' and that you are not willing to explain why. Resist taking a subservient position when you have what the other party wants; make them justify the need (this is obviously less practical advice for gov't surveillance than for corporate).

I do this frequently and while it can be a bit awkward when dealing with marketing or PR types, as long as you are polite about it things work out. And anyone pestering you with repeated requests for data or an explanation can receive a less polite response.


> For example, imagine someone convinced by the argument "nothing to hide nothing to fear".

"Please strip yourself to underpants. Yes, this very moment. We (society) need to make sure you don't have any concealed weapons or explosives. I will then assure these other people that are totally unrelated that you don't pose any threat to them. What? Your privacy? Safety is more important than privacy, right? And you do have nothing to hide, so why do you fear stripping?"


I'm reminded of an article from last year which is only tangentially related, but it has a perspective I think could be used to explain the severity to average Joes:

http://www.nytimes.com/2015/03/08/magazine/the-loser-edit-th...


It pains me to see that power-hungry people used most techies as useful idiots to implement their own goals - how many politicians with dirty hands are now getting on board of prime tech companies? They finally understood what technology can offer to them. It seems like the end game is who is going to control everything - those types can finally see the time when technology is sufficiently advanced to control every aspect of our lives. It seems like technology would enable a special caste above law, with power unlike anyone before them. Instead of using technology to improve living conditions of all, establishing new, unseen before, more democratic and free society, we seem to be hell bent on preserving all the nasty traits of previous societies and even doubling down on them by having almost complete control. Seems like some dystopian sci-fi novel is happening now :-(


I'm always suspicious of that specific dystopia.

It's not that they aren't trying, or that we are not letting them. The problem is that technology is very "stubborn", "subordinating" only to those it chooses to, and those are almost always impossible to predict. No elite was ever successful at that kind of coup.


"This affects all of us. We must care." is not an effective way of convincing someone.

I personally do not care about privacy. I see no reason why I should.

It's just my opinion. I know other people do but please don't generalize.


I think your opinion is valid and should be fairly represented, but consider that your reasons for not caring about privacy may be flawed or inconsistent.

Assuming that you don't care about privacy because you're apathetic, do you also not care about free speech because you don't say anything controversial? Do you care about your right to assembly even if you don't protest anything? As an extreme example upon which to build a baseline, would you mind if a neighbor had unmitigated access to watching you lounge in your underwear, take a shower or have sex?

Why do you not care about privacy? Do you feel that you don't need it because you have nothing to hide, or are you willing to sacrifice it for some greater good (e.g. terrorism etc.)? Are you merely indifferent or do you aggressively oppose the concept?


First of all thank you for respecting my opinion. I appreciate it.

1.) Free speech is a completely different topic. Snowden's quote on this page makes no sense to me no matter how often I re-read it. If free speech didn't exist I wouldn't be able to express my opinion about privacy :)

2.) Privacy means hiding the truth. Hiding what really happened. Hiding who you really are. I believe it is a flaw of the human personality that makes us want to hide information and eventually lie about it.

I don't care if Google or the government knows that I'm searching "[insert embarrassing keywords for you here]" or if Facebook knows my location, or if Twitter knows what I like based on the people I follow.

Who is the government? It's people. People like you and me. If people decide to make assumptions based on data they collected and the assumptions aren't correct it's their own fault for assuming something in the first place (because...you know...it's an assumption...it can be wrong).

I am not aggressively opposing the concept of privacy. I respect other people's opinion.


> I believe it is a flaw of the human personality that makes us want to hide information and eventually lie about it.

Who said anything about lying being a part of a desire for privacy?

> I don't care if Google or the government knows that I'm searching "[insert embarrassing keywords for you here]"

Would you care if a prospective insurer knows you're (hypothetically) searching for "atrial fibrillation management" or "opiate addiction"? Or a prospective employer who knows you're (hypothetically) searching for "corporate firewall security exploits"? Or a prospective romantic partner who knows you're (hypothetically) searching for "genital rash"? Any of those searches could be legitimately borne of pure, unadulterated curiosity, but taken out of that context by people with whom you're hoping to establish some kind of relationship, they could easily doom that relationship before it begins. Hell, those searches may not even be made by you but by someone in your household, but if decisions are made and opinions are formed based in that information, you've suffered an unnecessary loss.

> Who is the government? It's people. People like you and me.

Indeed, people like you and me, except those people have the authority and/or power to incarcerate you, or impinge on your rights in other (less direct/more insidious) ways. Privacy isn't about hiding the truth from those who have a need to know it, it's about controlling the context of that truth, or at the very least, having a say in any response that comes from the truth being discovered.


Recognize the real source of the problem.

Like you said, someone trying to get information about the topics you mentioned could simply be doing this out of curiosity. Now person A from the government says you are X. However, you are not X; you are Y.

Think again: what is the actual problem? The actual problem is not the data, which is 100% correct.

The actual problem is people's prejudices and assumptions. This is what we need to fix. If someone searches about topic Z we should think very carefully about the consequences of drawing an assumption.

However, this view is very ideological. Your view on the current state is more practical. I do not disagree with your statements, I simply wish that we can address the real issue here in the future. Even if it takes us centuries.


> The actual problem is people's prejudices and assumptions. This is what we need to fix.

Right, so the whole premise of your indifference or opposition to the privacy argument is that people should not have prejudices or (wrong) assumptions. Isn't that too idealistic? Ridding people of prejudice and figuring out the right moral standard for behaviour will take many more generations, if it happens at all. Until then; until we figure out the right _prejudices_; until all of humanity naturally elevates to the right moral standard, shouldn't we be wary of those bad agents who can abuse others by breaking into their private matters?

Your premise, in short, assumes an ideal world where no one is troubling others for their private acts, which unfortunately isn't the case yet.


You might be searching for those things out of curiosity, but if (in the insurance hypothetical) statistically more people searching for "opiate withdrawal" are addicted to opiates, then it's going to affect your health insurance premiums regardless of your intentions.

Or more generally, you can't choose how people interpret data they gather about you and that can adversely affect you.


I forget the term for this, but you've poisoned the discussion by leading it to a dead end with an impossible goal: ridding humanity of prejudice and assumption. Since that isn't remotely possible, we might as well throw our hands up in the air. And forget about data, it's not at fault here.

Like you, Snowden's freedom of speech line never impacted me... until I read this article. It suddenly hit me. The reason I was missing his point is because I was framing it in terms of what's in it for me rather than looking at it as what's in it for us. Someone who doesn't care about freedom of speech doesn't care because he doesn't see what's in it for him. But I doubt you'd argue the benefits of the first amendment.

Similarly, privacy is very important. You might not care (even though you really do), but defending privacy is about ensuring security. Privacy is important for all of us, just like freedom of speech is.

As for what the actual problem is, the problem for the most part is ignorance and a failure to quench it. We need more privacy / cyber-security advocates who can educate people on why they ought to care. It's like teaching people why it's important to lock their doors at night or why they should put their letters into envelopes instead of just using postcards. It's why my mom had to drill into my brain the importance of not giving out my social security number willy-nilly. Are you so liberal with your SSN? You don't care about privacy, so would it bother you if Facebook or Google asked for it? After all, they just want to make sure you are who you say you are.

Things aren't obvious to us until they're obvious, and then it feels like common sense. DUH, lock your door! DUH, encrypt your messages!


> Any of those searches could be legitimately borne of pure, unadulterated curiosity, but taken out of that context by people with whom you're hoping to establish some kind of relationship, they could easily doom that relationship before it begins. Hell, those searches may not even be made by you but by someone in your household, but if decisions are made and opinions are formed based in that information, you've suffered an unnecessary loss.

I think the negative effects there are largely due to how private we are. If we were constantly confronting these things that seem embarrassing or concerning, we'd come to realize how normal they are.

It would require a complete shift in how we view privacy, one so large I doubt it would ever happen, but I think those are ultimately a symptom of the current system, where we often keep things private for the sake of societal or cultural norms, sometimes to personal detriment.

I'm not particularly arguing that either way is inherently right or wrong - but I do think the consequences you speak of are only meaningful in a world where a large measure of privacy, at least between most people in their day to day interactions, exists.


I understand that you believe that privacy is hiding the truth. It appears you believe the only reason someone would hide information is so that they can lie. Thus you conclude that since lying is bad, privacy is also bad because it promotes lying. If that chain of reasoning is accurate, then let's do a thought experiment. What if you personally hold a belief that is contrary to public opinion - in fact, let's say it's a crime to believe it - but you still believe it? And for some reason you mention it to someone and are outed for holding that belief. Do you think that, even though you disagree with society at large, you should be punished for that belief? Who is correct in this scenario? You? The people?

Privacy isn't just about lies; it's about having space to have thoughts and develop concepts that may not be ready for public consumption. It's about freedom to think about concepts or beliefs without State retribution for not holding the party line. It's not about withholding truth. It's about being able to control the information that you personally generate without fear of judgement from external parties.


The underlying assumption to your hypothetical is that thought crimes exist. I would say that someone in that situation doesn't have a privacy problem, they have a governance problem. Either a dictator has seized power or their fellow citizens have voted to make it illegal to express certain ideas. And in either case, encryption isn't going to be much of a solution. It'll only delay the inevitable. Someone who talks about illegal ideas is taking a big risk anyway.

It seems common that the arguments for privacy trumping other values depend on bad behavior by state actors. In which case, reforming the state by whatever means necessary would probably do more good than advocating for philosophical concepts.


Fair. I was just trying to take the problem to the hypothetical edge of having no privacy at all; to a case where you do not even have enough privacy to share a thought without fear of retribution. I was also trying to align the idea with their understanding of freedom of speech: they do agree freedom of speech is OK, so if you can tie speech into thought and then also into privacy, maybe there is a logical connection that allows them to understand the need for privacy as a type of freedom.

The situation is very complex because privacy has been implicit in our daily lives for so long, it's really difficult to map out the ways it would reduce our personal freedom. If we want to remove privacy, then we need to make it impossible for anyone to keep anything private from anyone else.

If privacy isn't important; then we should all live in proverbial glass houses where everyone can see everyone else's lives. Why should we trust the government with that power, why not everyone?


Dave Eggers' The Circle is perhaps worth reading in this context. I don't actually think it's a very good book, and it's mostly tolerable if read as a deliberately exaggerated "if this goes on" cautionary tale. But there are a number of speeches by one of the characters (Bailey?) on why radical transparency is good.


> I believe it is a flaw of the human personality that makes us want to hide information and eventually lie about it.

At the most basic level, we want to hide things because other people do not like them (which leads to reactions ranging from shaming to prosecution and stoning). Fundamentally, the only way for this not to happen is to have a completely homogeneous society, or for all humans to turn into saints. I will just assert the former to be bad, and the latter to be impossible.


I also feel that some form of privacy is needed to develop new ideas. To mull them, to test them out, before everybody with their own interpretations gets a say. How will we as a society develop further if everybody is more or less homogenized by ubiquitous surveillance and self-censorship (or punishment for transgressions)?

We need safe (and private) spaces. At least in my view of the world, where I am on your side, not believing all people will turn into saints. Not even most people.

So by killing privacy and upping surveillance of everybody, we as a society will shoot ourselves in the foot, killing new ideas before they are even thought, I fear.


This is all true and I agree wholeheartedly.

And when The People incorrectly decide that based on data you raped a 15 year old, you will be in prison for the duration of the trial, you will be on the sex offender list forever, and you will be inconvenienced with anything requiring a background check. You, not The People.

Ideologically, I agree, privacy is a lame side-effect of how groups of people work. Pragmatically, please don't take it away.


How is privacy really a good solution to the problem of mistaken convictions?

The lack of privacy may very well reduce the amount of false convictions. Sure, you looking up pix of teen boys might look suspicious. But the lack of privacy might catch the real criminal too.

If we had accurate gps for all people all of the time, it would probably reduce false conviction rates.

Plus, the way the system works now is that once you are a suspect, you really don't have privacy anymore. That's how the Constitution works. Once there is probable cause, the state will rifle through your stuff, ask your friends and family, etc.

On the mistaken conviction issue, I'd probably rather live in a privacy free state than a state with privacy. Assuming I was innocent.

Though I prefer privacy for other reasons.


I see your point. If society ends up believing that assumptions based on collected data have suddenly turned into "facts", then we will truly be...let me say it frankly...done for.

I believe when this happens Hacker News won't exist anymore because the intelligence of human beings will be comparable to that of a fly.

Luckily...this didn't happen yet because I can still have intellectual discussions, even on the internet.

I like your separation of "ideologically" and "pragmatically". I agree, it's not a pragmatic approach.


It's not even about "facts". Suspicion is enough because "innocent until proven guilty" is true in theory, but the period between "suspected" and "proven innocent" can be very ... inconveniencing.

And that's IF the internet or real lynch mob doesn't decide to go after you. If it does, then the being proven innocent part is the least of your concerns.


There are enough examples (at least here in Germany) where people's lives got uprooted and destroyed precisely because of false accusations or false interpretations of "facts". Even after they were acquitted, lots of people distrusted them, bullied them and such, because the press had already told everybody what awful people they were.

And hey, if it is in the news, it has to be true - doesn't it?

We will never fix these idiots (myself totally included). Even if we do not believe these things, we will have them forever at the back of our heads when presented with someone's name, because: "maybe they did do the thing nonetheless, even if the court acquitted them".

This is just human nature. You cannot actively un-know something you heard, and it will sadly inform your inherent biases nonetheless - even if you intellectually know it to be untrue.


> I personally do not care about privacy.

> Free speech is a completely different topic.

((James Madison rolling over in grave))

Oh, but freedom is not a different topic. These two types of freedom are enshrined in the US Constitution after centuries of experience in the old world.

Imagine that you are too young, or too lucky so far, to have had information used against you or your family. Yet history shows that it happens again and again, and will again.


> I don't care if Google or the government knows that I'm searching "[insert embarassing keywords for you here]" ...

If you were living in western China, you might care, because you might end up an involuntary organ donor. Or if you were searching for gay porn in Saudi Arabia.


Privacy is not deceit. Privacy is the right to be left alone.


> Who is the government? It's people. People like you and me. If people decide to make assumptions based on data they collected and the assumptions aren't correct it's their own fault for assuming something in the first place (because...you know...it's an assumption...it can be wrong).

What if the assumptions they make raise the premium on your health insurance because someone sells your data? People (or, more likely, algorithms) making wrong assumptions, even if it is their own fault, can affect you negatively.


Wouldn't you agree that the real solution is to fix a) the algorithms and b) the assumptions, rather than c) hiding the data?


I can't fix someone else's algorithms and assumptions, but I can hide my own data. Even if I agree with your premise, if someone else is in control of the "real solution," then it's not a real solution for me, is it?

There's also the chance that those algorithms and assumptions are "correct" from a business standpoint (it would cost more to "fix" them than the monetary benefit of fixing them) even if they're not correct for consumers, meaning nobody that's actually in control of them has any motivation to fix them.


>>I personally do not care about privacy. I see no reason why I should.

When late 19th century Germany started recording census data, a clerk made the suggestion that they should also record each person's religion. No one objected. What could be the harm, right? They were already collecting age, gender, occupation, etc. so they might as well collect one more thing.

Half a century later, the Nazis were able to use those same historical census records to identify whose grandparents were Jewish, and therefore who must be Jewish, which greatly aided in rounding up those people.

So imagine: a piece of information commonly believed to be harmless to reveal about oneself became the primary method that facilitated one of the greatest atrocities in human history.

The lesson here is that tomorrow's government may turn out to be very different than today's. Information you willingly reveal about yourself today, or don't mind others (such as the government) finding out about you, may be used against you, your children or their children.

That is why privacy is supremely important.


Yes! Everybody should read https://www.schneier.com/blog/archives/2006/05/the_value_of_... in which Schneier quotes Cardinal Richelieu: "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged."

As Schneier puts it: two proverbs sum it up: "Who watches the watchers?" and "Absolute power corrupts absolutely."


This is the logical extreme. It may seem far fetched today, but it doesn't have to get nearly this bad to cause great harm.


> I personally do not care about privacy. I see no reason why I should.

I'd love to know why you don't care.

1) Do you really not care if people watch everything you do? Would you live in a Big Brother type house where everything you do is recorded and shown to the public? That includes you showering, shitting, picking your nose, having sex, watching porn, every conversation you have, etc.

And maybe you really don't give a fuck about that. I guess some people could stand naked in front of a crowd and shit themselves and not think twice about it.

But then on the other hand, there is one more thing to consider. What if the current government, or another one elected in the next 5 to 20 years, is really evil? And let's say they are against the things you believe in, and they believe anyone who believes what you believe should suffer unimaginably? Or what if you like to complain about things? You post negative restaurant reviews, write angry letters to the newspaper, stuff like that. And this new government doesn't like complainers, because they don't want to be challenged. They could just look at all the stuff you have complained about in the past and decide you are too dangerous. So they arrest you (and your family, because why not?) and then they put you in a concentration camp. Where you and your family are worked to death. Starved to death. Beaten to death. Raped. Experimented on. Burned alive.

I know it's an extreme example, but things like this have happened before. Except now it's easier than ever for governments to round up the people they don't like, and to find out who the people they don't like are. And the less you care about privacy, the easier you make it for them.

Would you still say you don't care about privacy? Or do you just believe something like that would never happen, and so it's not important that we take steps to prevent a future government to do such things?


From the perspective of political philosophy, I see no problem with you not caring about keeping your own personal information private. But that's very different than you proposing a government program that will forcefully prevent other people who do want to maintain control over their personal information from doing so.

To phrase it differently: in a world where individuals have perfect control over their personal information, you can still post any of your own personal information to the Internet if you want it to be public. It's basically the difference between saying "I'm not personally a Muslim" and "I think practicing Islam should be illegal."


I think you mean "I don't care about my privacy at this moment". These are very different ideas. Or would you not care about your privacy if you visited a country where reading HN content was illegal? Or would you not care about the privacy of someone denouncing some government wrong doing? Or if someone started harassing you because of the things they found out you're doing?

There are plenty of cases where privacy is important, even if you think they don't currently apply to you.


What about the impacts of an all-knowing government in the future?

10 years from now a new government is voted in, and they severely dislike people with your political views based on some online comments you made in 2016. They could do anything from just making it really hard for you to vote, all the way to getting you fired, pulling your mortgage out from under you, etc. etc.

You really have no idea what a future government or corporation will do with so much data about you. Based on history, I think it's best to assume it won't be good for you.


And, I think it's important to point out: This isn't just a scary story to make people side with you. This actually happened, albeit before the internet: https://en.wikipedia.org/wiki/Red_Scare


The part I hate is the asymmetry. I don't really care if people know where I am.

In the naive sense, there is a lot of value in knowing location. Restraining orders, for example, could become effective. "How did your cell phone get to the bank that was robbed if you weren't there?" - stuff like that. Furthermore, politicians pander. Being able to answer how many people showed up for that protest is valuable. What were their demographics? Perhaps this is an issue that matters.

But the asymmetry is horrible. You want real-time access to my location? OK, but then make location data public for the police too. And NSA employees. And senators. Having a record that's made public after a few weeks or months seems pretty reasonable to me. Having a record that's secret and controlled by a handful of powerful people, which is what I think we have now, is much more frightening.


Mind posting your medical records and tax returns? Once we see them, we can let you know what else we would like to see.


Can I have access to your email account? I only want to read everything you don't care about protecting. Promise.


So if police came to your home and installed surveillance cameras (you know, just to check if you didn't beat your wife or smoke pot on your couch) you are OK with that? Everyone has something to hide and needs some privacy, digital or not.


Let us imagine that someone out there doesn't like the fact that you don't care about privacy and thinks that you should. In a world without privacy where everything is under surveillance that someone can simply recreate reality so that you are now a criminal. Hell, they might just smear you on twitter as a racist rapist and post a bunch of fake data. It won't matter what the truth is any more. Allowing the state to trample all over privacy will only make such activity easier and more prevalent. Worse it can create the illusion of 'authoritative' truth which is even easier to manipulate.


>I personally do not care about privacy.

You might think that, but it's just a matter of time really before someone asks you a question you don't want to answer. The users of Ashley Madison certainly had something to hide. They weren't doing anything illegal, so why should there have been any concern about privacy?

It's somewhat humorous when politicians use the "nothing to hide" argument. Governments and politicians seem to have the most to hide. It shouldn't be any problem for any government to release all information about any decision ever made, at least after the fact.


Do you have curtains in your house?


I agree with you completely.

Guns don't kill people. People kill people.

The value of data by far surpasses any risk involved with it. If people were more honest, transparent and straightforward, we would have a much more tolerant society. Everything would be much more efficient. Crowd sourcing of health data would be facilitated, and we probably would have greater understanding of health and be able to cure many diseases.

All arguments in favor of privacy naturally lead to the conclusion that we should get rid of the internet. And probably communication in general. Heck, make people blind and deaf, that'll solve the privacy problem. People are deluded.

Just one of the many instances where so-called progressive people blindly support an inherently conservative cause.


Privacy is important because information is powerful. One should be able to defend oneself against information related attacks. Just because you are a good person that wouldn't use information to hurt others doesn't mean others wouldn't.

A right to privacy is similar to a right to lock one's car doors when traveling through a bad neighborhood.


Civic responsibility. You choose to be part of this society and you owe it a debt for the stability and opportunity it provides you. Its not an entitlement and your egocentric attitude toward privacy places you, and society, at risk. There must be boundaries set for government or it will expand power unchecked. History demonstrates this time and again.


Constrain government power by shining light on it.


Can I have access to your Senator's philandering text messages so that I can bribe them to make a vote you don't support?


You don't mean bribe, you mean blackmail. It's a good example. When you get an anonymous, credible extortion/blackmail attempt, a good reaction is to go to the FBI and say: look, some anonymous person is blackmailing/extorting me (a crime).

Now you can decide whether you want the courts to have any ability to connect that crime to the medium that you were communicated via, or whether that information should simply vanish in the ether, leaving anyone to blackmail or extort anyone else from the safety of their own home and behind an anonymous Internet connection.

You've, in fact, made the opposite point to the one that you meant to :) When crimes start getting committed, sometimes society needs recourse.


If you outlaw anonymity, only outlaws will be anonymous.

But we were talking about privacy, not anonymity.

My argument is that even if you don't have secrets, someone with power over you (politician, judge, general, CEO) may, and you should want their secrets kept.


You don't care about your own privacy, but do you care about anyone else's privacy? If someone you care about deeply tells you that they value their privacy and they want to keep it, does that matter to you?


>I personally do not care about privacy. I see no reason why I should.

Maybe that's what you think. Let's see if you're willing to post the names, phone numbers, SSN or other ID numbers if you have them, and addresses of all your friends and loved ones. Got some weasel words about how that's not "personally"? Ok, start with your own info. Not going to do it? Didn't think so.

If you think you don't care about privacy, you probably haven't thought things through as others have.


A tragicomic irony, of course, is that most "nothing to hide" advocates demand secrecy and legal cover for their own actions. This sort of doublethink leads to things like a "Freedom of Information Act" in the US that provides a legal framework for concealing information that should otherwise be out in the open.

If you apply the "nothing to hide" principle to states' own actions, I can think of two possible conclusions:

1) It is not true that if you have done nothing wrong, you have nothing to hide (i.e., there is a legitimate right to privacy)

2) State actors have something to hide, so they must be doing something wrong.


I have two unrelated thoughts.

"Chilling effect" has always been a profound term for me, because I imagine the "cold" (numbness really) sensation a human body often senses when something truly awful (disembowelment/dismemberment) occurs. The body's way of protecting itself is to go "cold", and in many ways that's exactly the effect taking place here, as well.

There's also an undeniable part of this conversation that rarely gets addressed simultaneously, and I'd like to see it sussed out more in concert; what about the folks who are doing Evil in these private channels? It's unacceptable to me that TOR gets used for child pornography, and it's unacceptable to me that my government finds out I'm gay before I come out to my family.

I don't want to provide those who would do Evil any safety or quarter. I also want to give people a powerful shield to protect themselves against judgement and persecution from the public and sometimes the law.

We should talk about achieving both of these goals, but we generally don't.


Does it bother you that national highways are used by kidnappers, and civic electricity is used by rapists?

It's infrastructure, so it's all inherently neutral. SSL is used by banks, protesters, and criminals alike. You can't weaken it for one group without weakening it for everyone. It's also global: you can't backdoor an IRC client only for marijuana users in the US, for instance.

So if you get to surveil pedophiles in the US, it means that Saudi Arabia gets to surveil homosexuals. We're on the same infrastructure.

Also, it's important to recognize that illegal behavior is a critical part of democratic change. If SSL could discriminate based on intent to break a law, so that we could arrest all the lawbreakers, people campaigning for marijuana legalization would all be in jail, and the law would not be changing. So would people in the 60s campaigning for civil rights, and every homosexual in the country. There is always a grey area: a period of time in which people break a law because they don't believe in it. That period of civil disobedience is how laws end up getting changed. Even (especially) morality laws against things like sexuality, drugs, or alcohol. It's important to a living democracy that the police are not a perfect force.


I'm not content simply throwing my hands in the air as everyone else seems to be. We should talk about other options.

There might be a way to stop pedophiles, kidnappers, and rapists that we haven't thought of, simply because we're not willing to talk about how we could do it.

Highways have police. Where are the digital police? I'm not sure I prefer such a thing, but why don't we even discuss it?


what about the folks who are doing Evil in these private channels?

Evil is agnostic of location. Your question is of no significance. You might as well be perturbed over "Evil" people living in houses or eating food. Unsurprisingly, evil people are people and will tend towards the same activities people generally engage in.


Yours is a defeatist attitude, I think, and there is plenty of evidence that Evil can be stopped/averted in many cases. It's beyond obvious that there is no complete solution for Evil, but if you keep ignoring this question/problem, then people will continue to pick the "no Evil/no privacy" option over your "Evil/privacy" alternative, which is ultimately devastating to everyone.

We need to talk about how to create "no Evil/privacy", or at least how to approach something of that kind, even if an absolute version doesn't exist.


I'm really curious what you think evil is since you keep capitalizing and speaking of it as some spiritual essence that can be eradicated.

You have not established the slightest bit of an operational definition, and resort to pathologizing neutral transmission channels as hosts of "Evil". This is a complete non-starter and not worthwhile to deliberate. "Evil uses Tor" is as useful as "Evil uses paper".


But "evil uses paper" is probably going to do less harm than "evil uses Tor", or a gun, or a nuke. Societal control must adapt to the medium on a regularly adjusting basis, depending on the perceived level of evil.


Do not underestimate the power of paper. Because with the right amount of legal framework (on paper) I can evict people from their houses, sell these off for a nice profit, and not have to care about the fate of said people.

I could rob whole countries of their future (say Greece) with paper treaties.

I can rob people of most of their democratic power (see the many attempts at treaties like TTIP).

All on paper and in my eyes tending to the evil side.

The problem with evil, though, is that it isn't an objective term; it is not empirically measurable, and it has so many definitions and perspectives that it effectively has none.

Evil is a weasel word, is propaganda, nothing more. So we really need a better word, a better definition. And "breaking the law" or something like that does not work either, as the laws in, for example, Germany, the US, Saudi Arabia, or China tend to differ massively. They are ideologically tainted and do not provide an objective framework/reference either.

So we should work on an actionable and helpful definition first, and not go partisan on the medium to be controlled.


The issue here is more or less this: a kitchen knife is bound to cause fewer casualties than a machine gun in the hands of a psycho, hence we require licenses for guns. It's not to eliminate death through violence, it's to reduce and deter it. We don't want people reaching for the nearest machine gun whenever they have a domestic quarrel (obviously we don't want people to reach for a knife either, but I hope you get my point) or when their neighbor's dog poops in their yard.


Maybe not worthwhile for you, but there's a set of people out there who all agree on what Evil actually is (generally, obviously it's difficult to be exact), and we'd like to try and figure out how to limit its ability to act, while maintaining the shield of protection for the persecuted.


Many groups agree within themselves. But they disagree with other groups. Who can judge which group is right? What are the odds that it happens to be the group you're in, and not all the others that previous generations were in, or that people in other cultures are in?

Quick test - is looking at a photo of a naked child evil?


Quicker test - do you have a point, or are you just trying to bog down this conversation about privacy with a defeatist "we have to let murders/rapists get away with it" argument?


I'm concerned that you seem to have made a clear distinction between good and evil when really there is none. Certainly we can apply our own society's general ideas, but it's not black and white in any way. Even murder is acceptable in many cultures. For example killing of soldiers during a war.


It's a placeholder, and talking about it is distracting from the point (the "point" being maybe we can have both privacy and safety, and that the dichotomy is false). There is undeniably some activity that we can both agree is evil, and therefore we can talk about Evil without having to figure out exactly what that activity is.

Or rather, we could if you were being intellectually honest.

Most people, if given the choice, will nearly always pick safety over privacy. It's simply not enough to say you can't have both, because privacy will eventually get thrown out by the electorate, of any country.


> [P]eople will continue to pick the "no Evil/no privacy" option

If that were an available option, it would be fine. I'd choose no evil and no privacy, because I'd know that my information, and more importantly the information of public officials and company leaders (folks who wield power), wouldn't be abused.

But that's not an option that's on the table, and whatever it is they intend to pick, it is not what people accomplish when they endorse surveillance. Our governments do some pretty evil stuff, and those are the people who end up with the power in these sorts of arrangements. What people are actually being given the choice between is "a dubious promise of safety, an evil that doesn't offend them personally, and no privacy" or "a perceived higher rate of evil that does offend them personally (you're under threat, they're coming for your kids, they hate freedom, etc.) and privacy."

You seem to be taking the no evil part of things as essentially solved. But it's not. We do not have systems we can trust not to abuse this.


Please in the future consider the concept of rhetorical license. It's not literally zero Evil, it's actually "less evil" and "less privacy", vs. "more evil" and "more privacy".

This isn't a conversation about absolutes, it's a conversation about shifting degrees. I'd like to shift towards the "less Evil/more privacy", but no one here or anywhere wants to try to come up with ways to do that, because everyone just assumes privacy gives Evil room to grow.


> Please in the future consider the concept of rhetorical license.

I considered it, but I didn't grant it, since it was not clear that it applied, nor, for similar discussions, would it be clear in the future. You adopted an extreme position which is not analogous to the discussion you wished to have, and that is not an act of rhetoric (discussion as an art and a skill). It is simply poor communication, as evidenced by how widely you were misunderstood.

Rhetorical license doesn't cover that. The charity of understanding others extend a speaker is not an unlimited effort.

> I'd like to shift towards the "less Evil/more privacy", but no one here or anywhere wants to try to come up with ways to do that, because everyone just assumes privacy gives Evil room to grow.

It does. You gave such an example yourself: TOR and child pornography. More widely speaking, it's possible to lessen the 'evil' in society through a great many means - better schools, better welfare provision, stronger community links. More to the point, many, even on this site, seem in favour of taking those steps: Strong encryption? Please. Better schools? Yes please. Better community organisations? Yeah. There are of course people who'd say no, but that does not change the fact that people in favour of them are easily found.

Many of the steps that could be taken in that regard seem orthogonal to the larger discussion at hand (encryption security, as structured by reasonable context), mind.


I'm not ready to give up my privacy because it'll make me safer. You shouldn't be, either, but what you're saying now points exclusively there.


I think most people just don't think about online and communications privacy in the same terms that they think about physical privacy. Computer and information privacy should be a basic right just as much as privacy in your own home.

The principle that governments should have covert back doors into our information and communications channels is no different from saying they should automatically get a copy of all of our physical keys, a way to secretly remotely activate and use every camera we own, or remotely activate and listen in on every microphone in our houses.

In fact, as everything moves to electronic, always-connected internet of things platforms these things become increasingly not just equivalent but identical. Soon electronic privacy will be the foundation of every kind of privacy.


I would like to see law enforcement focus on combating cases of child abuse in the real world instead of focusing so much on what happens online. The online component has been criminalized due to the argument that internet demand will spur abuse in the real world - I see the logic of that.

However, the true wins will come by doing real-world police work, educating parents and children on how to protect themselves and what are the potential offender profiles (hint: not guys in an ice cream van). They will come on a diplomatic level by negotiating better laws in countries where such materials are produced (Japan was a recent success AFAIK) and where sexual tourism is rife.

Finally, those individuals who haven't abused anyone should receive support from a mental health specialist if they come forward and admit to their urges, as in, e.g., Germany.

I have the feeling that this is a taboo subject, that is not discussed frankly in most societies. The authorities focus mostly on harsh punishments instead of prevention through education and mental health treatment.

Thinking about what happens on tor is mostly a reactive policy that doesn't do much to treat the causes.


Limiting privacy to stuff that's "not Evil" is unworkable, because there's no reliable way to distinguish "good" from "evil". For example, many Wahhabi Muslims consider homosexuality to be at least as evil as child pornography. And indeed, some decades ago, that was virtually the case in the US.


That's moral relativism. It can be used to argue that blood transfusions are evil, or that slavery isn't.

It would be better to ask whether someone, who is not a willing participant, is harmed by the activity. That's easier to establish.


That test allows child porn as long as the participants don't get harmed in real life. It does make sense but I think most anti-child-porn proponents would say that causing harm isn't needed for it to be evil.

The fact is, our collective idea of good and evil is internally inconsistent. It's still skewed by our repulsion to sexual deviancy and our fear of being judged by our peers. We really can't distinguish right from wrong. We aren't even able to apply the simple test you proposed.


Yes, I agree. But still, there must be some entity that tests for non-aggression. And everything can be gamed. It's far safer to bake in privacy such that it can't be compromised.


>I don't want to provide those who would do Evil any safety or quarter. I also want to give people a powerful shield to protect themselves against judgement and persecution from the public and sometimes the law.

I agree, but it's nearly impossible to have the best of both worlds.


Are you talking about physical real-world Evil, about digital real-world Evil, or about digital inside-of-one's-head Evil?

About the first one, physical: what kind of stupid, incompetent secret service lets an entire terrorist organization train its people in physical training centers, physically dispatch them to their targets, get their hands on weapons and explosives, and successfully commit their crime, all while doing nothing because it is really focused on breaking Tor?

Universal surveillance is not just useless for fighting terrorism, it's actually harmful, in several ways. And most of those apply to almost any use you can come up with for it.

About the last one, digital, inside of one's mind (that is, inside of the criminal's mind): those people need help, not punishment. Stop punishing them and they'll seek help.

Now, the real problem is the second set, digital real-world Evil. I have no good answer for those, but the surveillance people also lack this answer, and are almost completely useless against that too. I'm not willing to accept an argument claiming that mass surveillance needs to exist so those people may start doing that work in the future, when they discover some way to do it.


Try this:

Secrecy does not exist any more

Privacy is the politeness of your neighbours.

Now everyone is your neighbour

As such we choose to not be polite to paedophiles

However, only a limited number of us choose to be polite to people still in the closet. Others will impolitely sell pink insurance; others will scrawl hate messages.

I am not sure where I am going with this, other than: politeness is hard to enforce.


>politeness is hard to enforce.

Because it is a logical fallacy. To enforce something implies using violence which is considered mean and not polite.

The same logical fallacy applies to gun control. Oh you want to ban or restrict access to guns? How are you going to do that? With guns of course.


You always provide safety to bad guys along with the good.

Seatbelts, airbags, traffic lights, food safety, drug safety, pollution control, national defense and many more benefit pedophiles just as they protect good people.


Why?


> TOR

"Tor"


This issue always boils down to the LOTR argument for me: the surveillance power is too great, and no individual or group can or should be trusted with it, regardless of its actual current or potential future benefits.

The crux of the debate, then, is where to draw the line between safe and unsafe amounts of power.


It's not the amount of power that makes it unsafe, but rather its nature. In order for it to be safe:

1. It must be granted through democratic means.

2. It must be under strict oversight by an independently elected or appointed group that's free from both private conflict of interest and popular pressure.

3. There must be reliable mechanisms to quickly and efficiently strip said power away from the authority if they are determined to have used it irresponsibly.


#2 doesn't happen with US democracy now at any level. It also inherently contradicts #1.


One fact that mitigates the marginal risk of large-scale abuse by government is that the power to surveil is easily gained by a bad-faith government anyway.

The Bush administration is a good example. They just did warrantless wiretapping. It wasn't hard or particularly expensive. No public debate. And that was a government that more or less followed the rule of law. Imagine one that no longer follows the rule of law.

You don't even need an apparatus. Just send an FBI analyst to pick up Sundar Pichai and Zuckerberg. Have the companies run queries on their own database.

Our government already has the power to mass murder. Obama can order entire continents destroyed. The air force can drop a JDAM on any house in America. The police can arrest any political enemy of the state. The government already has immense power.

Comparatively the sort of privacy issues we are talking about are smaller powers. And like I argued above, they are easy to acquire.

Any government willing to abuse the power of surveillance would be willing to flout the law to create a surveillance program overnight (well, not literally, but they could do it in months).

I'm not arguing that there is no downside to surveillance power, just that it's not as dangerous as many make it out to be. For example, there is still a risk of abuse by government employees going rogue. There is a risk of data leaks. And smaller-scale abuses that can be covered up or that the public wouldn't care about.

But I think the fear that we shouldn't give the government power to surveil because they might go full nazi/communist/theocratic/etc. is silly.


Very good, but it's funny how on the "why IPVanish" page he links to, the first reason given for using a VPN is, to watch Netflix from any location! Oh the horror of limited localised Netflix content. We must protect ourselves. (Really it is awful, I use a VPN for that purpose too). But the point is, it doesn't seem popular to hide metadata from ISPs with VPNs. Will it ever be popular? I'm not so sure. For good or bad, I'm suggesting most people don't care that their IPs are recorded. Email content is not seen, nor what I type into this comment form.

Also, when I send an email to my friend "laserpants@something.com", sure the data captures the send-to email address. But the data doesn't know who laserpants actually is, nor does the email content get saved. I'm not saying laserpants can't be found if the law decides to investigate, but I doubt it's a matter of pressing a button to bring up the real name of laserpants. Especially if laserpants uses different email addresses and a shared internet.


I think Snowden showed that it is as easy as pressing a button which is why he went and blurted it out. The NSA were also working to make it easier to track people.

My take on it is privacy is dead or nearly and we have to have good legal protections of who can use what data and when. The privacy arms race will mostly be won by big government with lots of resources and enough willing/foolish patriots (Depending on your point of view).


I agree. And the logical next step is that there will inevitably be abuses, that some of the victims of these abuses will ultimately gain power, and that we will have proper privacy protection after that, like we do for regular mail or for your physical home.

It is just a shame that we have to go through the whole cycle given that we just know it should happen. But in a way, the more extensive the surveillance is, the quicker this cycle will happen. And right now we are in a pretty bad place already. So let's be optimistic!


Who cares about a "real name"? A "real name" doesn't tell you anything about a person. Or at least nearly nothing. It's an arbitrary label. If I know how much money you spend on what, which other people you communicate with, how often, and at which times, I know way more about you than if I knew just your name. And I don't need to know your name to recognize you by your cookies. Or your email address. Or your IP address.

If you regularly send emails back and forth with a specific doctor, I have a pretty good idea which condition you have. If you regularly call a specific company at specific times, I have a pretty good idea that you work there. ...

Also, private surveillance is not about "finding you", but about influencing groups of people. For business purposes. Or maybe for political purposes. If I know that you are likely to be receptive to a specific kind of emotional message, I don't care what your birth certificate says, I care about how to get that message onto your screen in front of your eyes.

And finally, as other have mentioned: Yes, it is a matter of pressing a button. That is the essence of what Snowden revealed, if you will.


But the problem is, it IS easy. Even anonymized datasets are de-anonymized trivially, using secondary datasets. You can uniquely identify more than 87% of the US population with gender, birthday, and ZIP code. For more than 50%, you can use the municipality name instead of the ZIP.

If your friend uses a separate, privately maintained email address for everyone and every service, that will help a lot. Then we only know his identity because email is sent in the clear, even between his own server and home computer. Of course, everyone uses separate email addresses for everyone, right?
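The quasi-identifier effect described above (attributes that are individually common but jointly near-unique) is easy to demonstrate. A minimal Python sketch, using an entirely synthetic, hypothetical population (not the real US data behind the 87% figure), shows how coarse attributes combine into a near-unique fingerprint:

```python
# Toy illustration: measure how often a combination of coarse
# attributes uniquely identifies a record in a synthetic population.
# All parameters (population size, ZIP count, birth-year range) are
# made up for the sketch.
import random
from collections import Counter

random.seed(0)

# Hypothetical population: each person gets a gender, a birth date,
# and one of 500 ZIP codes.
people = [
    (
        random.choice("MF"),
        (random.randint(1940, 2005), random.randint(1, 12), random.randint(1, 28)),
        random.randint(0, 499),  # ZIP code index
    )
    for _ in range(100_000)
]

# Count how many people share each (gender, DOB, ZIP) combination.
counts = Counter(people)
unique_fraction = sum(1 for p in people if counts[p] == 1) / len(people)
print(f"{unique_fraction:.0%} of records are unique on (gender, DOB, ZIP)")
```

Even though each attribute alone is shared by thousands of people, the three together single out nearly everyone, which is why releasing a "de-identified" dataset alongside a public voter roll or similar secondary dataset re-identifies most records.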


This is an interesting post on the topic from reddit:

https://www.reddit.com/r/changemyview/comments/1fv4r6/i_beli...


The thread under that comment just serves to reinforce the OP's point.


That was incredible to read. Thanks for linking to it.


This is correct, but appeals to freedom of speech/expression and association won't work. People don't believe in these any more, if they ever did. They're paid lip service at most.

Both professional and amateur politicians are taking notice. "We support freedom of expression, however" there's always something more important: safety, inclusivity, respect, diversity, civility...

We have dozens of examples, in the west, with hate speech laws, discrimination laws, mobs organizing online over some tweet, employers firing people because of offhand remarks, even opensource projects being assaulted over unrelated comments made by contributors.


The definition of "privacy" very much differs significantly among different generations and cultures. Compare the Dutch to the Americans, or anyone over 35 with anyone in their teens or early 20s. There is a smaller overlap in what they all consider "private" than you'd at first think.


I really like the observation that "I have nothing to hide, so I don't care about privacy" is equivalent to "I have nothing to say, so I don't care about freedom of speech".


I forget why now, but I was reading this article[0] by Moxie Marlinspike, from 2013, just a few days ago.

Interesting how the same argument can take so long to take hold, and how long some truths need to be told before they gain the traction needed to make a change.

0: http://www.wired.com/2013/06/why-i-have-nothing-to-hide-is-t...


meh, I want to have my cake and eat it too. (I'm not making fun of someone, this is actually how I feel, and there is some tension between requirements.) I don't want any surveillance whatsoever, I want to just be able to do whatever I want, jeez. To live freely. I shouldn't even think about being watched.

At the same time, take something like the Dell database that was just stolen, and the criminals now starting to commit their crimes with it. Then I want courts to be able to flip a switch and say: you know what, if you're brazenly stealing a private company's database and calling its customers trying to defraud them, at some point there is probable cause to make you stop doing that or to figure out who you are. You're not just going to stay anonymous behind a skype number while you're defrauding people halfway across the world.

Also I don't want some bitcoin asshole to pay off an old soviet general and get a nuclear bomb, just because they think it would be a fun troll to blow up a major city, trololo.

These aren't theoretical concerns - ransomware, kidnapping, all these yucky things that civilized societies don't have, all happen absent rule of law.

There's a reason there wasn't a period in the Constitution (specifically the fourth amendment) after the words "The right of the people to be secure in their persons, houses, papers, and effects shall not be violated." (Extra points for what is there.)

Even absent an anonymous Internet, way back in the eighteenth century, there were limits on privacy. Think of it like an operating system - a good kernel isn't reading my memory contents and slowing me down, but if I start performing illegal operations I might very well get shut down :)

It's not an easy line to find. Also, I don't want tens of thousands of people employed doing this crap. It's a minimal thing we need to live safely and sanely, not some fun snooping. Frankly I don't see why humans even need to be involved, until crimes start getting committed and the courts are trying to figure out why or where.


> Then I want courts to be able to flip a switch

But you also don't want anyone else to flip the switch, or it being flipped for any other purpose. That might just be impossible. So you might be better off without the switch.

> You're not just going to stay anonymous behind a skype number while you're defrauding people halfway across the world.

Also, for this problem, as for many others, there are many possible solutions that don't involve surveillance.

> Also I don't want some bitcoin asshole to pay off an old soviet general and get a nuclear bomb, just because they think it would be a fun troll to blow up a major city, trololo.

So, you would prefer them to use USD cash instead, then?

> These aren't theoretical concerns - ransomware, kidnapping, all these yucky things that civilized societies don't have, all happen absent rule of law.

Except they very much do happen in "civilized societies". And sometimes with the help of the powers of authorities.

> It's not an easy line to find.

No. But it's quite easy to see that the direction we are heading is completely at the wrong end of the spectrum.

> a good kernel isn't reading my memory contents and slowing me down, but if I start performing illegal operations I might very well get shut down :)

Which is very much the opposite of mass surveillance.

> Frankly I don't see why humans even need to be involved,

Humans who see the potential of the collected data will get involved. People who want to abuse power don't usually wait until someone gives them permission to.


Personally, I really think that encryption is a Second Amendment matter: in the age of knowledge and communication, the right to bear encryption should fall under it. Hell, the US even classifies encryption as a munition. We should be using the same argument for encryption that we use for the right to own guns and form militias.

I wonder whether encryption would be recognized as a right under the Second Amendment by the courts if it ever came to that.


Meh - the Second Amendment isn't very popular these days, and outside the US (where of course it doesn't apply anyway) people really don't seem to appreciate it.


If you are interested in this topic, I highly recommend you read Tradition of Freedom by Bernanos (original title in French: La France contre les robots).

Written in 1944, it contains a specific passage where he argues against this "but I have nothing to hide!" argument: only criminals benefit from hiding, right? He talks about how a simple citizen who has never had trouble with the law should remain perfectly free to conceal his identity whenever he likes, for whatever reason, and laments how this very idea had already died.

The extract is available online [1] in French, the google translation [2] is not that good.

[1] http://www.books.fr/quand-bernanos-predisait-une-societe-sou... [2] https://translate.google.com/translate?hl=en&sl=auto&tl=en&u...


Thing is, everyone has things to hide. I think there should be an extensive list of what you have to hide and the reasons why.

I would start with two:

- salary: thieves, kidnappers (do you really trust that everyone in law enforcement is trustworthy? They can cooperate with thieves)

Check your contract: you might even get fired for leaking your salary, since it is your duty to keep it secret, and explaining that some hacker did it does not matter.

- contacts: maybe someone will be stalking your friend and might get access to them because you were hacked. You also should not share your friends' emails and numbers without their consent.

They might at least be angry at you for sharing their contact info.

Maybe you guys have better or more ideas.

But "nothing to hide" only applies to a hermit on top of a mountain. If someone thinks otherwise, he is an as*le because he does not think about the people around him.


Wishing for privacy on the internet is like wishing for no turmoil while shagging during a massive religious event of paranoid, armed puritans.

If privacy is such a problem for some, it is not a technological problem, it is a political problem. If so, the people concerned should make their revolution in the appropriate place, the real world, and let the internet stay a public medium.

PS: I noticed another fun topic: there are blacklisted keywords on HN, like F-words. Isn't censorship on a medium more concerning than privacy? And funnily enough, all the "lite" censorship nowadays is first about sex and gross words. Are sex and slang that dangerous?


> Wishing for privacy on the internet is like wishing for no turmoil while shagging during a massive religious event of paranoid, armed puritans.

Why do you think that?

> If privacy is such a problem for some, it is not a technological problem, it is a political problem.

Why do you think that?

> If so, the people concerned should make their revolution in the appropriate place, the real world, and let the internet stay a public medium.

The internet is the real world.

Also, who wants to make the internet a non-public medium?


Because the main manufacturers of equipment are also from paranoid puritan countries? (Huawei, Cisco, Juniper, Alcatel, Nortel, Ericsson...) And because that makes MITM easy for agencies, since governments heavily subsidise telecoms, maybe? ($$ for a backdoor)

And since the internet is globally like a very fast Gutenberg press, it has the same property as printed paper: privacy is not a problem of the medium, but of the institutions/organizations trying to control it. It is controlled by law or force. Law/force comes from, and is backed by, government, so if you don't have privacy then complain to your government about its actions (or lack thereof).

To sum up: privacy is not an internet (media) problem. It is a problem between citizens and their governments (use of media).

Had you opened a good history book, you would know this. In the same books you would discover that concern for privacy existed before the internet: the more authoritarian governments are, the more they tend to love secrecy for themselves, hate public debate for others, and want to control the media.

Control of the media by the government does not always align with the public's best interests. DMCA, patents, IP laws, "censorship for the protection of the kids"? Does it ring a bell? Google wants you to read only sites that have nice ads, so they filter out some information as non-relevant.

Want to overthrow your government in the shadows? Commit a crime? Sext your gf? I don't care about these usages and consider them incidental. They divert people from real problems, like accessing the information carried by the media.

By the way, they make very nice tinfoil hats nowadays. There are successful crowdfundings for it.

Oh, and the internet is the real world? Ha ha ha.

As much as "War and Peace" is the real world.


OK, I guess I'm starting to get where you are coming from. I read your original post pretty much as saying that people who wish for privacy on the internet are wrong.

I still disagree with you, though. Yes, of course, technology alone cannot solve the political problems. But I very much think that technology can play a role in implementing and securing the political will. If you think of a democratic voting process as a piece of technology, for example, that by itself won't make a dictator go away. However, if there is a general consensus that democracy is the way to go, the specific implementation of the voting process very much makes a difference as to whether rogue actors can subvert this political will or not. If you make it so that votes cannot be bought and that the general public can watch the election from start to finish, that makes a democracy robust against a minority of people who try to rig the vote, for example.

I think information technology has a similar role to play in securing privacy. You can build systems that are far more robust against surveillance than facebook, for example. That doesn't help if the police come after you for not using facebook--but it might prevent Mark Zuckerberg from figuring out how to manipulate the masses to vote for him as the next president ... or whatever he considers his interest ;-)

Also, I think the internet is actually a pretty good basis for that. In contrast to previous networks, at its basis, it doesn't distinguish between clients and servers, and it is a mostly transparent network with all the intelligence at the edge, so it's technically relatively easy to build your own protocols and applications without approval from network operators.

As for the internet being the real world: Yes, I get where you are coming from, but I think the distinction in this way is still counterproductive, as it supports a narrative that is used by the other side that implies that somehow the internet is special and that therefore human rights don't apply, and that special (and usually more restrictive) laws are necessary, as if laws that forbid fraud, say, for some reason didn't already apply if the fraud was committed via the internet.


I come from the suburbs of Paris. The internet is not the problem.

The internet is expensive, exclusive, inefficient, and not securable. It is the wrong tool for privacy. I took part in building it.

Ideas' main vector is words, language, and their meaning, not the paper they are written on. Education changes the world, because with or without the internet, words get exchanged by humans. All e-learning attempts without someone to mentor have failed.

Getting in touch with people works best where they actually live (in my experience). The internet is a frequently updated map, not the territory, and it has blind spots.

And democracy is not about voting. It is not a system, it is a property that should ideally apply to a system.

A monarchy, a dictatorship, a republic can be democratic, the same way that any geometrical figure can be concave or convex. It is just the property that the interests of "the people" caged in a nation (oops: born, by random luck, citizens of a place) are being "fairly" re-presented.

Native Americans not being represented by their government on their traditional land (or Palestinians in Israel, Corsicans and Muslims in France, Scots in the UK, Uyghurs in China, the Flemish/Walloons in Belgium): is that fair?

Well, I don't know. Fairness, in my opinion, is not an achievable goal but a never-ending, unknown path.

The internet has a lot of answers, but very few interesting questions being asked.


I made a test: strong words have been erased in one of my comments in the past, but it was not an automated blacklist.


Don't conflate misuse of private information with privacy. All the "chilling effects" listed here are implications of the misuse of private information. On the other hand, in a world where we have weapons that can single-handedly wipe out whole civilizations, we need to make sure they don't get used by some lunatic.

From that angle, a tight security apparatus (which may include surveillance), is necessary for our survival. I think we should think more about how we can prevent the "misuse" of private information than preventing surveillance completely.


Well, yeah, how can we prevent misuse of private information other than by keeping it out of the hands of those who might abuse it?

It's simple: You have to create a perfect test that can tell good people from bad people, and then build a police force that only hires good people.

The fact is that there are people in this world who are willing to hurt other people for their own advantage. You cannot fix that by giving huge amounts of power to some more or less random selection of the general population. There is no silver bullet, and trying to construct one nonetheless tends to end badly.


"Our digital lives are an accurate reflection of our actual lives"

Which of course presumes we have a digital life, and which of course has been proven repeatedly to not be the case. It is also not accurate.

Take data warehousing companies who are profiling home IP addresses and hoovering up any digital breadcrumbs people leave behind, like user agent strings, length of time spent on a page, any previous cookies stored locally on the machine: an enormous store of value for anyone who decides to purchase such information, except for the fact that it has no value.

The 'info' exists without any context, and could even be poisoned by a small portion of users who decide to stuff the system full of disinformation to control market share or lobby for certain products.

Also - IPV4 addresses (now more than ever) can be attributed to several hundred people because ISPs grant a subnet to multiple customers.

This is not saying everything's fine and our digital doppel is a fuzzy haze of nonsense. But it does say that privacy advocates are apt to overestimate how accurate such information is, and that the people who buy such information are finding out this too and have probably decided to pay more to other collection points to get a finer-grained doppel of some person.

I say let them spend more, but I will cry tears of joy when I find that money has been ill spent too and doesn't accurately portray a person digitally.


The problem is people who have never been on the wrong side of an encounter with law enforcement have no clue about why all these "super tools and powers" given to police, TSA, FBI etc. are so incredibly dangerous.

They just watch the news and get fear-mongered into thinking, oh, we'd better arm the hell out of our "protectors".

Usually the first experience with abuse by police is enough to wake people up but if they are white and middle-class, such an encounter may take years to happen.


Why don't privacy proponents go all in and just ask to get rid of the internet? Surely, privacy is easier to maintain when communication is inefficient.

Perhaps that's because privacy is actually an archaic and backward idea that maintains all of our problems alive. I can't think of a less progressive (more conservative) idea than privacy.

The next most important revolution in human history will be our transition to a completely transparent society.


> Why don't privacy proponents go all in and just ask to get rid of the internet? Surely, privacy is easier to maintain when communication is inefficient.

Why don't airbag proponents go all in and ask to get rid of cars? Surely, vehicle accidents are easier to prevent when transport is inefficient.

> Perhaps that's because privacy is actually an archaic and backward idea that maintains all of our problems alive. I can't think of a less progressive (more conservative) idea than privacy.

Perhaps it's because it's the one thing that maintains all of our well-being?

> The next most important revolution in human history will be our transition to a completely transparent society.

How do you know that?


How do we get to a completely transparent society without massive collateral damage? I understand the rationale for the goal but the transition is so very difficult. I wish more thought was being put into it.


My only concern at the moment is to get people to realize that privacy is not good. Only when people understand that our goal is transparency can we come up with a strategy to make the switch.

Considering that we need to get to a transparent society, contributions to the privacy movement only ensures that the transition will be even more difficult and violent.


Have you written anywhere about how you arrived at such a radical position?

(I mean radical in the traditional sense, not as a slur)


I don't see how it's radical. It's the only logical conclusion.

Many people have written about this. I haven't read a single convincing argument in favor of privacy as anything other than a defensive measure.


So no? I don't care about the labeling. I want to better understand the reasoning. Or at least, what pieces of information make it so obvious.


I'm of a similar (although not identical) opinion to miguelrochefort. The short answer is... if you imagine a utopia, do you imagine lots of secrets or lots of openness and acceptance?

The reality is that I came to this conclusion through a long process but I'm pretty tired. The process definitely considered whether the 'private' version of the world was even possible. I don't think it is. Surveillance will happen. Better we accept it and keep an eye on how it's used than pretend like we can prevent it in the long term. Even if you were able to discourage the ubiquitous 'high tech hackers' and 'big data' forms of surveillance (which I don't think you'll be able to do), bribes and drones will continue to be used for the powerful to get what they want.


How do bathroom doors fit into this?

In your utopia, if you ask me a question, do I have to answer honestly? That people would always want to answer honestly is not a satisfying answer.


Bathroom doors are great. Let's not confound surveillance of interactions and privacy in the extreme sense.

Bathroom doors allow you to interact with yourself with some level of privacy. People know you're in there. They know for how long. They know if there are extreme sounds or scents. If you misuse the plumbing or other bathroom features there is clear evidence of this. In rare circumstances, the bathroom door can be kicked down while someone is using the bathroom.

The most common result of such information is "are you ok darling?".

Bathroom behaviours are interesting because they provide a real case study on the impact of acceptance on privacy. On a first date, where you're not presenting the real version of yourself, you don't want the other party on the date to know anything about the events while you're in the bathroom. Over time, if the relationship progresses, the secrecy around these events changes. The reason for this secrecy changing, I believe, is trust/acceptance. Over time you know that if the other person learns more about the bathroom events, it will not change how they perceive you. If you're in such a relationship, it also doesn't mean you won't use a bathroom door, and it doesn't mean you don't want your partner to use one.

Hopefully that analogy makes sense. Most privacy advocates have a list of 'secrets' they fear will be used against them. I think we're more likely to see a world where this fear is addressed through improved respect of differences, compared to a world where suddenly surveillance is effectively impossible.


Excellent comment, but I lost you at:

> Most privacy advocates have a list of 'secrets' they fear will be used against them.

What makes you say that?


Why do I think privacy advocates have a list of 'secrets' they fear will be used against them? Because most of them are intelligent people who understand how society works. Society today is full of lies, secrets, and hypocrisy. If someone knows your secrets but you don't know theirs, you're probably vulnerable.

I don't think privacy advocates, and persons such as myself, differ in opinion on the problem today. The difference of opinion is on where we're trying to get to and how we get there.

Tbh, I don't know yet how to get to the end state I'd like to see.


Ok, so you are making an assumption there. I don't think your assumption holds; it is entirely possible to be a privacy advocate and at the same time not have any crucial secrets worth keeping.

Privacy is a good thing; whether you have secrets worth keeping or not is immaterial, since privacy doesn't have anything to do with secrecy. The two are often conflated, but if you look a bit longer you'll see that they are in fact orthogonal concepts, only very loosely related.

Taking your bathroom example: there is obviously nothing secret about what is going on in that bathroom, you can infer most of it from your own experience. And yet, we do seem to feel the need for privacy.

Another example would be a diary. Diaries are intensely personal and our etiquette around them is that if you happen to come across someone's diary that you do not open it to read it. It is considered a private document, even if it will not contain any secrets it may contain thoughts that the writer does not want to divulge to the world at large.

So privacy does not require any secrets at all to be a very important thing to many people, including privacy advocates.


We're into semantics here. Privacy provides a way to hide detail, to prevent confirmation, and to allow people to deny things. For simplicity's sake I consider these, or more broadly anything protected by privacy, to be 'secrets'.

I don't think the word 'crucial' that you added is useful here.


Such simplicity is lossy, and in this case the loss is crucial, hence that word. That's why sometimes (not always, I'll give you that) semantics matter.

Especially when not seeing that distinction might cause one to state something that is either not true or that inadvertently allows re-framing the discussion in ways that hamper progress.


I don't know that bathrooms need doors.

If you don't answer honestly, then people won't consider your answers to be reliable, and therefore everything you say will have less value.

Imagine a person that's completely transparent and honest. You can't find a more predictable and reliable person. That's something you can trust and rely on. How could you dislike such a person? Honesty is beautiful.

We live in a society where there's a race to the bottom when it comes to openness. We are all ashamed; we lie and cheat. Full of insecurities, constant worries. Who would like that?

This leads to a society where appearance is more important than substance. You go to well-rated schools not to learn and get better, but to impress your future employer and get prestigious jobs. They hire you for the same reason (it's safer to justify hiring Ivy League students to your boss). The best way to get a job and climb up the ladder is not to work on your skills, it's to bullshit well. All because we're used to seeing the perfect side of people (they get to choose what they show you). That's just disgusting.


I could have been clearer. What if I simply decline to answer? That's essentially me keeping the answer private, so my query might have been better phrased: must I answer any question asked of me?


I can't seem to reply to your last comment. So I'll answer here.

I don't favor coercion, so nobody is going to be forced to say or do anything.

Refusing to answer would be like refusing to help someone who requests your help. Considering that answering the question is relatively inexpensive, you really don't have a reason not to do so. Unless you're hiding something, which will make people think of you as selfish.

I wouldn't trust a person that is selective when it comes to answering questions. They'll only say the truth you want to hear, which will contribute to bias and poor understanding of reality.

Moreover, it's just plain simpler for a person to systematically tell the truth. Seriously, the mental burden of remembering lies and keeping secrets is probably extremely underrated.


I don't really see much support for radical transparency here. You are saying it will be great because it's great. You need arguments that are convincing to someone who thinks you are starting from nonsense.

If it helps, I tend to be frank/blunt to an uncomfortable extent. I sort of try not to because people find it off-putting, but I also sometimes don't notice I'm doing it. So it's not like I am in opposition to what you are saying, but I don't necessarily see that it would be so great.

(re being able to reply, there's a speed bump built into the site, if you click the time stamp through to the individual comment page you can always reply there)


The people of the U.S. lost it because they allowed this to happen by "not caring about it". And the whole "privatize everything" approach is bollocks. Look at what the costs of health care and college education, and most importantly what the cops, have become. Health care and education have to be free if the people in a society are to remain intelligent. Even on the economics alone, providing good health care for free would cost much less than it currently costs. The police severely lack the accountability they much need. Read the original article linked here. Think about it.


OK, so now David Chaum is proposing PrivaTegrity.[0] It's "meant to be both more secure than existing online anonymity systems like Tor or I2P and also more efficient". But it includes a "carefully controlled backdoor that allows anyone doing something 'generally recognized as evil' to have their anonymity and privacy stripped altogether". Just exactly how the bloody hell can a backdoored design be styled as more secure than Tor and I2P?

There's no fool like an old fool, as they say. Sad :(

[0] http://www.wired.com/2016/01/david-chaum-father-of-online-an...


Many would trade their privacy for what they think is safety.


And they're more than welcome to trade their own privacy, although I think it's unwise. The trouble is when they trade my privacy.


Many have done things that were commonplace in the past and are not anymore; or are plain illegal; or were found to be wrong. Sustaining something just because many people still do it doesn't really seem reasonable.


This read changed my mind. Even though I have nothing to hide, I need to care about my privacy.


Bravo!

I've been making the chilling effects argument for several years:

http://www.dbms2.com/2013/07/08/privacy-data-use-chilling-ef...

http://www.dbms2.com/2013/07/29/very-chilling-effects/

http://www.dbms2.com/2015/06/14/chilling-effects-revisited/

This article makes it with reasonable, appropriate breadth.


The reasons given are largely consequentialist. There are deeper philosophical reasons why privacy is important in this context. The essential reason concerns the proper relationship between the individual and the state. Surveillance and intrusion violate that proper relationship and establish an improper one. In other words, to justify such a relation, and thus the intrusion, one must hold concepts of state and individual that are anti-individualistic and place the state above the individual. The undesirable effects follow. To borrow Koneczny's terminology, a surveillance state is a move away from Latin civilization, perhaps towards Byzantine civilization.


(Apologies for the blatant plug)

On this subject, if anyone is interested, we just launched a free, open source, Android mobile app to help people manage the complex issues of digital and physical security. It's got simple lessons on everything from sending a secure mail to dealing with a kidnap.

https://play.google.com/store/apps/details?id=org.secfirst.u...


In a recent study I read, millennials overwhelmingly didn't think that freedom of speech should be upheld, so why would they value privacy?

Many people have already gotten in trouble or been fired based on private conversations. Most people didn't care about the privacy implications, because of personal feelings.

I feel like our civilization has gone backwards: online mobs determine guilt and the ends justify the means.


It's the government that has something to hide, and it's them I don't trust, which is why privacy is important. I wouldn't want the government searching my home on a whim, nor my digital content, because they're the ones with the track record of planting evidence, seizing assets, and perpetuating falsehoods for their agendas.


Is there any polymorphic encryption toolkit that could enable a custom encryption method tailored for your own use? Kinda like security by obscurity: encrypting your own important files by deviating from the established standard encryption schemes in order to overwhelm Mallory's/Trudy's reverse-engineering capabilities?
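A sounder cousin of this idea is cipher cascading: layering several independently keyed transforms so an attacker has to break every layer, rather than relying on the obscurity of any one scheme. Here's a toy Python sketch of the composition (all names are mine; the SHA-256 counter-mode keystream is a stand-in for illustration only, not a vetted cipher):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256.
    Illustration only: a real cascade would use vetted ciphers
    (e.g. AES-CTR, ChaCha20) with independent keys per layer."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cascade(msg: bytes, key1: bytes, key2: bytes) -> bytes:
    # Apply two independently keyed layers. XOR layers are
    # involutive, so the same call both encrypts and decrypts.
    layer1 = xor_bytes(msg, keystream(key1, len(msg)))
    return xor_bytes(layer1, keystream(key2, len(msg)))

ct = cascade(b"secret data", b"key-one", b"key-two")
assert cascade(ct, b"key-one", b"key-two") == b"secret data"
```

The point of the composition is that the cascade is at least as hard to break as its strongest layer (given independent keys), which is what "deviating from the standard" can buy you without betting everything on obscurity.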


There are so many unproven ideas and theses in the first (argument-making) sentence alone.

Maybe privacy is a right granted by someone. But do I need to fight for all rights? Can I not trade some for others? E.g. I don't want to live in a completely free market, because in a completely free market thieves and bullies always win. In our real-life markets they at least need a lawyer's license first, which stops some of them from succeeding.

Does it really underpin freedom of expression? Is complete freedom of expression something that is worth fighting for, something people want? Look at something where expression is nearly completely free: Clothes. Most people tend to wear what other people wear.

Free society. What is that? Why do I need it?

Democratic society. Doesn't the current global development show that democracy is failing us? Every system comes to an end, and democracy is certainly past the top of the hill.

I stopped reading after that sentence. If a blog post doesn't even think about the nuances of what they are talking about, there won't be much content anyways.

Just a little side note: I have been to China three times now. Many people consider China very unfree, very undemocratic. But I see people there having more hope, more optimism, and more opportunity to develop their dreams than we have in the West. And the mobile internet is developing bigger and faster there than anywhere else, despite nearly everything running via one mobile, government-observed app: WeChat. In some regards I wonder if it is really "despite" government control or "because" of it.


Don't trust someone who says "I have nothing to hide". And certainly don't trust them with your secrets!


There are very few topics where I just cannot get the point of the discussion among smart people, but this is one of them.

Look at the real world NOW, 15 years after all the surveillance. You can still set off bombs and kill people in the middle of a European capital without any encryption at all. Is this the kind of surveillance you are afraid of?

If you want to hide something, there are infinitely many ways to do it. No surveillance can (or ever will) read one-time-pad-encrypted communication. So you have (and always will have) the freedom and the means to hide — what's your problem?
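The one-time pad claim holds: with a pad that is truly random, kept secret, as long as the message, and never reused, the cipher is information-theoretically unbreakable. A minimal Python sketch (function names are mine):

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """XOR the message with a truly random pad of equal length.
    If the pad is random, secret, and never reused, the ciphertext
    reveals nothing about the plaintext."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so decryption reapplies the pad.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ct, pad = otp_encrypt(b"meet at noon")
assert otp_decrypt(ct, pad) == b"meet at noon"
```

The catch, of course, is key distribution: the pad must be shared over some channel the watchers can't see, which is exactly what surveillance targets.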

Arguments like "well then show me your bank account" are just plain stupid: I have no interest in sharing this information with my work colleagues, my neighbours or my friends, simply because it would have implications in some social aspects (it's not about security!). But this information is only sensitive in the context of a personality. I'd neither have a problem showing it to a random stranger, nor would I be interested in this information coming from a random stranger.

If somebody uses my information in an unethical way, the problem is not surveillance, but that it's possible at all.

Exposing my personal data to a government during an investigation could also protect me by verifying my alibi. We have nothing to hide, right?

The comparison with free speech is ridiculous. Free speech is the opposite of hiding and doesn't imply breaking the law. Hiding implies playing by rules other than the commonly established ones. Free speech is important because eventually I might have something to say. But no one would ever agree that he or she will eventually have something to hide (without becoming a criminal).

So I'm still missing the point...


Yes, because you assume that the people who are collecting your data are ethical and secure, and will always share your opinions and beliefs.

Say your bank account information was stolen and someone used it to blackmail you, because, as you said, it would have "implications in some social aspects".

Say the country you live in converted to a religion you are not part of and don't want any part of. Imagine if they had a record of your beliefs and used it as a handy tool in mass genocide.

Say someone working for the government or a start up was jealous of something you had, and used their access to take your information in order to discredit you.

Say your medical information was sold to insurance companies by a fitness start-up that went bust, bumping up your premium.


If we assume that, by default, people are unethical, governments are corrupt, and most people in power are criminal, then you're f*cked anyway. With or without mass surveillance.


We don't assume anything by default, we just make an empirical observation of how people behave, and then act accordingly. The empirical observation is that there are people who are unethical, corrupt, what have you. And also, the empirical observation is that there exist certain group dynamics that make certain societal developments very hard to reverse. That is why it seems like a very good idea to avoid putting too much power into a single person's hands (in case it turns out to be one of the bad apples, or in case the power is delegated to a role rather than a specific person, in case one of the bad apples ever gets into that role), and to try and avoid the kinds of developments that tend to end badly.

Nobody says that _all_ people are unethical, or _all_ governments are corrupt, or that _all_ people in power are criminal. But rather, that being part of a government or having power does not prevent people from being unethical or corrupt. Bad people are generally a minority, but they do exist. That is one reason why we have government and police and military in the first place. But there is nothing that necessarily prevents bad people from becoming part of government, police, and military. That is why it is important to limit the power of those institutions. To limit the damage that bad people inside them can do. And also to limit the appeal to bad people wanting to become part of them. That's essentially the whole point of democracy and the separation of powers, BTW. It's a security mechanism that protects you from bad people in power - not because all people in power are bad, but because occasionally bad people manage to get into powerful positions, and that tends to end badly.


The potential for harm is greatly increased with mass surveillance. If you can accept that, you're getting closer to seeing the problem.


The most concise description of what you're missing is "chilling effects".

In a hardcore surveillance society, almost any innocent act can be harmful to one. If government, for anti-terrorism reasons, has a rather complete picture of our lives, then we need strong safeguards against how they can act based on that picture.

And the same goes for the private sector, which might form that picture for business reasons.


"I have nothing to hide, and what I am keeping private is encrypted by a quantum- and $5-wrench-method-resistant encryption."


My goal is more modest, to fix the problems with posting using real names if possible (and not by using real name policies).


Truth prevails; it is not mass surveillance but mass data manipulation we must be concerned about now.


What an irony that the link is missing the "s".



