We expect professionals to behave ethically. Doctors and companies working on genetics and cloning for instance are expected to behave ethically and have constraints placed on their work. And with consequences for those behaving unethically.
Yet we have millions of software engineers working on building a surveillance society with no sense of ethics, constraints or consequences.
What we have instead are anachronistic discussions of things like privacy that seem oddly disconnected from 300 years of accumulated wisdom on surveillance, privacy, free speech and liberty, as if to pretend the obvious is not obvious and to delay the need for ethical behavior and introspection. And this from a group of people who have routinely postured extreme zeal for freedom and liberty since the early 90s, and produced exactly one Snowden.
That's a pretty bad record by any standards, and indicates the urgent need for self reflection, industry bodies, standards, whistle blower protection and for a wider discussion to insert context, ethics and history into the debate.
The point about privacy is not you as an individual; no one cares what you personally are doing, so the individual perspective has little value here. The point is building the infrastructure and the ability to track what everyone in a society is doing, and to preempt any threat to entrenched interests and the status quo. An individual may not need or value privacy, but a healthy society definitely needs it.
No, we don't.
We have probably a few hundred doing hard-core surveillance. We have another few thousand functioning as enablers by making social media and ad networks really attractive. We have a whole lot of non-engineers insisting on placing ads and tracking on their websites.
And then there's the mass bulk of software engineers that have nothing to do with it, and nothing they do will stop it.
If 50% of doctors decide to stop doing something, it gets noticed. If 99% of software engineers decide to take enormously strong stands against surveillance, even at great personal cost, surveillance continues as if nothing happened, except maybe those who work on it get paid a bit more to make up for the decreased supply.
It may, in that weird 20th/21st century fashionable-self-loathing way, feel really good to blame the group you're a part of, but basically what you're proposing won't do anything at all. You're imputing to "software engineers" in general abilities they don't collectively have. You've got to attack it at the demand level, you will never be able to control the supply. This also matters because if you waste your energy with that approach, you might decide you've done something about the problem and stop trying when in fact you've done nothing.
I work in the greater D.C. area. Within a 150-200 mile radius, there are literally tens of thousands of developers working directly on surveillance. Probably even more. How do I know this? From random sampling: go to any tech event, talk to any program manager at any government contractor. The work and the money are in surveillance.
And, that's just government surveillance. All that tech is then spilling over into corporate surveillance. Location and behavioral tracking is big money. How do I know this? Because, sadly, that's how I have to make my money. The problem is that there's always another grunt like me willing to create the systems that enable this.
The solution: Use all of this surveillance tech and data to expose all of the VIPs. Publicly post where they are and where they've been, who they've been with, what they read, and what they buy. You do this and laws will be created pretty quickly.
Everyone knows how the invasion of Iraq was a complete mistake. Has someone gone to jail?
The public is not going to shut down anything they are wholly complicit in and benefit from. Which is why empires eventually fall.
I'd like to think that'd be the case, but consider one of the more-recent privacy intrusions with "The Fappening" ... very little became of that, despite the wealthy, high-profile individuals involved. I realize they weren't the politically connected, but they were certainly what society considers "VIPs".
This has happened in the past, and the reaction from the individuals involved has been to do a complete 180 on their opinion of surveillance (there was a recent post with sources, but I don't have it handy). This could work.
Laws will be created pretty quickly, but only to protect VIPs.
Yup. And the magic of digital content (software being one kind of it) is that it's infinitely copyable. It takes one guy to write a surveillance package and open-source it, or have their company sell it, and everyone can use it.
It's not engineers who make the decision to use surveillance technology. Hell, for most of the work a software engineer does, most of the data coming from surveillance tech doesn't even matter.
One thing which became apparent to me when I began to focus on this issue is that there are countless services which provide services to other services, all of which have some degree of access to upstream customer data. For example, if you send logs to a hosted logging service, some of your customers' data is sent to them. If that service uses AWS, then data is sent to Amazon too. And so on.
Arguing that efforts to make things better are pointless is a very dangerous thing to do, assuming we actually want things to be better. Cognitive dissonance is a powerful force, especially when there are startups to be built!
Sure, it turns out using centralized web services has helped the government with things such as PRISM, but that doesn't mean we should blame people for those development practices rather than the government.
Prior to PRISM, pretty much any reasonable person would assume that the blobs you store in S3 aren't going to be looked at by anyone or, worst case, that metadata will be seen by AWS employees for debugging.
What we have done is make things a ton better for developers; we can build things more quickly and easily, which empowers society and humanity. The fact that this has incidentally contributed to a surveillance society, with no intent from the developers and in a way you wouldn't reasonably expect, does not make the developers culpable.
It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?
The right thing to do here is to call for better use of encryption where possible and, for surveillance issues, to rein in the unreasonable government programs that make this practice result in such problems.
I beg to differ. Where you concentrate power, you have to expect abuse.
> It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?
That's a false dichotomy. Yes, I want to go back to people running their own hardware with their own databases. And to have that work as easily as your favourite cloud service. There isn't anything inherent in running your own hardware that requires that to be a major burden.
Sure there is. Climate control, redundancy/backups, and power consumption/reliability, to name a few, are all concerns that we get to delegate to "the cloud," that are 100% "inherent in running your own hardware."
I applaud your usability argument, but there are most certainly inherent burdens to running your own hardware that don't exist for cloud services.
> climate control

A 10 watt server doesn't need climate control.

> redundancy

Is mostly a matter of software.

> backups

Is also mostly a matter of software. With some simple peering mechanism, you can store backups on your friends' servers (and they on yours). Though a standardized pure backup storage API for cloud storage of encrypted backups at one (or more) of a multitude of providers might be a useful option to have.

> power consumption

Is a matter of plugging a plug into a socket in the wall.

> reliability

Is also mostly a matter of software.
Now, I am not saying that running your own datacenter is no work, but running a server or two for your personal needs or for the needs of a small company should be possible to make almost a no-brainer.
There is no technical reason why you shouldn't be able to buy a bunch of off-the-shelf mini-servers for a hundred bucks or so a piece, peer them by connecting them with an ethernet or USB cable or whatever might be appropriate, and then connect them to the internet wherever you like. They would automatically replicate their data among each other, allow easy installation of additional services via a web interface, handle software upgrades automatically, and let you rebuild the state of a broken server by connecting a new one and clicking a few buttons in the web interface. There are many ways to solve the details, but my point is this: cloud providers also don't employ one admin per machine; they automate the management of their machines to make things efficient. There isn't really any reason why much of the same automation (which is mostly software, after all) shouldn't be usable on decentralized servers in people's homes.
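The peering-and-replication idea can be sketched in a few lines. This is a toy model, with in-memory dicts standing in for the boxes and all names invented here; a real system would need encryption, conflict handling and an actual transport, but the core loop really is this small:

```python
# Toy sketch: each "mini-server" keeps a content-addressed store, and
# peering just means exchanging whatever blocks the other side is missing.
import hashlib

class MiniServer:
    def __init__(self, name: str):
        self.name = name
        self.store = {}          # block hash -> block bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.store[key] = data
        return key

    def sync_with(self, peer: "MiniServer") -> None:
        # Pull blocks the peer has that we don't, and push the reverse.
        for key in peer.store.keys() - self.store.keys():
            self.store[key] = peer.store[key]
        for key in self.store.keys() - peer.store.keys():
            peer.store[key] = self.store[key]

a = MiniServer("living-room")
b = MiniServer("office")
photo = a.put(b"holiday photo bytes")
note = b.put(b"shopping list")
a.sync_with(b)                   # one round of replication
assert a.store == b.store        # both boxes now hold both blocks
```

Content-addressing (keying blocks by their hash) is what makes the sync trivial: identical data deduplicates itself, and a set difference on keys tells each side exactly what to fetch.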
This argument is in conflict with itself. It rationalizes diminished trust in how data is handled as the price of saving time and cost on the application's infrastructure, but then assumes the data meets acceptable trust requirements for a given customer. The fact is, you can't speak for my trust levels, which is exactly what is being discussed in the link.
I get to say what trust levels I want for my software and data. Not being able to use the software because I can't trust it is an unacceptable proposition, so I challenge our abilities to build something better than what we have today, and do so without rationalizing why we aren't building it.
I have hardware in my pocket that is hundreds of times more powerful than the first web servers I ever worked on. There is no technological reason why that same hardware couldn't be used. I'd love to have a PAN based around my phone (which is far more local to me than much of the "hardware with their own databases" that I've ever worked on). Federation to Facebook/Google/Instagram/whatever the next big thing is would be amazing. And the reason it hasn't happened, even though powerful hardware is everywhere, isn't a lack of technology.
Programmers are just a loosely-defined group of tinkerers, labourers, and the odd scientist or engineer. How do you expect to impose a structure on that? A teenager can tinker around with software in his bedroom and nobody gives a damn. If he were to conduct medical experiments on his little sister, on the other hand, he'd go to jail. That is the difference.
Programmers (as individuals) can't be ethically audited, but what we can do is regulate the data which is allowed to be collected. You regulate it like any other industry. Sigma-Aldrich is a company that sells pharmaceutical-grade precursors. I was dating a girl doing a post-doc in o-chem; while waiting in her office for her to finish something up, I flipped through their catalog. I saw a precursor, heavily flagged by the DEA, which could be used to synthesize massive amounts of a recreational drug. Curious, I asked her the procedure for procurement, and she delineated it. In short, she could get it fairly easily with a sign-off from the PI and a few other things [she would never do that, she's far too ethical, but her PI was famous enough that a request on his letterhead with "Veritas" on it would have been enough], but there's a chain of custody and an auditing system, just like there is with doctors who are issued DEA numbers. If I called up S-A and asked for the same chemical, not only would I be laughed off the phone, they'd likely submit my information to the DEA to flag me for further investigation.
What am I getting at? You can't regulate people, but you can regulate systems. If that precursor was ordered and that drug happened to pop-up, the DEA could easily call up any of the suppliers of those precursors and figure out when it was dispensed fairly easily. We need to regulate any institution that collects data in the same way. When it's at a point where the institution is large enough to collect information at a level like that, issue compliance terms. In the same way publicly traded companies have to release financial information to the SEC and comply with numerous reporting terms (look at EDGAR to see how extensive it is), open up another branch of the government that is in charge of regulating the companies that collect data. That way, your engineer with loosely-defined morals who is capable of doing whatever will be prosecuted just like amoral doctors.
I feel like this is too wide. Everyone collects data. I don't mean all tech companies collect data, I mean, for example, your friends have copies of the emails you've sent them. They have photos with you in them of places you've been with timestamps and GPS coordinates. Your coworkers have access to your calendar. Your mechanic has the service history on your car. Your librarian knows which books you have checked out.
These aren't problematic situations because they each only have a little piece of your data, and you trust each of those people with that little piece; and if you don't, then you don't have to give it to them.
The problem is when you don't have that choice, which is what happens when you're dealing with a government or a monopoly (or some other concentrated market where you can't trust any of the players). You can't reasonably choose not to have your location collected by your mobile carrier, or by the traffic cameras in front of your home. And if all your friends use Facebook, then Facebook is effectively unavoidable too.
But we don't really want to regulate Facebook. I mean holy cow, what is that even supposed to look like?
I think we can separate the problem into two pieces. The first is collection by, let's call it, unavoidable monopolies. Telecommunications carriers and other utility companies. This is where we know exactly what to do, because these entities should not be collecting any information about people at all. There is no reason Verizon needs to know anything about you other than whether you've paid your bill. So regulation here can be useful, e.g. make it unlawful for carriers to triangulate a cellphone's location without a warrant, or collect anything whatsoever about the contents of IP packets. But we also have a strong technical solution here. Encrypt all the things. Fully deprecate HTTP in favor of HTTPS. We need to build, for example, DNS query privacy. Things like that.
The other part of the problem is what you might call avoidable monopolies. There is no fundamental reason why Facebook has to be as centralized as it is. You have a phone which has all your photos on it and is connected to the internet 24/7. Why is there a copy of your photos on Facebook's servers? If one of your friends wants to see one of your photos, why are they not getting it directly from you? Then you don't have to trust Facebook with a copy of it. So the solution for this half of the problem is, disintermediate the avoidable monopolies.
> Why is there a copy of your photos on Facebook's servers? If one of your friends wants to see one of your photos, why are they not getting it directly from you? Then you don't have to trust Facebook with a copy of it. So the solution for this half of the problem is, disintermediate the avoidable monopolies.
It's because decentralization like that is stupidly, stupidly inefficient. Not to mention that the assumption that your phone is actually on-line 24/7 is unrealistic, and that's before we notice we're not on IPv6 yet, or that people also use cameras, or that they change their phones, go out of service range or simply want to free up space on SD card for something else.
So the fundamental reasons are a) efficiency, and b) availability. That's not to say things couldn't be improved wrt. privacy. I don't know that much about crypto yet (that's about to change, for work-related reasons), but I vaguely recall that there are encryption schemes that would let only you and your friends access the data stored on third party servers, and that would make the data unreadable for said third party.
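Such schemes do exist (the general idea is client-side, end-to-end encryption under a key shared only among friends). A toy illustration of the shape of it, with all names invented here and deliberately NOT real cryptography (a real system would use a vetted library such as NaCl, not a hand-rolled cipher):

```python
# Toy end-to-end encryption sketch: data is encrypted before upload with a
# key shared only among friends, so the hosting service stores bytes it
# cannot read. SHA-256-in-counter-mode XOR is for illustration only --
# do NOT use this construction for anything real.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR stream ciphers are symmetric

shared_key = b"secret shared with friends out-of-band"
nonce = b"photo-0001"  # must never be reused with the same key
ciphertext = encrypt(shared_key, nonce, b"my private photo")

# The third-party server only ever sees `ciphertext`.
assert decrypt(shared_key, nonce, ciphertext) == b"my private photo"
assert ciphertext != b"my private photo"
```

The point is the trust boundary, not the cipher: the key never leaves the clients, so the storage provider holds data it cannot interpret.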
Disagree. If you're Netflix wanting to distribute Jessica Jones then you want something like a CDN (although in that context BitTorrent is also "something like a CDN").
But think about wanting to share photos with your friends. There are only thirty people who actually want to see the photos. Twenty five of them live in the same city as you, which makes direct connections to you about as efficient as a local CDN node, and the other five live in four different cities, so in all but one case there is nothing to be gained from caching in any of those places because there will only ever be one copy requested. In that one last case the CDN would conserve just one long-distance copy, and that's assuming we can't make P2P software smart enough to have the second person in Timbuktu get the photos from the first person there.
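Putting rough numbers on that scenario (30 viewers, 25 local, 5 spread over four remote cities, so one remote city has two viewers):

```python
# Count long-distance transfers for the photo-sharing scenario above.
viewers_per_remote_city = [2, 1, 1, 1]   # five remote viewers, four cities

# Direct P2P: every remote viewer pulls one copy from you.
p2p_long_distance = sum(viewers_per_remote_city)

# CDN: each remote city's cache node pulls exactly one copy.
cdn_long_distance = len(viewers_per_remote_city)

assert p2p_long_distance == 5
assert cdn_long_distance == 4
# The CDN saves exactly one long-distance transfer -- and P2P software
# smart enough to fetch from a same-city peer erases even that edge.
```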
> we're not on IPv6 yet
This one is probably the main reason why this hasn't actually happened yet, but it's not like we don't know what to do -- how about we get on IPv6 already?
> or that people also use cameras
You seem to be implying there is some reason why a photo taken with a camera couldn't still be distributed using a mobile device (or plug server or PC or whatever you like).
> or that they change their phones
And then they can copy the stuff from one to the other.
> Not to mention that the assumption that your phone is actually on-line 24/7 is unrealistic
Availability is a different tack. OK, your phone doesn't have twelve nines of uptime, but it probably is actually online upwards of 90% of the time. And we know how to build reliable systems out of mostly-reliable pieces.
We're assuming that there is a piece of software on your device which already knows who your friends are. So now it just needs a check box that says "cache things for my friends if they cache things for me" and now your friends can get your photos from your other friends (or from their own device) even when your device is occasionally incommunicado.
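The math behind that check box: if each device is independently online with probability p, the photo is unreachable only when your device and all k friend caches are down at once, so availability compounds quickly.

```python
# Availability of a photo replicated across your device plus `caches`
# friend devices, each independently online with probability p.
def availability(p: float, caches: int) -> float:
    return 1 - (1 - p) ** (caches + 1)

assert round(availability(0.9, 0), 3) == 0.9    # your phone alone
assert round(availability(0.9, 2), 3) == 0.999  # plus two friend caches
```

Two friends caching for you turns a 90%-uptime phone into a 99.9%-available photo, which is the "reliable systems out of mostly-reliable pieces" argument in one line.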
> or simply want to free up space on SD card for something else.
I think there's a law of physics that says your photos, to exist, have to exist somewhere. I suppose "I would rather give my private data to Facebook than buy an SD card big enough to hold it" is the sort of thing you have to decide for yourself.
Say, magically, a bill or resolution is introduced to the floor and not stomped on immediately: enforcing it internationally would be about as difficult as enforcing an international oil embargo or a ruling by the ICC (i.e., nearly impossible; you don't see any proceedings against Cheney or Rumsfeld for war crimes in the Hague, now do you?). Domestically, however, the US has (or had, historically, from roughly 1930 until the mid-90s) the economic and political influence to enforce its agenda fairly effectively. The new US government entity would have to have the intent to limit data collection, then exhibit the willingness to penalize institutions for violating those data-collection policies (e.g., similar to an FDA fine issued to a multinational drug company with a presence in the US).
Again, there are too many financial interests opposed for this to happen, but refusal to adhere to the legislation would mean (in theory) loss of US business, which would be catastrophic for most industries. HackerNews user grellas is (or was; I haven't seen him post in a couple of years) an attorney specializing in tech affairs who'd be able to give a better response, but from a strictly political POV, even domestic legislation limiting data collection would never occur.
( http://acm.org/about-acm/acm-code-of-ethics-and-professional... )
The government can get your gmail, facebook, verizon, amazon data because those companies keep that data about you. The NSA doesn't need to spy on you, google already does. I don't think the NSA is reading my email, but I know Google is.
Not to mention that when all these tech companies are spying on you for profit, your privacy is already destroyed.
They understand fully that their data is collected and they expect nothing less than the top result of their Google, Amazon, and Facebook queries to match exactly what they are looking for.
Does anyone remember that angry email they sent 5 years ago where they were criticizing their boss? Google does. What kind of profile can you build from thousands and thousands of such emails, messages and queries, and location data and pictures, videos, actions on social networks?
I think some companies have a better idea about who some people are than those people themselves.
Good companies use information they collect to provide better services. Bad companies use it to rip people off. The problem of bad companies doing bad things is independent of companies having information about people.
> ... the survey reveals most Americans do not believe that 'data for discounts' is a square deal.

> ... Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. The study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened.
But more importantly: sure, there are people who think they deserve to be mistreated, there are people who are drug addicts to the point of barely being anything else and still would fight anyone who gets between them and their dealer, and of course there are plenty of people who have no problem with all sorts of messed-up things, up to and including murder, as long as they themselves are not on the receiving end. Yet even if 99% of all people regressed to that station, it wouldn't diminish my own human rights one bit. That some or even a lot of people are fine with certain things, whether they understand them "fully" or, as I find more likely, not in the least, is the problem, not the solution.
Driven to the extreme: the right of people to do what the White Rose did will always outweigh the right of people not to be part of the White Rose. It's dissidents and persecuted minorities who define the boundaries of these things, not the people living in comfort in exchange for not standing up for anything or against anyone. They exist, and their opinion matters as a problem to be solved or worked around, but that's the extent of it. Some things cannot be justified by people agreeing to them; people do not have that power even when numbering billions.
Pretty compelling talk, culminating in:
> I believe there should be a law that limits behavioral data collection to 90 days, not because I want to ruin Christmas for your children, but because I think it will give us all better data while clawing back some semblance of privacy.
Thanks, it was a difficult decision and took me a while to come to, but I knew I couldn't continue working there in good conscience.
It's so important to have activism and structures in place to protect whistle blowers and others not comfortable with our current direction to take this difficult path. Respect!
> One never knows for certain how the fetishization of technology asserts itself in the individual psychology of particular people, where the threshold lies between a rational relationship to it and that over-valuation which ultimately leads to the point where someone who devises a train system to bring the victims to Auschwitz as quickly and smoothly as possible forgets what happens to them in Auschwitz.

Adorno, Erziehung zur Mündigkeit, p. 91 (translated from the German)
But then again, fascination with "the other guys" is also a thing. See: the intellectual world of the West being in love with the Soviet Union well into the Cold War.
They made lots of promises that they never delivered on (or even planned to deliver on).
One of the more interesting ones:
Also, the very positive tone of the historical article was refreshing. I know it's pure propaganda, but still, we could use some positive articles in the news every once in a while.
You know, this is not coming from software developers. There's a group of people out there whose living is made by manipulating the public perception and speech. This group is not the software developers.
However, I'd also like to see general software development think more closely about the role it has in normalising these things. Next time you start to create an account system for your project, ask yourself whether you really need it. Could you engineer around it, perhaps by letting the user store their data, or using a stored key to identify them? Let's go beyond don't store what you can't protect, and aim for don't store what you don't strictly need.
Or companies that deploy AdSense or otherwise depend on companies like Google or Facebook.
And now Microsoft decided they also want a piece of the pie.
The kicker? I see people still defend Google all the time, nowadays with bullshit arguments like "I am tired of this you are the product meme" and still excuse Facebook because they need it to keep in touch with others. And they found startups based on advertising and tracking, they work for them and generally support analytics as an inalienable right of software development.
Pretty much. I don't use Facebook at all, and give in to Google only on technical searches (which DDG still isn't good at), mapping, and when forced to by work (GCE etc.), so you're preaching to the choir, but let's not try for an overnight coup here.
NSA, GCHQ, BSI/BND, etc. aren't the "bad guy" in theory.
It's within a nation's interest to try, within the extent of law and respect for human rights, as thoroughly as it can to know what's going on in the world. Electronic intelligence is part of that, and a growing part.
In practice, the permissive reactions many/most/all governments have to allegations (or proof) that a comms intel agency has broken the law, that's what the trouble is. That these groups have been allowed to break the law or ghostwrite laws that allow them to violate what would generally not be approved by a citizenry, that needs to be addressed.
I'm not sure how ruining the careers of software developers and computer scientists who've worked for these organizations does anything other than remove from circulation some brilliant members of our community.
Ostracize the middle managers, bureaucrats, politicians that allow the trampling of our rights.
But don't arrest the guy designing the home theater system for El Chapo's vacation house and tell me you've taken down the Mexican drug cartel.
Why must we accommodate their subservience? Following orders is no excuse.
Theory is not really relevant when the practical reality is monstrous. The five eyes are not redeemable.
It's easy for us to sit at our desks and churn out our work and be mad. And there's things to be mad about for sure. The wanton disregard for civil liberty and protection is simply irredeemable. And to be sure, I've been a fan of your country's very public responses over the last few years to personal privacy. I hope the US legislature can learn a thing or two.
But it's not just the "five eyes". It's the entire world. Any country with an interest in protecting its sovereignty has some form of information-gathering operation.
When that operation gets big and exposes itself, folks get upset because, yeah, being spied on isn't a comfortable thing. Do some countries go about gathering this information more morally than others? Something tells me we'd have to be in the secret inner sanctums of the biggest opponents to really know, and I think the answer would be "a spade is a spade."
Does it help our countries protect themselves? I honestly don't know.
But I do know that "grey hatting" in the general development community doesn't garner this sort of bile and venom. I don't know why being a grey hat for a government should be treated differently.
If the NSA would like to declassify its employees' job histories in a credible and verifiable fashion, I'd be keen to take this on a case-by-case basis.
It is possible that the NSA is hiring engineers for benign reasons. If that's the case, I assume they are not classifying that work. After all, the government should not be withholding information from the public without cause.
As it stands, it is reasonable, in light of the revelations of the last several years, to suppose that any engineer who cannot reveal their work history with an intelligence agency has performed unethical work. This is what's known as "preponderance of evidence".
>Who are you associated with that I (or society) doesn't like?
What a fascinating question. So many insights into your character. I'm entitled to associate with whom I like, and your seedy, toad-like implication that for even suggesting such a thing as personal accountability I must be a state enemy has the delicious tang of a low-quality KGB-centric TV serial. You know, the kind in which the plucky all-American hero fights back against the oppression of a totalitarian Russian government. It would be funny if global law enforcement weren't looking towards digitally predicted thoughtcrime as a genuine goal.
So speculate freely, because the real answer is terribly dull: I'm a normal citizen, posting my opinions on an internet forum and associating with you.
I don't actually care who you associate with; it was a thought game. You are saying that you are for guilt by association (or at least I assumed you were), which is why I tried to throw that principle back in your face. Jeez, do you need jokes explained to you too?
The fact you think my inability to talk about some secret work I have done means that I've done something unethical is ridiculous.
The evidence we've seen so far says no, it's not ridiculous. Prove otherwise any time :)
Alan Turing helping to crack the enigma code comes to mind.
You're now assuming all intelligence work is unethical? Your childlike arrogance and ignorance are tiresome.
The idea that we could get the majority of the industry to agree on ethics is pretty far-fetched when a large portion think surveillance is making their country safe.
For instance, I find a user control that prevents the user from changing focus whenever the input is invalid to be unethical, or at least severely impolite. It's the equivalent of grabbing someone's face while you're talking to them. Me: "The control you propose is hostile to the user." Customer: "Do it the way we want, or your company loses the contract."
As it turns out, the customer would love to grab someone's face, not just while they talk, but also as they yell, with a light rain of spittle falling gently onto the target's visage. That's because they assume everyone is a complete idiot, whose only salvation is absolute obedience to those officially certified as more capable. They fervently believe that you can order someone to not make mistakes. So it should be no surprise that my ethical objection was meaningless to them.
The people paying for software and hardware enabling Panopticon-style universal surveillance have a completely alien system of ethics, and more than enough money to ignore your personal morality. There will always be someone around in desperate enough financial straits that they will quash their own opinions and take the paycheck.
A cartel enforcer for software workers is the only way to significantly slow down technologies (you can't actually stop progress) that the majority of those workers find to be unethical. That enforcer has to be able to tell its members that they cannot do such work, no matter how well it pays, because otherwise, the buyers, for whom budget size is no obstacle, simply pay the higher price to those who need cash now more than self-respect later.
As long as there are mouths to feed and rent to be paid, the guy with deep pockets will be able to pay another to do his dirty work.
It isn't the ethical training that makes the difference in medicine, but the ethical enforcement. Doctors and lawyers can be decertified by their peers and elders, such that they cannot be rehired as a member of that profession. That means that an employer cannot demand unethical behavior, unless it is willing to compensate to the tune of all the money those people could theoretically make over all the remaining years of their careers.
I would hope that enough software workers could agree that it is unethical to casually collect and retain information from anyone without their fully informed consent, which is diligently confirmed, and revocable on demand. I further hope that we could agree that it is unethical to gather information to support any criminal investigation without reasonable suspicion that the target has actually committed a crime. Those people who believe that adding more hay to the stack makes the needles easier to find can form their own cartel.
I happen to believe that ethically-limited surveillance is more efficient and effective than the heavy-handed dragnet approach. I also think it is unethical to use an O(N^3) brute-force algorithm when an O(N log N) alternative is available. But most customers only care whether something works, and is delivered on time and under budget. They won't ever care about our opinions regarding quality, ethics, or best practices until after we are capable of making them pay dearly for not caring.
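As an aside on the complexity point above, here is a toy sketch (the function names are hypothetical, and it contrasts O(N^2) with O(N log N) rather than the O(N^3) figure mentioned, but the point is the same): both versions "work" and would satisfy the customer equally, while the engineer sees the difference in quality.

```python
def has_duplicate_bruteforce(xs):
    """O(N^2): compare every pair of elements."""
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_sorted(xs):
    """O(N log N): sort once, then scan adjacent elements."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))
```

On a list of a few dozen items the two are indistinguishable; only at scale does the brute-force version "stop working", which is exactly when the customer finally starts caring.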
“How do you know the chosen ones? ‘No greater love hath a man than he lay down his life for his brother.’ Not for millions, not for glory, not for fame. For one person. In the dark, where no one will ever know, or see.”
— Babylon 5, Season 2, episode 21, Comes the Inquisitor, 1995
"The Earth belongs in usufruct to the living; the dead have neither powers nor rights over it." --Thomas Jefferson, to James Madison, Sep 6, 1789
For all the influence exerted by people like Mahatma Gandhi and Martin Luther King, Jr., they effected change with their lives, and not their deaths. One should not choose to die for a cause, or against one. Rather, live for your own principles, and teach them to those others who wish to learn. Those who sacrifice themselves, expecting no reward, grow no greater in my eyes. They become memory, and immediately begin to fade, except to the extent that they are renewed by those who still live.
What manner of scoundrel would I be to suggest that another sacrifice for my benefit, that I may treasure the memory of it? What sort of fool would assent? That is the mentality of the beehive, where the workers die to protect their queen. In a society of equals, for anyone to die unnecessarily is a tragedy. For someone to choose to die, it is a horror.
Attributing some nobility to self-sacrifice is an ethic for hierarchies, to convince the lesser people, against their own interests, to hurtle headlong into situations where they may be killed. It makes pawns of people who might otherwise be greater. It is not fitting to convince anyone to believe they are so unworthy that the best way they might serve others is by throwing themselves into fires that need never have been lit.
I would argue the most important changes they effected were for themselves. You don't risk your health by helping someone who gets attacked to earn their gratitude, but to be able to look in the mirror. That's the only thing that gives enough energy to sustain certain things for years and decades. And Rosa Parks, for example, didn't plan to end segregation; she was sick of putting up with it. Nothing more, nothing less. How great other people are in your eyes does not matter for what value their own acts of moral hygiene have to them, and people don't need to "expect" a reward for such things because the deed itself IS the reward. They already have it. And since you brought up MLK:
I say to you this morning, that if you have never found something so dear and so precious to you that you aren't willing to die for it then you aren't fit to live.
You may be 38 years old, as I happen to be. And one day, some great opportunity stands before you and calls you to stand up for some great principle, some great issue, some great cause. And you refuse to do it because you are afraid... You refuse to do it because you want to live longer... You're afraid that you will lose your job, or you are afraid that you will be criticized or that you will lose your popularity, or you're afraid someone will stab you, or shoot at you or bomb your house; so you refuse to take the stand.
Well, you may go on and live until you are 90, but you're just as dead at 38 as you would be at 90. And the cessation of breathing in your life is but the belated announcement of an earlier death of the spirit.
> In a society of equals, for anyone to die unnecessarily is a tragedy. For someone to choose to die, it is a horror.
Here's a secret: everybody dies, either way. The only choice you have is how you live. From John J. Chapman's commencement address to the graduating class of Hobart College, 1900:
If you wish to be useful, never take a course that will silence you. Refuse to learn anything that implies collusion, whether it be a clerkship or a curacy, a legal fee or a post in a university. Retain the power of speech no matter what other power you may lose. If you can take this course, and in so far as you take it, you will bless this country. In so far as you depart from this course, you become dampers, mutes, and hooded executioners.
> It is not fitting to convince anyone to believe they are so unworthy that the best way they might serve others is by throwing themselves into fires that need never have been lit.
People who are great don't need to be convinced of anything. People who aren't are impossible to convince. And it's not "fitting" to justify stoking fires because otherwise others would do it, either. Then let those others do it? And hey, for all you know, they all might be doing it because otherwise you would do it.
And who is actually sacrificing? People who aren't sacrificing their ideals and their morals, or people who sacrifice them for some food and a few decades more?
Just because I mentioned specific individuals does not mean that I agree with them. I only acknowledge that they produced an effect that propagated beyond their own deaths through the actions of the devotees they acquired while living. I might also have mentioned prophets of various religions, though I may not follow any of them.
Skilled as I am at seeing the fnords, in the MLK address you quoted, under the obvious text, lies this subtext: Is my cause not great enough that you might be willing to die for it? If you are not, and have no greater cause to hold your loyalty, then you are more a walking corpse than a living man, and unworthy of my regard. It is very similar to "Crouch down and lick the hands which feed you. May your chains set lightly upon you, and may posterity forget that ye were our countrymen." It is a recruiting speech. And every time a young black person gets "the talk", it is contradicted. According to MLK, every time black kids submerge their will in a police encounter, and come away from it alive, but humiliated, they will be dead inside until their bodies finally catch up. According to me, they will live long enough to either vote in comprehensive reform or to organize and rebel from a dearth of it.
Nonviolent resistance depends in whole upon the oppressors' general unwillingness to murder nonviolent protesters. Willingness to die only works insofar as the opposition is unwilling to kill. Gandhi's protests worked only because British forces in India were unwilling to massacre Indians wholesale. MLK's protests worked only because the segregationists were unwilling to kill in public, before the typewriters and cameras of nationally-published journalists.
If you are willing to die, and the other is willing to kill you, you would be prudent to arrange your affairs in advance, such that other people are positioned to impose meaningful consequences as a result. Otherwise, you are gifting your enemy with a tiny victory.
If you quit a job in the military-industrial complex about which you have ethical concerns, such as one which enables dragnet surveillance, what is the meaningful consequence? Every failing of the project in recent months is scapegoated onto you. The contractor hires a replacement butt-in-seat. The work goes on. Your sacrifice yields nothing. No one rises in gratitude to pay your bills. When you mention in job interviews that you left due to ethical conflicts with the former employer, you never seem to be a good "cultural fit".
Why then would anyone choose to do that?
I'll take the food and the decades. I won't go willingly to my grave, if doing so wouldn't be more meaningful than what I believe I could accomplish with the entire remainder of my natural life. Sometimes, you can't avoid it, but you should always try to not die as you work towards your goals. Don't fear death, but don't ask it out on romantic dates, either.
> According to MLK, every time black kids submerge their will in a police encounter, and come away from it alive, but humiliated, they will be dead inside until their bodies finally catch up. According to me, they will live long enough to either vote in comprehensive reform or to organize and rebel from a dearth of it.
Right, so when does the rebellion come? Why would you ever rebel when "someone will do it anyway", like that is some law of nature? According to you, the hypothetical black kid should snitch on others when threatened with a beating or arrest, and why wouldn't they -- if they don't snitch, someone else will, and the only difference would be their own life being worse. Leaflet #3 of the White Rose comes to mind: "Do not hide your cowardice under the cloak of cleverness!" And I think we'll have to agree to disagree.
> If you quit a job in the military-industrial complex for which you have some ethical concerns, such as one which enables dragnet surveillance, what is the meaningful consequence?
I already said what it is, for me and in my opinion: personal moral hygiene. The consequence is that you are no longer part of that. That is plenty meaningful to me. As Frankenstein said in The Death Race (paraphrasing), "You can't save the world, you can maybe save a part of it, yourself". Well, I don't remember the exact quote, but that's how I feel about it. I don't even believe in something like a soul, but still, I would say saving your soul, retaining what little remains of our innocence, is the best anyone can achieve.
And as many found out, death doesn't always immediately follow making a stand. George Carlin found himself entertaining people he didn't like, the establishment, with cute things, and he pivoted. Had a long career, had a family, was heard, never sold out, never compromised. Noam Chomsky also has plenty of haters, and I'm sure plenty who would love to see him hurt, but he is still rocking on.
> When you mention in job interviews that you left due to ethical conflicts with the former employer, you never seem to be a good "cultural fit".
Then either don't mention it, or don't interview for jobs with assholes. Get another job, and help take the assholes down. Do whatever you want, of course, but I don't see the dilemma here. It's not that black and white, i.e. either you go along or you're screwed. Actually, plenty of people get screwed even though they're very obedient and have no flavour and no stance of their own. And as Lily Tomlin said, "The trouble with the rat race is, even if you win, you're still a rat." And you know, I don't quote this to put anyone else down, it's how I feel inside. Man, it's not just a feeling, it's a pretty solid thing. I had a lot of shit broken for me for trying to do the right thing, and had a lot of frustration and sadness for not just "popping soma" and going along, for questioning things. Yet I would not do it differently, given the chance to do it again. I might be smarter or more patient about some things, but in general, I feel I got way more out of it than I lost. It's not just what it does to how I feel inside, it's also what it does to my perception, which is muddled, but less muddled than it would otherwise be. I see and speak with people who made and are making different decisions every day, and I don't envy a single one of them.
> Sometimes, you can't avoid it, but you should always try to not die as you work towards your goals.
Nobody (or hardly anybody) just keels over dead and thinks that advances any cause or does any good. It's usually "doing something or saying something, and then not stopping even though others threaten you". You can hardly say "don't fear death" after arguing that it's fine to fear quitting a job over ethical concerns, which is so much less than death.
I don't think there's any need to rehash the debate here. Simply, I and many others do not believe that any western government is going to use information gathered by tech companies to preempt threats to entrenched interests and the status quo. I've seen the same arguments made here for years, and none of it is convincing.
It's admirable that you are so certain in your beliefs. If you don't like what the tech sector is doing, please by all means continue to advocate. Shout it from the mountain tops, go to work for the EFF. But don't discount people that legitimately disagree with you as being irresponsible. At least some of us have made the effort to understand your point of view. The least you could do is to try to understand ours.
Which sector is building startup after startup for data mining, tracking, building profiles? This in addition to the already established companies.
Then you're trying to downplay the issue to trivial actions such as Facebook likes or the tracking of IP addresses, a toy version of the state of the art.
Finally, the sarcasm, showing how reasonable you are and putting the OP in a bad light for not being "more understanding".
It's quite simple: the topic of privacy is central to a free society and it's enshrined in the Universal Declaration of Human Rights. In the past, we have seen a rich history of abuses, lies and deceit from huge organizations with massive resources at their disposal. Private or not.
The majority of people go on with their lives without caring, as long as they have their basic needs met. The very few that take a stand, pay the price.
Otherwise, some criticism of the behavior of these organizations can be found online, but not much because of:
1) Chilling effects. Funny how I had to think before posting this message, living comfortably in a democratic country, with freedom of thought and freedom of speech.
2) "Helpful people", quick to jump to the defense of said organizations, explaining away abuses, making up excuses, muddying the waters, asking for fairness and understanding their point of view.
So thanks for keeping the balance karmacondon. They might have mountains of money, lawyers, shills, PR people and most resources imaginable really, BUT we wouldn't want to unfairly hurt their feelings. I do apologize for that.
You talk about it like it's necessarily a bad thing, by default, for everyone. Why?
Let's assume for the sake of argument that the above events are unlikely, though. When a few actors have access to the information of tens of thousands to billions of people, this has an impact on a societal level. As jaquesm said, information is power, and when one has so much information and lots of money to boot, they can begin to covertly influence policy and behavior and harass and marginalize their opponents. And they can do that directly, or by using the information of a third party, like a doctor, lawyer, religious leader, or even someone insignificant who happens to be a relative, etc.
Moreover, companies can be sold, together with their databases, they can be forced to hand them over or they can be hacked. A treasure trove of data held by an otherwise principled company, might end up in the hands of an unsavory party.
Why is this a bad thing? History has shown again and again how such imbalances of power are abused. Here's a rather harmless example of data mining a mobile device + social network combined with social engineering to scam people out of money: http://toucharcade.com/2015/09/16/we-own-you-confessions-of-... If a game producer can do this, what are the pros doing?
Re: regulation of software engineers, it's impossible. For any software written, its PURPOSE and AUTHORS' intent are subjective interpretations. It is much, much harder to reach common consensus on whether software is surveillance, malware, etc. So any regulation would do nothing but add to the already-so-complex-and-huge set of laws.
Don't you see any logical problems with this line of reasoning?
There is only one positive outcome of concentrations of power, and that is efficiency in execution. Which is extremely scary when combined with huge power.
This is really just the democracy discussion with different terms. It is well known that dictatorships are much more efficient at executing their plans. The inefficiency we voluntarily introduce when establishing and maintaining a democracy (and if you have ever been involved in democratic decisionmaking, the inefficiency can be really frustrating) is the price we pay to insure us against the efficient abuse of power as we have witnessed it countless times in human history.
In the modern era it is information asymmetry that we should worry about. How to prevent such a thing pragmatically is tricky.
This only works in the US, and even there I have no illusions at all about the ability of a present-day militia to fight off a trained army; it's a pacifier for overgrown toddlers. The only people who have anything to fear from citizens with guns are other citizens (with or without guns). The military would have absolutely no problem whatsoever dispatching those if it were decided that their lives and the resulting PR fall-out were less important than whatever objectives they were given.
> In the modern era it is information asymmetry that we should worry about.
Note that there are always provisions in the law to protect the lawmakers from having the laws applied to them.
> How to prevent such a thing pragmatically is tricky.
I think it can't be done unless you simply outlaw it wholesale and are prepared to follow up on it. And from a practical point of view this is now a rear-guard action, fall-back bit by bit and try to push back the point in time where we will have to conclude the battle was lost. This is not a problem that will simply go away, it has already gone way too far for that.
I'm less pessimistic about that. I'm a big fan of gun control laws but I also think that the one positive thing that has come from the ongoing middle-east conflicts is that a determined militia can be genuinely problematic.
> Note that there are always provisions in the law to protect the lawmakers from having the laws applied to them.
To my original point about asymmetry, this is what we should be devoting our energy fighting.
> simply outlaw it wholesale
Outlaw what wholesale? I'm personally of the opinion that the long term end state will fall more on the side of honesty (combined with increased acceptance) than secrecy.
Any kind of abuse of power. The penalties for that should be severe. It's one of the few cases where I think that the penal system should be used as a means of discouragement rather than as one of education and rehabilitation.
This sounds more like an uncompromising proclamation than a thorough analysis.
The ancients had it as 'power corrupts'. The abuses are plentiful, and it is very well known, and advertised, that every company that engages in these practices (and the government agencies as well) does so ostensibly to make our lives easier or keep us 'safe'. If you have evidence to the contrary, feel free to share it, but that's where we currently stand.
Well, then the logical thing would be not to give anyone any power, ever.
My point is, if you take general principles and blindly apply them with "no analysis involved", you're likely to end up in a pretty ridiculous state.
Just like any other tool such insights can be (and are) abused but it need not be like that.
The conclusion to reach is not to give anyone any power ever, clearly that's not feasible. The conclusion you're supposed to reach is that you can give power to people but you'll need oversight in place. Effectively you'll end up with checks and balances, pretty much the way most governments are set up.
And what history tells us - again - is that this isn't always sufficient to prevent abuses and our newspapers and other media seem to tell us that our current set of checks and balances have outlived their usefulness in the information age.
This flows from 'power corrupts' because it appears that those placed in power have - surprise - again abused their privileges.
Think of it as a warning beamed down from historical times to our present day, one that does not need more embellishment and is all the more powerful for its brevity: it is something so inherent in human nature that we need to be vigilant of it at all times, no matter who we end up placing trust in.
It's simply hard to take your stance as one made in good faith.
The US government has a long history of using its national police, the FBI, to infiltrate and subvert domestic political movements that the powers that be found unpleasant -- including using their police powers against modern groups such as the Occupy movement.
Further, we know that the US government has used records held by tech companies to create massive cross-referenced databases of people, including domestic activities. The recent leaks about surveillance programs have made that abundantly clear.
Your position is literally that an organization with a history of doing this kind of activity won't use the technology we already know the government possesses to keep doing the same thing.
So I think there is a need for you to rehash the debate here, because it's not clear how you sincerely hold that position.
Because rather than a rational view, what you describe sounds like irrational denial.
The people that I know in real life that hold views like these are best described as scared, rather than ignorant. They feel that the price they pay is a small one as long as it gives them an unspecified increase in perceived security in return.
Fear is a very powerful tool when it comes to getting people to choose against their self-interest.
I understand what you're saying, and I think I get where you're coming from. But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.
I don't think there's anything inherently wrong with the government monitoring potentially criminal groups or building databases. That's what we pay them to do. If they get out of hand then we, the people, will deal with it.
A better paraphrase would be "We should suspect that the US government will act in a way similar to how it has acted repeatedly over the span of decades."
I think this is a perfectly fair standard, and actually am held to that standard all the time, including professionally. If I had a continual, systemic habit of flaws in my work, for instance, I would be fired.
Your phrasing suggests that these are things that "just happen", instead of a pattern of decades of intentional programs with the same kinds of aims and behaviors.
> But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.
I actually think you're insincere because you're minimizing and denying a pattern of sustained behavior as a few mistakes, rather than an intentional, continuing program.
That insincerity can be directly seen above when you switch from "did things I didn't agree with" to "made mistakes". No one is talking about the US government making mistakes, and decades of intentional programs operated with similar strategies is hardly "making mistakes".
Your entire analogy was insincere and meant to elicit an emotional response.
> If they get out of hand then we, the people, will deal with it.
I'm actually very skeptical that we'll deal with it in any meaningful way, and find it much more likely that we'll surrender a great deal of control over the country to an autocratic government with a good social control program, precisely because people like you don't want to sincerely discuss the likelihood of that happening by stages.
It's not about mistakes. Mistakes are - usually - a sign that someone needed to learn. They do not as a rule include wanton intent.
And if a person were to make too many mistakes then they probably should not be trusted.
> I understand what you're saying, and I think I get where you're coming from. But like the GGP post, you're begging the question and assuming that your beliefs are so correct that anyone who disagrees with them must be insincere.
No, that's the opposite. You have beliefs that you state are so correct that they stand on their own, in spite of a bunch of historical evidence to the contrary, starting roughly at the time we invented writing and going all the way into the present. That's a pretty gullible position.
> I don't think there's anything inherently wrong with the government monitoring potentially criminal groups or building databases. That's what we pay them to do. If they get out of hand then we, the people, will deal with it.
Potentially criminal groups: everybody.
You're apparently one of the people where the 'fear' button has been pressed, don't let your fear get the better of you.
Btw, I note that you write all these 'reasonable disagreement' things from the position of an anonymous coward which makes me think that maybe you do realize the value of your privacy after all.
While this may be true, certain crimes are seen as worse than others. And, as un-PC as it may sound, certain demographics are many times more likely to commit certain crimes.
Also, some government monitoring can be "for your own good":
Youth Risk Behavior Surveillance:
But, maybe the CDC is different than the NSA.
That's one of my favorite author quotes. The greatest evil in this world is done by those who can see their own work and tell themselves that it is good.
It looks to me like the US agencies, and the Five Eyes in general, are capable people who are just doing their jobs. They aren't bothering me and I'm not bothering them. The past actions of the US government or hypothetical scenarios based on historical examples just aren't very convincing. Anything could happen. But I'm not going to concern myself with it until I see some evidence.
So, you are drinking battery acid until you see evidence that it's not good for you?
Or do you maybe take the evidence of other people's experience into account?
If so, how about you take into account the evidence of hundreds of societies that have dealt with massive surveillance (where "massive" still was "almost none" in comparison to today's and tomorrow's technical possibilities) and with oppression (those two empirically tend to go hand in hand).
If those are your sincere beliefs, I really would recommend you pick up a few books about recent German history. How Hitler came to power, how the state functioned once he was in power, how people tried to get rid of him but failed, and what it took to finally remove him. And then continue with the history of the GDR, how surveillance by the Stasi influenced everyday life, how people tried to reform the political system but failed, and what it took to finally reunite Germany.
The history of other countries might teach you similar things, but Germany is a good example because it is culturally a rather "western country", so it's easier to recognize similarities.
It's also quite foolish to try to evaluate people in a vacuum... would you extend the same privilege to a member of a criminal gang or jihadi group? No.
"Campaigns operated by JTRIG have broadly fallen into two categories; cyber attacks and propaganda efforts. The propaganda efforts (named "Online Covert Action") utilize "mass messaging" and the “pushing [of] stories” via the medium of Twitter, Flickr, Facebook and YouTube. Online “false flag” operations are also used by JTRIG against targets. JTRIG have also changed photographs on social media sites, as well as emailing and texting work colleagues and neighbours with "unsavory information" about the targeted individual."
There's your evidence-- it's been here all along. These programs are targeted at US citizens, some with the explicit aim of discrediting them, blackmailing them, or propagandizing them. These are not the actions of a friendly nanny state but rather a malevolent surveillance state.
I'm not making that mistake. I fully realize that the US government is comprised of many arms, and that even though some of those arms might have our collective best interests at heart, this may not be the case for all of them.
> We're talking about hundreds or thousands of individuals spanning multiple generations.
So what. That only increases the chances of abuse, it does not diminish them at all. Just like in Nazi Germany there were plenty of people still fighting the good fight and at the same time employed by government. No government will ever be 100% rotten. But it does not have to be like that to do damage.
> I'm not going to worry about government metadata collection because of something that happened during the Eisenhower administration.
Because, let me guess that was too long ago and now it's different?
> Each person and group of people should be evaluated based on their own behavior and merits, not the reputation of the organization that they are affiliated with.
This is where you're flat-out wrong. Governments (and big corporations) have a life-span much longer than that of the individuals that are making it up, and as such we should look at them as entities rather than as collections of individuals.
If you'd be right then North Korea would not exist today as we know it (and neither would China, Iran and a bunch of other countries). The way these things work is that the general course will be slightly affected by the individuals but the momentum in the whole machinery is enormous. Think of it as a cable in which individual strands are replaced but the identity and purpose of the cable remains. Eventually you have a completely new cable and yet, nothing has changed. And in this case the entity has a huge influence on which parts of it will be replaced by who.
> It looks to me like the US agencies, and the Five Eyes in general, are capable people who are just doing their jobs.
That's a very very scary thing to say. "Just doing my job" has been used time and again historically to distance oneself from the responsibility taken when performing certain actions. Just doing your job is not the standard that needs to be met.
> They aren't bothering me and I'm not bothering them.
And most likely they never will.
"The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing"
> The past actions of the US government or hypothetical scenarios based on historical examples just aren't very convincing.
Of course they aren't. After all, it's not you that is personally inconvenienced in any way.
> Anything could happen. But I'm not going to concern myself with it until I see some evidence.
And none that will convince you will ever come. Because if it did it would be too late for you to change your stance anyway.
Yes, you would be unhappy. But this is not about whether you are unhappy, but whether you should have control over military, police, our tax money, and thus everyone's lives.
It simply is a very well established fact that concentrations of power are extremely dangerous, and that they are extremely hard to break up once you recognize they are heading in the wrong direction. Just look at what the problem is in countries where people are doing badly, both historically and right now, and why things are so extremely hard to improve once they have gone bad. Which is why we have built structures that try to prevent such concentrations of power from forming. That is essentially the whole point of democracy and the separation of powers: To build distrust into the system. Dictatorships are the opposite of that (only one power, and no mechanism to remove the person in office). Yes, democratically elected officials certainly are unhappy when they are voted out - but that is the price we pay to prevent concentrations of power from forming.
And surveillance is undermining democratic decision-making. Having a democracy now does not guarantee you a democracy tomorrow if you aren't careful about whom and what you vote for.
Yes, "we" will. If history can teach us anything, we can expect it to take at least a decade, with many unhappy lives and maybe millions of deaths, before a foreign military steps in to "deal with it".
Sure, maybe that won't happen. But given the prospects, wouldn't it be wise to use our experience from history, to try and make predictions where things will lead, and to then try and prevent things from happening in the first place?
You are aware, for example, that Hitler was democratically elected into office, and all his powers were given to him democratically? And you are aware what it took to remove him from office afterwards?
They don't just monitor criminals-- that's why the anti-surveillance folks are anti surveillance! They monitor everyone, and create criminals as needed, and nobody can question them for fear of ending up on the chopping block.
They are currently very far out of hand, and "we the people" are doing somewhere between jack and shit because of how little the people understand the problem.
Very hard to suggest they aren't supporting the police state.
It's unquestionable that the tech sector is directly culpable for supporting the cops and the politicians to spy on us... to affirm otherwise is counterfactual. The moral high ground belongs to the people who don't collaborate with those who would rather have us dumb and controlled.
It's pretty hard to respect the pro-surveillance view because it seems flatly head-in-sand ignorant of reality time and time again. We have evidence of surveillance state wrongdoing in hand, and no successes to point to while simultaneously experiencing multiple terror attacks, and yet the pro-surveillance types are steadfast in their position, as though it's a religion.
The Snowden files showed us explicitly that disrupting political groups is actually done via GCHQ! This is very far from protecting the citizens, and is instead stifling them purposefully.
I am actively discounting the opinion of people who do not understand this realized, currently unfolding threat to our democracy. An informed opinion doesn't sound like one passed via the government through the media.
Why not? You may disagree, but that doesn't mean you can't be flat-out wrong. Having an opinion does not automatically give that opinion equal weight when history has proven to us again and again that that particular opinion ends up making society either dangerous or at a minimum uncomfortable.
I'm sure there were border guards in former East Germany that were entirely convinced that their state was the greatest and that's why they had to keep people in at all costs, including shooting them if they persisted in believing otherwise and tried to simply leave. After all, that was best for them. But that particular opinion turned out to be very wrong in the long term.
People can rationalize the most absurd stuff to themselves and to others, especially when their pay-check depends on it, but that's not a requirement.
All those who pretend that some kind of 'reasonable disagreement' is possible about the erosion of privacy, and who directly or indirectly help rush in the surveillance state, have quite possibly not thought as carefully and not weighed these things with the gravity they claim to have. Having a mortgage to pay may factor in there somewhere too.
Usually this is a combination of being too young, too optimistic and in general living too sheltered a life to know what can happen to you, your family and your friends when the tide turns. And the tide always turns, nothing is forever.
> Simply, I and many others do not believe that any western government is going to use information gathered by tech companies to preempt threats to entrenched interests and the status quo.
I hope you're right but history is not on your side in this case.
> I've seen the same arguments made here for years, and none of it is convincing.
Yes, it isn't going to convince you any more than that border guard would be convinced that his job is a net negative to society. Every movement, no matter how reprehensible, will always have its fans and cheerleaders. And later on they will never remember that they had agency all along and were perfectly capable of making a different decision. Responsibility is weird that way.
> It's admirable that you are so certain in your beliefs.
It is not admirable that you are so certain in yours. May I suggest a couple of talks with some Holocaust survivors to get a better feel for what the true power of information can get you?
Or maybe the family members of some people that were killed while trying to flee the former Soviet Bloc?
Or maybe some first generation immigrants to the US or Canada or wherever you live to give you some eye witness accounts on what it was like to live in those countries before the wall fell down?
'It can't happen here' is an extremely naive point of view.
Agreed with your advocacy advice.
> The least you could do is to try to understand ours.
That's 'mine' not 'ours', you speak for yourself.
I can't think of a case where a stable and mature democratic bureaucracy has ever used surveillance to influence the majority of its populace. Germany in the early 20th century had a very unstable government in a bad economic situation. Soviet East Germany was communist, which isn't quite the kind of democratic that I meant. It's true that any government could turn bad, in the same way that anything is possible. But there's very little evidence for that in the current context.
So my position is this: Given that I live in the United States in 2016, I'm not worried about the government randomly deciding to screw with me by looking at my electronic communications and acting on them. It just doesn't make sense. I'm not significant relative to the scale of the US government, the government itself just doesn't work that way and all of the negative scenarios I've heard seem to be very contrived.
If you really think that it's possible that the government of a modern western nation could turn into communist East Germany, then it seems like your problem might be with governance, not privacy. If it's possible for the government to go all Walter White and just turn evil overnight, then no amount of personal privacy is going to save any of us. And until it seems like that's a thing that's actually possible, I'm going to make practical decisions about my own privacy.
So far, it seems pretty much like your belief that surveillance is not a problem for you is unfalsifiable, that you will believe that it is a problem for you only when the secret police is actually coming for you or maybe your family.
There are lots of bad things that the government could do. But it just hasn't happened. They've had mass surveillance technology in place for over a decade now. The world hasn't fallen apart, Hitler hasn't risen from the dead and everything is pretty much the same as it was before.
I guess we can check back in another ten years to see if your apocalyptic visions have come to pass yet.
> Something like "Private Citizen X criticized the government and embarrassing information about his life was revealed as a consequence."
You mean like that?
> There are lots of bad things that the government could do.
Does, not could do.
> But it just hasn't happened.
It happens, but it just does not manage to cross your threshold for worry because you personally are not inconvenienced.
> They've had mass surveillance technology in place for over a decade now.
For longer than that, and it has been abused for longer than that too.
> The world hasn't fallen apart
It will not 'fall apart' because of this. But it will change because of this, and not for the better.
> Hitler hasn't risen from the dead and everything is pretty much the same as it was before.
Yes, we still have willfully blind people that would require things to get so bad that they would no longer be able to avert their eyes before they would consider maybe things have gone too far. But by then they would have indeed gone too far.
> I guess we can check back in another ten years to see if your apocalyptic visions have come to pass yet.
It will never be a moment in time, we will just simply keep on creeping up to it, just like the frog in the pot of water.
What fascinates me is that there are people that are obviously reasonably intelligent that manage to actually see the pot, the stove and all that it implies and they still tell other frogs to jump in, the water is fine.
(Also: "random citizen" or "private citizen"? A citizen who criticizes the government is barely a "random citizen".)
They don't have to. History only happens once; if you refuse to learn from it because it is not an exact repetition of the past, then you can never make any progress.
> Saying "A thing happened in the past" can be instructive, especially to people who didn't even realize that the thing was possible.
You seem to think it isn't possible because of "insert magical reason why everything is different now here", not merely that it can't happen again for some concrete reason. That's an impossible position to argue with. All the weight of history would not be able to sway you from that position because nothing can counter magic.
> But what's much more practically useful is to say "I think what happened before will happen again in the current context and for these reasons". An example from the past is only useful if it can be tangibly connected to the current situation, right now in the present.
No practical examples will counter your magic. You will either say 'that's not the same exactly' or 'that's too long ago to be relevant' and so on.
The only thing that will convince you is when you're lifted out of your bed at 3 am and we never hear from you again. By then it will be a bit too late, but you too will be a believer in government abuse if and when that happens.
Until then you're going to head straight for the last stanza of Martin Niemoeller's most quoted lines. The vast majority of the people living in the former DDR were never lifted from their beds at 3am for interrogation. To them life was just a-ok.
> I can't think of a case where stable and mature democratic bureaucracy has ever used surveillance to influence the majority of its populace.
That's not a bad thing per se. Meanwhile, you're trying hard to change that number from '0' into '1' by allowing the present level of abuse to spread unfettered, which invariably leads to an escalation. Each and every click that you hear is one of a ratchet; it will not voluntarily click back again, it can only go forward until, on that scale between '0' and 'police state', you've gotten close enough to 'police state' that there is no relevant difference.
'It can't happen here' is a very dangerous line of thought. See the movie 'The Wave' for some more poignant illustrations of how that thought is a dangerous thing all by itself. It can happen here, it might happen here, and it likely will happen here unless we're vigilant.
> Germany in the early 20th century was a very instable government in a bad economic situation.
> Soviet East Germany was communist, which isn't quite the kind of democratic that I meant.
Yes, and like that there will always be one last thing that is not quite the same which will allow you to look the other way.
> It's true that any government could turn bad, in the same way that anything is possible.
I would consider that progress, hold that thought.
> But there's very little evidence for that in the current context.
That depends on where you are looking. There is plenty of evidence that pressure is being applied, but the pressure is applied subtly enough and in places far enough away from the focal points where change is effected that you'd be hard pressed to connect the dots. That's the beauty of having a lot of information at your disposal.
A nice example is the Iraq war: the run-up to it saw massive worldwide resistance in the populations of the countries of the 'coalition of the willing', which was later described as the coalition of 'the gullible, the bribed and the coerced'.
> So my position is this: Given that I live in the United States in 2016
The United States does not hold a privileged position in the world, and it does not matter whether it is 2016, 1938 or 1912. Everybody living in the past in places where these experiments went wrong could have written "given that I live in X in Y", and they'd have been just as accurate.
> I'm not worried about the government randomly deciding to screw with me by looking at my electronic communications
They might have substituted 'electronic' with 'written'.
> and acting on them. It just doesn't make sense. I'm not significant relative to the scale of the US government, the government itself just doesn't work that way and all of the negative scenarios I've heard seem to be very contrived.
They, again, would not have written 'US government' but the name of whatever government they lived under. And they would have been dead wrong, and in some cases, when the fog lifted, they'd have simply been dead.
What seems contrived to you, living in a country that has never seen actual war on its own soil (sorry, your civil war does not count), that exports war on an ongoing basis, that uses IT to kill people by remote control, and that used telephone taps, burglary and threats to affect the inner workings of its own government, to me seems to be willful blindness.
For some reason it is more convenient to you to re-write all of history up to and including the present rather than to see that maybe your government is not all that benign, neither on the world stage (where they are a bit more overt about their intent) nor internally (where they are out of necessity a lot more cautious). Have the Snowden revelations really not managed to at least peg your evidence meter that maybe not all is as it should be? That your constitutional rights were trampled and that the protections afforded you appeared to be of no value whatsoever?
> If you really think that it's possible that the government of a modern western nation could turn into communist East Germany, then it seems like your problem might be with governance, not privacy.
No, I think that we may be reaching a stage where influence can be wielded subtly enough that someone like you could convince themselves that there is none of it at all. And that's the true prize, to wield that power but in such a way that it can be applied selectively enough that as long as the bread is on the table and the games keep going nobody will notice how rotten the core has become.
> If it's possible for the government to go all Walter White and just turn evil over night, then no amount of personal privacy is going to save any of us.
It will never be that overt. It will be more along the lines of parallel construction and other nice little legal tricks such as selective enforcement. Never enough for you to cross that threshold.
> And until it seems like that's a thing that's actually possible, I'm going to make practical decisions about my own privacy.
You're more than free to do that. Unfortunately, those of us living outside of your beautiful country don't even get to have a vote in there. Your personal well-being trumps the rights of everybody that is not you, and like that we race ahead down the hole.
The U.S. government has several orders of magnitude more information about the private lives and communications and beliefs and activities of its citizens than East Germany ever had. This is also incontrovertible and undeniable.
How can either of you talk about abuses that happened in the past as if those were the only abuses? Why would you need to?
Yes, it is. But these are not the people that the argument is about and that's precisely the problem here. They don't feel that it concerns them at all, it is always others who need to worry about what is done with that data, they have nothing to hide and absolutely nothing to fear.
> Literally billions of dollars will be spent on that purpose this year alone.
Tens to hundreds of billions of dollars.
> There are also the government agencies of a dozen or two other countries which the U.S. government agencies work with and share data with to a greater or lesser extent.
> Literally thousands of newspaper articles have been written about this.
Indeed. But since this has not yet resulted in mass arrests on US soil, this evidence amounts to nothing in the eyes of those that see it as a 'good thing': these people are keeping us all safe and are merely doing their jobs. Incredible to you, to me and lots of others, but still that's a position that quite a few people hold, and not much that you will say or do will dissuade them from that point of view.
So, I don't need to use the past as a reference. But it is strange to see a person refuse to learn from history and to apply its lessons to today's environment. I'm working on a second part of that blog post about 'if you've got nothing to hide' that concentrates on the present (I think the past has been dealt with), but I still feel that those are such enormously important reminders that they serve as a good backgrounder for why all this stuff matters.
So this is a simple choice grounded in the 'those that refuse to learn from history are bound to repeat it' line.
> The U.S. government has several orders of magnitude more information about the private lives and communications and beliefs and activities of its citizens than East Germany ever had.
This is true. But mere possession is not enough to sway a die-hard denier of danger and supporter of the surveillance state. All that data, by their reckoning, is in good hands; it is there merely to protect them from unseen dangers.
Obviously I disagree strongly with that position but that's probably because (1) I've lived for a bit in a country that was a police state by most definitions and (2) I've seen how the various layers of that society would deal with this (the majority were just like karmacondon here, only a very small minority dared to take a stance, the rest saw the whole thing as essentially beneficial, which retrospectively may seem very hard to understand. In fact even today there are still those that yearn for the communist days when life was orderly, everybody had a job and everybody had a pension waiting for them at the end of the line).
The ability of power or authority to lock you up, take your property or, worse, take your life is checked by the rule of law and due process. Having a debate on the rule of law or due process is similar to having a debate on privacy or a surveillance state. The consequences are negative for the individual and society as a whole, even though they may benefit some stakeholders in the short term, who will of course advocate for it; but on the whole it's not a social good.
The only thing we have to come to this conclusion is history, a wide body of knowledge and reason.
We can thus say with some degree of confidence that a society without the rule of law or due process is not a good thing, just as a society with surveillance is not a good thing. We don't use the 'moral high ground' but reason and historical experience to reach these conclusions. This is not a moral issue but a practical one that has consequences for our societies. The ethical issue is the social good for the people who build these systems.
Since we are discussing the social good, the alternative view needs to be backed by reason on how surveillance can be good for society as a whole. Naive presumptions that people are good and will not abuse the power, or arguments that knowing the details of everyone's activities may benefit an individual or company, do not suffice: even where true, they do not address the social good.
And the only thing we use in these discussions is reason, let's not make it personal.
I'm worried about both, and I can't say which I'm worried about more. What are the reasons you are concerned about one more than the other?
If the government is not corrupt, I have optimism in being able to get through a personal attack.
So, with regard to privacy, we should treat it like property. Governments which don't support good property rights are not going to care one whit about privacy, and those who come after privacy will eventually tread on property rights.
People need to understand it as something to be protected just as you would your physical stuff and work towards having it treated similarly in government.
But I think it's already too late. The genie is out of the bottle and it's already doing its darker work in many parts of the world.
One of the great things about technology is that it knows no boundaries.
But that is also its biggest danger.
Because powerful technology in the hands of weak people can lead to disaster.
Intelligence is the ability to create something from nothing, while Wisdom is the ability to choose (wisely) how to apply intelligence to reality - what to create and what not to create.
If intelligence is the engine, then wisdom is the driver.
We have trained lots of engines but very few drivers. And that is what worries me most.
Yes, I believe we should have a "Hippocratic Oath" for technology workers.
If you subscribe to the 'presumed innocent' premise of the law (https://en.wikipedia.org/wiki/Presumption_of_innocence) then the burden of proof is on the inquisitor.
Either you believe in presumed innocence or you don't. Pick one.
That puts them on the defensive, since now they have to justify their fear of the unknown. And then we're onto the real topic: fear.
That also puts them on the defensive side of the argument.
I don't live in the US, but the stuff the government, whether local, state, or federal, gets away with is very scary to me. What scares me even more is how the United States encroaches on everyone else's legal systems. That's the underlying problem. Governments that are actually out to get people, at times without much cause and breaking all sorts of rules, are what's scary.
The type of soft totalitarianism that exists and passes as commonplace is very scary. And that's really the people you should be scared of, and that's who you really want to protect your information from. Your run of the mill government that's actually trying to do a good job and not break its own rules, that sort of government like my government, scares me a lot less. Despite the fact that they encroach on my privacy. I know heads are going to roll if it comes out that they do things that are blatantly wrong or abusive with the information that they are collecting.
Not so in the US. They always have a half-ass lie that still somehow passes muster.
Isn't that the problem with all governments? Like they say "Eternal vigilance is the price of liberty”
Privacy is a necessary tool for opposing those abuses of power and organizing a change.
For example, imagine someone convinced by the argument "nothing to hide nothing to fear". Would this example convince them that in fact they do have to fear something? "You might think twice about contacting or meeting people (exercising your freedom of association) who you think might become “persons of interest” to the state". I do not think so, after all, average Joe does not know such people.
The solution, in my experience when talking to sceptical people not convinced of the risks is talking about money. Imagine someone with the kind of knowledge we are talking about with mass surveillance. And imagine this person could inform your insurance companies. Do you still think that you have nothing to hide? One then must only show that data is never "safe" and could always be "leaked" to make a very simple, everyday example of why it is not in my (average Joe's) interest to be continuously monitored.
I'm really still waiting to hear a convincing argument as to why I have something to hide, ideally something practical as opposed to hypothetical or philosophical.
And what if one company does this - what will other companies do? Will they keep the same price for that group as for everyone else? Then that group will leave for the company that's cheaper for them, right? Leaving the other companies with the higher-risk customers, right? So, they will have to pay out more for damages, right? Now, will they just go bankrupt? Or will they increase premiums to cover the costs?
No one says that insurance companies aren't making inferences from data. It's just that the more data is available to them and the more powerful computers and algorithms get, the better they will be able to model risks. And individual companies won't be able to ignore that, even if they want to. And it's the exact opposite of what insurance is intended to do: it's intended to distribute risk. The more exactly insurance companies are able to model risks, the more insurance will become unaffordable to those who need it, and the cheaper it will become for those who don't need it.
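The risk-distribution point can be made concrete with a toy sketch. All numbers below are invented for illustration; no real actuarial model is implied:

```python
# Toy model of risk pooling vs. risk segmentation.
# Two groups of drivers with different expected annual claim costs.
low_risk = {"count": 900, "expected_claims": 300.0}    # safe drivers
high_risk = {"count": 100, "expected_claims": 3000.0}  # risky drivers

# Pooled insurance: everyone pays the average expected cost.
total_cost = (low_risk["count"] * low_risk["expected_claims"]
              + high_risk["count"] * high_risk["expected_claims"])
pooled_premium = total_cost / (low_risk["count"] + high_risk["count"])

# Perfect segmentation: each group pays exactly its own expected cost,
# so no risk is distributed at all.
segmented_low = low_risk["expected_claims"]
segmented_high = high_risk["expected_claims"]

print(f"pooled premium:        {pooled_premium:.2f}")   # 570.00
print(f"segmented (low risk):  {segmented_low:.2f}")    # 300.00
print(f"segmented (high risk): {segmented_high:.2f}")   # 3000.00
```

With pooling, the high-risk group is subsidized down to 570; with perfect segmentation their premium jumps to 3000. The better the risk model, the closer you get to the second outcome: cheapest for those who need insurance least, unaffordable for those who need it most.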
Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.
Well, that very much depends on how you look at it. It might be imaginary in so far as regulation in the US possibly prevents those consequences, which is great. One might also see it as evidence that collection of personal data has risks--and that regulation might be one way to deal with those risks, at least in some cases. After all, this kind of regulation is in effect a prohibition on collecting certain kinds of personal data, even if the collection in itself is permissible, as companies won't collect data when they can't use it for anything anyway.
> Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.
Are you sure that it would? Remember that for the insurer, it doesn't matter whether they calculate your risk (and thus your premium) correctly; what matters for them is that they aren't worse at calculating risks than the competition (i.e., the competition can't outcompete them on price or cause them to be left with a non-representative sample relative to their risk model), and that on average, their criteria match reality (i.e., they don't take on risks that are actually larger than they can pay for). Even if you in fact do drive more safely than the average driver (as in: at the end of your life, you will have had fewer/less severe accidents), it might happen that their predictive models group you in a different category, because characteristics of your driving behaviour that they use to categorize you are correlated with high-risk drivers. If insurers don't know how to (economically) measure why your driving behaviour is safe, it doesn't matter whether it actually is.
Also, the incentive can actually be a problem, exactly because risk models employed by insurers tend not to be an exact representation of reality. If you have an incentive structure that does not align with reality, the incentive can end up promoting harmful behaviour. For example, one obvious proxy for safe driving habits could be lack of sudden deceleration. It's easy to measure, and generally, if you pay attention to traffic and drive with foresight, you usually will not need to brake suddenly as much as a reckless driver. So it's probably true both that incentivising people not to brake suddenly would, as one consequence, have people driving with more foresight, which should reduce accidents, and that people who don't brake suddenly are generally a lower risk for the insurer than those who do. However, this proxy cannot distinguish whether you braked suddenly because you didn't pay attention, or because someone else didn't pay attention and surprised you. In the latter case, though, the thing to do to avoid an accident might be to brake as hard as you can. But that will be seen by your insurance as risky driving behaviour (which it most of the time is) that comes with a higher premium, so you have created an incentive for the driver to let an avoidable accident happen. Note that the driver in question won't think about this for an hour before deciding what to do; it's a gut reaction that might well be influenced by having internalized "braking hard costs money".
And also: what if you actually are a really good driver but you enjoy braking hard? What if you brake hard just for the fun of it, in situations where it's completely harmless? Is it fair if you have to pay higher premiums for that? Such incentives that work via proxy measurements of the actual risks tend to force adherence to a standard of behaviour. I find the idea frightening that insurance companies might get to dictate "safe behaviour" where the specific behaviour is not actually necessary for safety; it just happens to be easily distinguishable from risky behaviour, so behaving differently costs you money, simply because it's difficult to figure out that your behaviour is not actually risky.
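The proxy problem can also be sketched in a few lines of code. The function name, base rate and per-event surcharge below are all invented for illustration:

```python
# Toy premium schedule based on a proxy measurement: recorded hard-braking
# events. The insurer sees only the count, never the reason for each event.
def annual_premium(hard_brake_events: int,
                   base: float = 500.0,
                   per_event: float = 25.0) -> float:
    """Hypothetical schedule: a flat surcharge per recorded hard brake."""
    return base + per_event * hard_brake_events

# An inattentive driver who tailgates, and a defensive driver who braked hard
# twelve times to avoid *other people's* mistakes, produce the same count...
reckless = annual_premium(12)
defensive = annual_premium(12)

# ...and therefore pay the same premium, even though their actual risk differs.
print(reckless, defensive)  # 800.0 800.0
```

The proxy is blind to intent: this schedule cannot reward the emergency brake that prevented an accident, which is exactly the misaligned incentive described above.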
What you might or might not need to hide cannot be reliably determined in advance. It is not a constant, it is a variable and you don't get to pick which way it goes. Consider the plight of the gay Russian blogger using LiveJournal, which was later sold to a Russian company.
In short, my argument goes something like this:
1) It is possible to manipulate or influence people. This extends to their memories, perceptions, emotions, actions, even complex beliefs. And it can be more or less direct. Humans think of themselves as special snowflakes, but we are actually quite simple.
2) The degree to which one can manipulate or influence other people depends on
___a) the effort and intelligence one expends on it,
___b) the degree to which one has knowledge about the target, and on
___c) how close one is to the target (i.e. are you "under their skin", in their house, or 10 km away; what are your intervention options).
3) As social animals, we have always been subject to the influence of other people. Usually, this influence has been local, fuzzy, costly, relatively obvious to the target and "controllable" (in the sense that knowledge about target was strongly negatively correlated to (generalized) distance; if target became suspicious of you, target could simply cut you off or increase distance).
4) Technology and science currently change the rules of the game, and in profound, basic ways:
___a) We learn more and more about how to influence people, both by physical means (regulating temperature, lighting or noise conditions; psychotropic substances or, you know, food; changing the color of a button or playing with the timeline of events; etc.) and psychological means (e.g. using the right words or framing to elicit a certain response or evoke a certain emotion; exploiting properties of the social graph; etc.). This knowledge is, of course, still very imperfect but it is also cumulative.
___b) More and more of our interaction with the external or social world becomes mediated by technology ⇒ the options to intervene in the life of someone multiply as technology becomes a more integral part of life. As a result, it becomes very cheap to make targeted interventions in someone's life. Example: Today, ranking of search results or filtering of news; tomorrow, entire articles machine-written for you (personally). Automated homes. The mind boggles with the possibilities of augmented reality and/or immersive experiences. Farther out: Optogenetics.
___c) Deep and very detailed information about people can be collected in real-time and stored cheaply (no memory decay). The more ingrained the tech, the more detailed the data. For example, real-time monitoring of blood sugar and, in the future, perhaps even stress hormone levels.
___d) Physical distance becomes meaningless.
5) Due to the tendency towards natural monopolies in the sector, all this information and power accrues in very few hands ⇒ strong and unprecedented centralization of both fine-grained knowledge about individuals and the means to intervene in their world without regard to distance or cost.
It is not hard to see that, to indulge in some hyperbole, "mind control" of a large population undermines traditional means of checking power. Who cares about elections if I can control whom people like? Why bother with competitive markets if I can make people want whatever I have to give (and make them pay reservation prices)? No more need for violent suppression of dissent because I can detect and change inconvenient ideas surgically.
To be clear, I am not saying we are already living in a mind-controlled society.
What I am saying is that collecting data (or rather letting someone collect data) about us is an integral part of this scenario. If data became more compartmentalized and limited, say, this whole thing wouldn't work (or be far less effective).
In fact, because technology and science progress anyway, how we handle our data may be the only way we can influence the course of events in this respect.
At least to my mind, this is the real issue of privacy. Alas, I seem to be alone so far. It's really hard for me to see why this is not totally obvious to everybody. I should finally write that essay that I've been meaning to for the longest time. Then someone can at least attack my argument. Sometimes, in your weaker moments, the fact that nobody seems to see what you see can make you question your own sanity...
PS: I am also not claiming that Larry Page or Mark Zuckerberg are rubbing their hands gleefully ("hihihi") at the prospect of world domination. I think concrete persons are incidental to this scenario. Heck, the one who ends up controlling, in this scenario, might not even be human. It just doesn't matter who. If it is technologically possible, it will be done. Loss or neglect of privacy makes it possible.
No need. Orwell already wrote it.
I just don't get why you think you are alone.
I would hope I'm not but it sure feels that way.
Maybe I'm not reading the right stuff or talking to the wrong people.
Can you point me to some (contemporary) arguments along the line I propose?
I was thinking more about something along the lines of Chomsky, McLuhan, and Orwell updated to the current day. But we'll see…
Another good example is the use of shredders at home, perhaps you should suggest people leave their sensitive data in a box on the pavement outside rather than shredding it.
I do this frequently and while it can be a bit awkward when dealing with marketing or PR types, as long as you are polite about it things work out. And anyone pestering you with repeated requests for data or an explanation can receive a less polite response.
"Please strip yourself to underpants. Yes, this very moment. We (society) need
to make sure you don't have any concealed weapons or explosives. I will then
assure these other people that are totally unrelated that you don't pose any
threat to them. What? Your privacy? Safety is more important than privacy,
right? And you do have nothing to hide, so why do you fear stripping?"
It's not that they aren't trying, or that we are not letting them. The problem is that technology is very "stubborn", "subordinating" itself only to those it chooses, and those are almost always impossible to predict. No elite has ever succeeded at that kind of coup.
I personally do not care about privacy. I see no reason why I should.
It's just my opinion. I know other people do but please don't generalize.
Assuming that you don't care about privacy because you're apathetic, do you also not care about free speech because you don't say anything controversial? Do you care about your right to assembly even if you don't protest anything? As an extreme example upon which to build a baseline, would you mind if a neighbor had unmitigated access to watching you lounge in your underwear, take a shower or have sex?
Why do you not care about privacy? Do you feel that you don't need it because you have nothing to hide, or are you willing to sacrifice it for some greater good (e.g. terrorism etc.)? Are you merely indifferent or do you aggressively oppose the concept?
1.) Free speech is a completely different topic. Snowden's quote on this page makes no sense to me no matter how often I re-read it. If free speech didn't exist I wouldn't be able to express my opinion about privacy :)
2.) Privacy means hiding the truth. Hiding what really happened. Hiding who you really are. I believe it is a flaw of the human personality that makes us want to hide information and eventually lie about it.
I don't care if Google or the government knows that I'm searching "[insert embarrassing keywords for you here]", or if Facebook knows my location, or if Twitter knows what I like based on the people I follow.
Who is the government? It's people. People like you and me. If people decide to make assumptions based on data they collected and the assumptions aren't correct it's their own fault for assuming something in the first place (because...you know...it's an assumption...it can be wrong).
I am not aggressively opposing the concept of privacy. I respect other people's opinion.
Who said anything about lying being a part of a desire for privacy?
> I don't care if Google or the government knows that I'm searching "[insert embarrassing keywords for you here]"
Would you care if a prospective insurer knows you're (hypothetically) searching for "atrial fibrillation management" or "opiate addiction"? Or a prospective employer who knows you're (hypothetically) searching for "corporate firewall security exploits"? Or a prospective romantic partner who knows you're (hypothetically) searching for "genital rash"? Any of those searches could be legitimately borne of pure, unadulterated curiosity, but taken out of that context by people with whom you're hoping to establish some kind of relationship, they could easily doom that relationship before it begins. Hell, those searches may not even be made by you but by someone in your household, but if decisions are made and opinions are formed based on that information, you've suffered an unnecessary loss.
> Who is the government? It's people. People like you and me.
Indeed, people like you and me, except those people have the authority and/or power to incarcerate you, or impinge on your rights in other (less direct/more insidious) ways. Privacy isn't about hiding the truth from those who have a need to know it, it's about controlling the context of that truth, or at the very least, having a say in any response that comes from the truth being discovered.
Like you said, someone trying to get information about the topics you mentioned could simply be doing this out of curiosity. Now person A from the government says you are X. However you are not X, you are Y.
Think again, what is the actual problem? The actual problem is not the data which is 100% correct.
The actual problem is people's prejudices and assumptions. This is what we need to fix. If someone searches about topic Z we should think very carefully about the consequences of drawing an assumption.
However, this view is very ideological. Your view on the current state is more practical. I do not disagree with your statements, I simply wish that we can address the real issue here in the future. Even if it takes us centuries.
Right, so the whole premise of your indifference or opposition to the privacy argument is that people should not have prejudices or (wrong) assumptions. Isn't that too idealistic? Ridding people of their prejudices and figuring out the right moral standard for behaviour will take many more generations, if it ever happens at all. Till then; till we figure out the right _prejudices_; till all of humanity naturally elevates to the right moral standard, shouldn't we be wary of those bad agents who can abuse others by breaking into their private matters?
Your premise, in short, assumes an ideal world where none is troubling others for their private acts, which unfortunately isn't the case yet.
Or more generally, you can't choose how people interpret data they gather about you and that can adversely affect you.
Like you, Snowden's freedom of speech line never impacted me... until I read this article. It suddenly hit me. The reason I was missing his point is because I was framing it in terms of what's in it for me rather than looking at it as what's in it for us. Someone who doesn't care about freedom of speech doesn't care because he doesn't see what's in it for him. But I doubt you'd argue the benefits of the first amendment.
Similarly, privacy is very important. You might not care (even though you really do), but defending privacy is about ensuring security. Privacy is important for all of us, just like freedom of speech is.
As for what the actual problem is, the problem for the most part is ignorance and a failure to quench it. We need more privacy / cyber-security advocates who can educate people on why they ought to care. It's like teaching people why it's important to lock their doors at night or why they should put their letters into envelopes instead of just using post cards. It's why my mom had to drill into my brain the importance of not giving out my social security number willy nilly. Are you so liberal with your SSN? You don't care about privacy, so would it bother you if Facebook or Google asked for it? After all, they just want to make sure you are who you say you are.
Things aren't obvious to us until they're obvious, and then it feels like common sense. DUH, lock your door! DUH, encrypt your messages!
I think the negative effects there are largely due to how private we are. If we were constantly confronting these things that seem embarrassing or concerning, we'd come to realize how normal they are.
It would require a complete shift in how we view privacy, one so large I doubt it would ever happen, but I think those are ultimately a symptom of the current system, where we often keep things private for the sake of societal or cultural norms, sometimes to personal detriment.
I'm not particularly arguing that either way is inherently right or wrong - but I do think the consequences you speak of are only meaningful in a world where a large measure of privacy, at least between most people in their day to day interactions, exists.
It seems common that the arguments for privacy trumping other values depend on bad behavior by state actors. In which case, reforming the state by whatever means necessary would probably do more good than advocating for philosophical concepts.
The situation is very complex because privacy has been implicit in our daily lives for so long, it's really difficult to map out the ways it would reduce our personal freedom. If we want to remove privacy, then we need to make it impossible for anyone to keep anything private from anyone else.
If privacy isn't important; then we should all live in proverbial glass houses where everyone can see everyone else's lives. Why should we trust the government with that power, why not everyone?
At the most basic level, we want to hide things because other people do not like them (which leads to reactions ranging from shaming to prosecution and stoning). Fundamentally, the only way for that not to happen is to have a completely homogeneous society, or for all humans to turn into saints. I will just assert the former to be bad, and the latter to be impossible.
We need safe (and private) spaces. At least in my view of the world, where I am on your side, I don't believe all people will turn into saints. Not even most people.
So by killing privacy and upping surveillance of everybody, we as a society will shoot ourselves in the foot, killing new ideas before they are even thought, I fear.
And when The People incorrectly decide that based on data you raped a 15 year old, you will be in prison for the duration of the trial, you will be on the sex offender list forever, and you will be inconvenienced with anything requiring a background check. You, not The People.
Ideologically, I agree, privacy is a lame side-effect of how groups of people work. Pragmatically, please don't take it away.
The lack of privacy may very well reduce the amount of false convictions. Sure, you looking up pix of teen boys might look suspicious. But the lack of privacy might catch the real criminal too.
If we had accurate gps for all people all of the time, it would probably reduce false conviction rates.
Plus, the way the system works now is that once you are a suspect, you really don't have privacy anymore. That's how the Constitution works. Once there is probable cause, the state will rifle through your stuff, ask your friends and family, etc.
On the mistaken conviction issue, I'd probably rather live in a privacy free state than a state with privacy. Assuming I was innocent.
Though I prefer privacy for other reasons.
I believe when this happens Hacker News won't exist anymore because the intelligence of human beings will be comparable to that of a fly.
Luckily... this hasn't happened yet, because I can still have intellectual discussions, even on the internet.
I like your separation of "ideologically" and "pragmatically". I agree, it's not a pragmatical approach.
And that's IF the internet or real lynch mob doesn't decide to go after you. If it does, then the being proven innocent part is the least of your concerns.
And hey, if it is in the news, it has to be true - doesn't it?
We will never fix these idiots (myself totally included). Because even if we do not believe these things, we will have them forever at the back of our heads when presented with someone's name: "maybe they did do the thing nonetheless, even if the court acquitted them".
This is just human nature. You cannot actively un-know something you heard, and this will sadly inform your inherent biases nonetheless, even if you intellectually know it to be untrue.
> Free speech is a completely different topic.
((James Madison rolling over in grave))
Oh, but freedom of speech is not a different topic. These two freedoms are enshrined in the US Constitution after centuries of experience in the old world.
Imagine that you are too young or too lucky so far to have information used against you or your family. Yet history shows that it happens again and again, and will again.
If you were living in western China, you might care, because you might end up an involuntary organ donor. Or if you were searching for gay porn in Saudi Arabia.
What if the assumptions they make raise the premium on your health insurance because someone sells your data? People (or, more likely, algorithms) making wrong assumptions, even if it is their own fault, can affect you negatively.
There's also the chance that those algorithms and assumptions are "correct" from a business standpoint (it would cost more to "fix" them than the monetary benefit of fixing them) even if they're not correct for consumers, meaning nobody that's actually in control of them has any motivation to fix them.
When late 19th century Germany started recording census data, a clerk made the suggestion that they should also record each person's religion. No one objected. What could be the harm, right? They were already collecting age, gender, occupation, etc. so they might as well collect one more thing.
Half a century later, the Nazis were able to use those same historical census records to identify whose grandparents were Jewish, and therefore who must be Jewish, which greatly aided in rounding up those people.
So imagine: a piece of information commonly believed to be harmless to reveal about oneself became the primary method that facilitated one of the greatest atrocities in human history.
The lesson here is that tomorrow's government may turn out to be very different than today's. Information you willingly reveal about yourself today, or don't mind others (such as the government) finding out about you, may be used against you, your children or their children.
That is why privacy is supremely important.
As Schneier puts it: two proverbs sum it up: "Who watches the watchers?" and "Absolute power corrupts absolutely."
I'd love to know why you don't care.
1) Do you really not care if people watch everything you do? Would you live in a Big Brother type house where everything you do is recorded and shown to the public? That includes you showering, shitting, picking your nose, having sex, watching porn, every conversation you have, etc.
And maybe you really don't give a fuck about that. I guess some people could stand naked in front of a crowd and shit themselves and not think twice about it.
But then on the other hand, there is one more thing to consider. What if the current government, or another one elected in the next 5 to 20 years, is really evil? And let's say they are against the things you believe in, and they believe anyone who believes what you believe should suffer unimaginably? Or what if you like to complain about things? You post negative restaurant reviews, write angry letters to the newspaper, stuff like that. And this new government doesn't like complainers, because they don't want to be challenged. They could just look at all the stuff you have complained about in the past and decide you are too dangerous. So they arrest you (and your family, because why not?) and then they put you in a concentration camp. Where you and your family are worked to death. Starved to death. Beaten to death. Raped. Experimented on. Burned alive.
I know it's an extreme example, but things like this have happened before. Except now it's easier than ever for governments to round up the people they don't like, and to find out who the people they don't like are. And the less you care about privacy, the easier you make it for them.
Would you still say you don't care about privacy? Or do you just believe something like that would never happen, and so it's not important that we take steps to prevent a future government from doing such things?
To phrase it differently, in a world where individuals have perfect control over their personal information, you can still post any of your own personal information to the Internet if you want it to be public. It's basically the difference between saying "I'm not personally a Muslim" and "I think practicing Islam should be illegal."
There are plenty of cases where privacy is important, even if you think they don't currently apply to you.
10 years from now a new government is voted in, and they severely dislike people with your political views based on some online comments you made in 2016. They could do anything from just making it really hard for you to vote, all the way to getting you fired, pulling your mortgage out from under you, etc. etc.
You really have no idea what a future government or corporation will do with so much data about you. Based on history, I think it's best to assume it won't be good for you.
In the naive sense, there is a lot of value in knowing location. Restraining orders, for example, could become effective. "How did your cell phone get to the bank that was robbed if you weren't there?" Stuff like that. Furthermore, politicians pander. Being able to answer how many people showed up for that protest is valuable. What were their demographics? Perhaps this is an issue that matters.
But the asymmetry is horrible. You want real-time access to my location? OK, but then make location public for the police too. And NSA employees. And senators. Having a record that's made public after a few weeks or months seems pretty reasonable to me. Having a record that's secret and controlled by a handful of powerful people, which is what I think we have now, is much more frightening.
You might think that, but it's just a matter of time really before someone asks you a question you don't want to answer. The users of Ashley Madison certainly had something to hide. They weren't doing anything illegal, so why should there have been any concern about privacy?
It's somewhat humorous when politicians use the "nothing to hide" argument. Governments and politicians seem to have the most to hide. It shouldn't be any problem for any government to release all information about any decision ever made, at least not after the fact.
Guns don't kill people. People kill people.
The value of data by far surpasses any risk involved with it. If people were more honest, transparent and straightforward, we would have a much more tolerant society. Everything would be much more efficient. Crowd sourcing of health data would be facilitated, and we probably would have greater understanding of health and be able to cure many diseases.
All arguments in favor of privacy naturally lead to the conclusion that we should get rid of the internet. And probably communication in general. Heck, make people blind and deaf, that'll solve the privacy problem. People are deluded.
Just one of the many instances where so-called progressive people blindly support an inherently conservative cause.
A right to privacy is similar to a right to lock one's car doors when traveling through a bad neighborhood.
Now you can decide whether you want the courts to have any ability to connect that crime to the medium that you were communicated via, or whether that information should simply vanish in the ether, leaving anyone to blackmail or extort anyone else from the safety of their own home and behind an anonymous Internet connection.
You've, in fact, made the opposite point to the one that you meant to :) when crimes start getting committed, sometimes society needs recourse.
But we were talking about privacy, not anonymity.
My argument is that even if you don't have secrets, someone with power over you (politician, judge, general, CEO) may, and you should want their secrets kept.
Maybe that's what you think. Let's see if you're willing to post the names, phone numbers, SSN or other ID numbers if you have them, and addresses of all your friends and loved ones. Got some weasel words about how that's not "personally"? Ok, start with your own info. Not going to do it? Didn't think so.
If you think you don't care about privacy, you probably haven't thought things through as others have.
If you apply the "nothing to hide" principle to states' own actions, I can think of two possible conclusions:
1) It is not true that if you have done nothing wrong, you have nothing to hide (i.e., there is a legitimate right to privacy)
2) State actors have something to hide, so they must be doing something wrong.
"Chilling effect" has always been a profound term for me, because I imagine the "cold" (numbness really) sensation a human body often senses when something truly awful (disembowelment/dismemberment) occurs. The body's way of protecting itself is to go "cold", and in many ways that's exactly the effect taking place here, as well.
There's also an undeniable part of this conversation that rarely gets addressed simultaneously, and I'd like to see it sussed out more in concert; what about the folks who are doing Evil in these private channels? It's unacceptable to me that TOR gets used for child pornography, and it's unacceptable to me that my government finds out I'm gay before I come out to my family.
I don't want to provide those who would do Evil any safety or quarter. I also want to give people a powerful shield to protect themselves against judgement and persecution from the public and sometimes the law.
We should talk about achieving both of these goals, but we generally don't.
It's infrastructure, so it's all inherently neutral. SSL is used by banks, protesters, and criminals alike. You can't weaken it for one group without weakening it for everyone. It's also global: you can't backdoor an IRC client only for marijuana users in the US, for instance.
So if you get to surveil pedophiles in the US, it means that Saudi Arabia gets to surveil homosexuals. We're on the same infrastructure.
Also, it's important to recognize that illegal behavior is a critical part of democratic change. If SSL could discriminate based on your intent to break a law so we could arrest them all, people campaigning for marijuana legalization would all be in jail, and the law would not be changing. So would people in the 60s campaigning for civil rights, and every homosexual in the country. There is always a grey period of time in which people break a law because they don't believe in it. That period of civil disobedience is how laws end up getting changed. Even (especially) morality laws against things like sexuality, drugs, or alcohol. It's important to a living democracy that the police are not a perfect force.
There might be a way to stop pedophiles and kidnappers, and rapists that we haven't thought of simply because we're not willing to talk about how we could do it.
Highways have police. Where are the digital police? I'm not sure I prefer such a thing, but why don't we even discuss it?
Evil is agnostic of location. Your question is of no significance. You might as well be perturbed over "Evil" people living in houses or eating food. Unsurprisingly, evil people are people and will tend towards the same activities people generally engage in.
We need to talk about how to create "no Evil/privacy", or at least how to approach something of that kind, even if an absolute version doesn't exist.
You have not established the slightest bit of an operational definition, and resort to pathologizing neutral transmission channels as hosts of "Evil". This is a complete non-starter and not worthwhile to deliberate. "Evil uses Tor" is as useful as "Evil uses paper".
I could rob whole countries of their future (say Greece) with paper treaties.
I can rob people of most of their democratic power (see the many attempts at treaties like TTIP and the like).
All on paper and in my eyes tending to the evil side.
The problem with evil, though, is that it isn't an objective term; it is not empirically measurable, and it has so many definitions and perspectives that it effectively has none.
Evil is a weasel word, propaganda, nothing more. So we really need a better word, a better definition. And breaking the law or something like that does not work either, as, for example, the laws in Germany, the US, Saudi Arabia or China tend to differ massively. They are ideologically tainted and do not provide an objective framework/reference either.
So we should work on an actionable and helpful definition first and not go partisan on the medium to be controlled.
Quick test - is looking at a photo of a naked child evil?
Or rather, we could if you were being intellectually honest.
Most people, if given the choice, will nearly always pick safety over privacy. It's simply not enough to say you can't have both, because privacy will eventually get thrown out by the electorate, of any country.
If that were an available option, it would be fine. I'd choose no evil and no privacy, because I'd know that my information, and more importantly the information of public officials and company leaders (folks who wield power), wouldn't be abused.
But that's not an option that's on the table, and whatever it is they intend to pick, it is not what people accomplish when they endorse surveillance. Our governments do some pretty evil stuff, and those are the people who end up with the power in these sorts of arrangements. What people are actually being given the choice between is "a dubious promise of safety, an evil that doesn't offend them personally, and no privacy" or "a perceived higher rate of evil that does offend them personally (you're under threat, they're coming for your kids, they hate freedom, etc.) and privacy."
You seem to be taking the no evil part of things as essentially solved. But it's not. We do not have systems we can trust not to abuse this.
This isn't a conversation about absolutes, it's a conversation about shifting degrees. I'd like to shift towards the "less Evil/more privacy", but no one here or anywhere wants to try to come up with ways to do that, because everyone just assumes privacy gives Evil room to grow.
I considered it, but I didn't grant it, since it was not clear that it applied, nor would it be clear in future discussions like this one. You adopted an extreme position which is not analogous to the discussion you wished to have, and that is not an act of rhetoric, the art and skill of discussion. It is simply poor communication, as evidenced by how widely your claims would be misunderstood.
Rhetorical license doesn't cover that. The charity of understanding that others extend to a speaker is not an unlimited effort.
> I'd like to shift towards the "less Evil/more privacy", but no one here or anywhere wants to try to come up with ways to do that, because everyone just assumes privacy gives Evil room to grow.
It does. You gave such an example yourself: TOR and child pornography. More widely speaking, it's possible to lessen the 'evil' in society through a great many means: better schools, better welfare provision, stronger community links. More to the point, many, even on this site, seem in favour of taking those steps: Strong encryption? Please. Better schools? Yes please. Better community organisations? Yeah. There are of course people who'd say no, but that does not change the fact that people in favour of them are easily found.
Many of the steps that could be taken in that regard seem orthogonal to the larger discussion at hand (encryption security, as structured by reasonable context), mind.
The principle that governments should have covert back doors into our information and communications channels is no different from saying they should automatically get a copy of all of our physical keys, a way to secretly remotely activate and use every camera we own, or remotely activate and listen in on every microphone in our houses.
In fact, as everything moves to electronic, always-connected internet of things platforms these things become increasingly not just equivalent but identical. Soon electronic privacy will be the foundation of every kind of privacy.
However, the true wins will come from doing real-world police work and educating parents and children on how to protect themselves and what the potential offender profiles are (hint: not guys in an ice cream van). They will come on a diplomatic level, by negotiating better laws in countries where such materials are produced (Japan was a recent success, AFAIK) and where sexual tourism is rife.
Finally, those individuals who haven't abused anyone should receive support from a mental health specialist if they come forward and admit to their urges, as is done in Germany, for example.
I have the feeling that this is a taboo subject that is not discussed frankly in most societies. The authorities focus mostly on harsh punishment instead of prevention through education and mental health treatment.
Focusing on what happens on Tor is a mostly reactive policy that does little to treat the causes.
It would be better to ask whether someone, who is not a willing participant, is harmed by the activity. That's easier to establish.
The fact is, our collective idea of good and evil is internally inconsistent. It's still skewed by our repulsion to sexual deviancy and our fear of being judged by our peers. We really can't distinguish right from wrong. We aren't even able to apply the simple test you proposed.
I agree, but it's nearly impossible to have the best of both worlds.
About the first one, physical: what kind of stupid, incompetent secret service lets an entire terrorist organization train its people in training camps, physically dispatch them to their targets, get their hands on weapons and explosives, and successfully commit their crimes, while doing nothing because it is really focused on breaking Tor?
Universal surveillance is not just useless for fighting terrorism, it's actually harmful, in several ways. And most of those apply to almost any use you can come up with for it.
About the last one, digital - inside of the criminal's mind: those people need help, not punishment. Stop punishing them and they'll seek you out.
Now, the real problem is the second set, digital - real Evil. I have no good answer for those, but the surveillance people also lack an answer, and are almost completely useless against that too. I'm not willing to accept an argument claiming that mass surveillance needs to exist so those people may start doing that work in the future, once they discover some way to do it.
Secrecy does not exist any more
Privacy is the politeness of your neighbours.
Now everyone is your neighbour
As such we choose to not be polite to paedophiles
However, only a limited number of us choose to be polite to people still in the closet. Others will impolitely sell pink insurance; others scrawl hate messages.
I am not sure where I am going with this, other than that politeness is hard to enforce.
Because it is a logical fallacy. To enforce something implies using violence, which is considered mean and not polite.
The same logical fallacy applies to gun control. Oh you want to ban or restrict access to guns? How are you going to do that? With guns of course.
Seatbelts, airbags, traffic lights, food safety, drug safety, pollution control, national defense and many more benefit pedophiles just as they protect good people.
The crux of the debate, then, is: where do we draw the line between safe and unsafe amounts of power?
1. It must be granted through democratic means.
2. It must be under strict oversight by an independently elected or appointed group that's free from both private conflict of interest and popular pressure.
3. There must be reliable mechanisms to quickly and efficiently strip said power away from the authority if they are determined to have used it irresponsibly.
The Bush administration is a good example. They just did warrantless wiretapping. It wasn't hard or particularly expensive. No public debate. And that was a government that more or less followed the rule of law. Imagine one that no longer follows the rule of law.
You don't even need an apparatus. Just send an FBI analyst to pick up Sundar Pichai and Zuckerberg. Have the companies run queries on their own database.
Our government already has the power to mass murder. Obama can order entire continents destroyed. The air force can drop a JDAM on any house in America. The police can arrest any political enemy of the state. The government already has immense power.
Comparatively the sort of privacy issues we are talking about are smaller powers. And like I argued above, they are easy to acquire.
Any government willing to abuse the power of surveillance would be willing to flout the law to create a surveillance program overnight (well, not literally, but they could do it in months).
I'm not arguing that there is no downside to surveillance power, just that it's not as dangerous as many make it out to be. For example, there is still a risk of official abuse by government employees going rogue. There is a risk of data leaks. And smaller-scale abuses that can be covered up or that the public wouldn't care about.
But I think the fear that we shouldn't give the government power to surveil because it might go full Nazi/communist/theocratic/etc. is silly.
Also, when I send an email to my friend "firstname.lastname@example.org", sure, the data captures the recipient's email address. But the data doesn't reveal who laserpants actually is, nor does the email content get saved. I'm not saying laserpants can't be found if the law decides to investigate, but I doubt it's a matter of pressing a button to bring up the real name of laserpants. Especially if laserpants uses different email addresses and a shared internet connection.
My take on it is that privacy is dead, or nearly so, and we have to have good legal protections over who can use what data and when. The privacy arms race will mostly be won by big government, with lots of resources and enough willing/foolish patriots (depending on your point of view).
It is just a shame that we have to go through the whole cycle when we already know it will happen. But in a way, the more extensive the surveillance is, the quicker this cycle will complete. And right now we are in a pretty bad place already. So let's be optimistic!
If you regularly send emails back and forth with a specific doctor, I have a pretty good idea which condition you have. If you regularly call a specific company at specific times, I have a pretty good idea that you work there. ...
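The point above is that metadata alone is enough; no content inspection is needed. A minimal sketch of that kind of inference, using entirely hypothetical log data and addresses, might look like:

```python
from collections import Counter

# Hypothetical metadata: (sender, recipient) pairs from an email log.
# Note that no message content appears anywhere below.
log = [
    ("alice@example.org", "oncology.clinic@example.org"),
    ("alice@example.org", "oncology.clinic@example.org"),
    ("alice@example.org", "oncology.clinic@example.org"),
    ("alice@example.org", "bob@example.org"),
]

# Count how often each sender contacts each recipient.
contacts = Counter(log)

# A repeated pattern with a specialist clinic is suggestive on its own.
for (sender, recipient), n in contacts.items():
    if n >= 3:
        print(f"{sender} contacts {recipient} regularly ({n} times)")
```

Real traffic-analysis systems are of course far more sophisticated, but the principle is the same: frequency and timing of contact reveal relationships.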
Also, private surveillance is not about "finding you", but about influencing groups of people. For business purposes. Or maybe for political purposes. If I know that you are likely to be receptive to a specific kind of emotional message, I don't care what your birth certificate says, I care about how to get that message onto your screen in front of your eyes.
And finally, as others have mentioned: Yes, it is a matter of pressing a button. That is the essence of what Snowden revealed, if you will.
If your friend uses a separate, privately maintained email address for everyone and every service, that will help a lot. Then we would only know his identity because email is sent in the clear, even between his own server and his home computer. Of course, everyone uses separate email addresses for everyone, right?
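One low-effort way to approximate separate per-service addresses is plus-addressing (subaddressing), which many, though not all, mail providers support by delivering `user+tag@domain` to `user@domain`. A sketch, with hypothetical names:

```python
# Sketch of plus-addressing, assuming the provider delivers
# user+tag@domain to user@domain (many do; a catch-all address on
# your own domain is the more robust alternative).

def service_address(user: str, domain: str, service: str) -> str:
    """Return a distinct address to hand out to one service."""
    return f"{user}+{service}@{domain}"

def leaked_by(address: str) -> str:
    """If spam arrives at a tagged address, the tag names the leaker."""
    local = address.split("@")[0]
    return local.split("+", 1)[1] if "+" in local else "unknown"

addr = service_address("alice", "example.org", "shop")
print(addr)              # alice+shop@example.org
print(leaked_by(addr))   # shop
```

Note that this only compartmentalizes addresses, not transport: as the comment above points out, the mail itself may still travel in the clear.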