I think it is up to us to figure out how to make this brave new world work.
As Bruce says, the real issue now is not surveillance or even information, but power. Who gets to watch whom, and (of course) who watches the watchers?
I see four strategies for coping with the issue of power --
1) Default - maintain current trajectory - this is what most of us will do.
2) Withdraw - lower one's profile. This seems to be Bruce's plan.
3) Expose - bring buried secrets to light, and the watchers under surveillance. Expect harsh repercussions. The US govt has been increasingly secretive, has been stripping whistleblower protections for some years now, and has been punishing exposures of information more harshly than in the past (though I suppose the exposure of Valerie Plame might pass as an exception, unless your name is Scooter Libby).
4) Disrupt -- frustrate the collection of information through jamming, feeding false info, deletion. But info seems to be cheap enough that bad info doesn't really hurt those using it.
There is a fifth option, which is to construct through the democratic process a new set of civil rights and policies designed to confront and limit these new powers.
For example: we can seek to make it illegal for certain information to be tracked; we can require that any information collected be disclosed to the individual at risk of being tracked; we can expand fourth-amendment protections explicitly, forcing a warrant to be issued to collect any of the information made available through these systems, even if the data is stored on a corporate server (doing away with national security letters and administrative subpoenas, and prohibiting disclosure without a warrant); and we can impose criminal penalties on government and corporate officials that violate these rights.
And more: we can make exposure of all but the most sensitive government "secrets" mandatory by law, rather than relying on legally questionable whistle-blowing for disclosures; we can ensure that encryption technologies (and other self-protection techniques) always remain legal, and remove any requirement that backdoors be added to new communication technology; we can work to decentralize corporate power in telecom by aggressively enforcing anti-trust laws; we can improve transparency and oversight of telecom services by explicitly encouraging that high-speed data service be provided as a local public utility.
And we can reduce the incentive that exists at every level of law enforcement to undermine privacy and civil liberties (and in turn further centralize power) by making fewer things illegal--most importantly, drugs.
While I'd love for those things to come to pass, it's alarming that you elide methods and obstacles entirely. The democratic process has been purchased by the very entities you're proposing to regulate, and the entities you'd have enforce these new rules are beneficiaries of the current arrangement who would be greatly inconvenienced by your changes. They will resist with the utmost vigor, so it doesn't speak well of your proposal that it supposes, via handwaved methods and tactics, that they will cooperate with changes that run against their interests.
I think the first step towards change is to envision the world that you want, and find people that agree with that vision. And if you can't agree on everything, find that subset of policies and objectives that people can passionately agree on, and forget everything else. Then, get organized.
Start by going door to door; find people who care enough about the issue to help out; have in-person meetings and events and build connections; give people the tools--online and offline--to recruit participants and stage their own meetings; orient people towards a focused set of objectives, and ask them for money to help you broadcast those ideas with counter-propaganda; ask people to contribute time and money to candidates who support those objectives; always keep everything above board, to avoid being attacked; rinse and repeat. If enough people care, you can win; if not, then maybe "money in politics" wasn't the real hurdle to reform--maybe it's just that the voters didn't want the reform you were preaching.
Democracy today seems broken, like it doesn't work any more, but it's always been "broken"; it's never "worked." The progressive movement and the labor movement of a century ago both fought an uphill battle against big money in politics. It took decades before they experienced lasting success. When voters want changes in government that cost profitable businesses money, a lot of those businesses will spend at least as much money as they stand to lose to prevent that change from happening. It was as true one hundred years ago as it is today.
The fact is, though, that absent massive violence or election fraud (which itself is only possible when the polls are tight) with all the money in the world you still need people--individual people--to vote your way. Money in US politics has influence only insofar as it can be used to convince the masses of voters to vote a certain way. The thing is that money and propaganda have limits, especially if a message of reform resonates. That's what social movements are all about--coalescing around a message of reform that makes propaganda sound unconvincing.
Some people don't mind being tracked, and some people prefer it (because it can increase the quality and relevance of services offered to them). I can't think of a reason we should outright prevent such services from existing. Additionally, such regulations would probably end up applying only to corporations (which are usually not malicious and only want my money) but not to governments, which can cite "national security" concerns to justify tracking people anyway.
As for mandating government transparency and protecting encryption, I think those could have a lot of potential to help without unfairly limiting any person or group. And I agree, we should make fewer things illegal, which is another reason it might not be smart to make collecting consumer data illegal.
I think this is the correct course of action. No matter what amount of privacy we have, and I think we are entitled to a lot of it, nothing stops someone who has more power from exploiting someone who has less. All the encryption in the world doesn't stop someone from breaking down a door and breaking the code with a rubber hose, so to speak. What does stop people from doing this is a very strong social stigma against this kind of behavior. Or at least it was a very strong social stigma; it seems to be breaking down now if you are of a certain color or religion.
Only by maintaining the social stigma can we really protect ourselves.
This is, at its core, a social issue, not a technological one.
Additionally, we should be voting with our money on services that embrace privacy and plausible deniability as a core design tenet. Unfortunately, at the moment there aren't many good examples that I'm aware of; the most noteworthy would be Mega (assuming it lives up to what they claim).
It's nice because in the next update (to be released this month, v3.5), you'll be able to do things like encrypt your email and keep two different versions of it (one for you, and one for anyone you don't want knowing what your actual email says). You can already do this in the current version; it's just not as user-friendly.
It will only protect data that's kept encrypted on your local drive though, so with the example of email, it doesn't protect you from the government if your email is also stored on Google's servers.
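For what it's worth, the "keep it encrypted locally" idea is simple enough to sketch. This is not Mega's actual implementation, just a minimal illustration in Python using the cryptography package, with a placeholder filename: only the ciphertext ever leaves your machine, and as noted above it does nothing for plaintext copies that already sit on someone else's servers.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # keep this key on your machine only
    f = Fernet(key)

    plaintext = b"Draft: the version only I should be able to read."
    ciphertext = f.encrypt(plaintext)      # this opaque blob is all you'd upload

    with open("draft_email.enc", "wb") as out:   # placeholder filename
        out.write(ciphertext)

    # Recovering the message later requires the key you kept locally:
    assert f.decrypt(ciphertext) == plaintext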
I think breaking down those strategies is actually really useful. It's interesting to think of these strategies from a product development perspective as well; how will products react to new consumer strategies?
1) Default - most products will (I think) continue to collect as much data as they can
2) Withdraw - I doubt most companies will withdraw from the data collection frenzy that is the current marketplace
3) Expose - There's definitely room out there for more services that expose patterns in public data, or more mechanisms for publicizing private data (the New Yorker Strongbox, for example)
4) Disrupt - One example in this space that I love is location spoofers like MediaHint, which allow users to access content that is designed to be location-locked.
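For the curious, the usual trick behind that last category (not necessarily how MediaHint itself works) is just routing traffic through a proxy in another region, so the service sees that region's IP instead of yours. A rough sketch with Python's requests library and a made-up proxy address:

    import requests

    # Hypothetical proxy located in the region whose catalogue you want to see.
    proxies = {
        "http": "http://proxy.example.net:8080",
        "https": "http://proxy.example.net:8080",
    }

    # The geo-locked service now sees the proxy's IP, not yours.
    resp = requests.get("https://example.com/region-locked-page", proxies=proxies)
    print(resp.status_code)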
> As Bruce says, the real issue now is not surveillance or even information, but power. Who gets to watch whom, and (of course) who watches the watchers?
I couldn't agree more.
"Fox News Reporter James Rosen’s Private Emails Given To Justice Dept. By Google"
Why? Calling members of Congress is incredibly effective, especially in large volumes. Organizing politically has always been the way to create real, durable change in the law. It's how the U.S. got labor, civil rights, and environmental protections enshrined in the law despite well-funded opposition.
I know it's very cool these days in Internet-land to consider the system too broken to engage, but that seems to me to be a self-fulfilling prophecy.
It's effective on an individual level, but just telling people "call your congressperson" doesn't scale. Most people are too busy to individually monitor each bill, vet it on privacy, and decide whether or not a call is needed. So to really effect change you need an organized movement or interest group that can pool the voices and resources of all those people together into something so big it can't be ignored.
Currently the only group like that I know of on technology & privacy is the EFF, but so many corporate giants have an interest in eroding your privacy that you'd need to bulk up their funding base considerably to make them able to take those giants on directly. (Or go the other direction, people-power rather than money-power, and develop a base of activists who are willing to march for privacy.)
I think you are right, but organized movements only work with grassroots efforts; it's up to those movements to mobilize a grassroots base to effect change. We've seen how successful this is in political campaigns (the Republicans were hurt in numerous elections by their lack of "ground game"). Additionally, and I mention this elsewhere, this is really a social problem, not a technological one. If the government (or a corporation or a powerful individual) wants to get me (via surveillance or anything else), the only thing stopping them is social norms and the laws those norms reflect. Without those, no amount of technology will keep me safe. With them, we can hope for a normal existence with all its requisite benefits (i.e. technology).
The only way for us to protect ourselves is through political and social action.
We don't have to use Google, nor Facebook, and we can still buy prepaid mobile phones with cash, and it's up to us just how much information we share online.
Maybe I'm a hermit, but I'm online every day and images associated with my name are quite scarce.
Add to that the fact that there are literally dozens of Americans who share my exact name, and my privacy worries are negligible within the context proposed by the author.
That said, I do agree with the premise that surveillance is growing increasingly intrusive, thus my habits outlined above.
Why does he always have to be such a pessimistic grandpa? Privacy as he knows it doesn't exist anymore because the majority have made a choice to give it up in exchange for new value created by new technology. I'm unsure what Bruce Schneier has added of any value to society lately except his rampant skepticism. To me he's turning into the Richard Stallman of security/privacy.
The majority have made a choice, but not a conscious choice. Not only do many people still chase the latest shiny thing or hand over their details willy-nilly without thinking of the consequences, but it's quickly becoming the default to do so, making it even less of a choice and more just what you do in everyday life!
There are plenty of things I've found worth trading my privacy for, but it's been a conscious choice every time.
Some things you really do need to sign up for, like an internet connection, but you have no choice about what happens to your data. You know almost all ISPs will roll over if any authority wants your data from them. You just have to accept it if you want the service, and often these are vital services. It's not much of a choice.
Funny you say that. A lot of retail chains in the UK ask for your postcode as you make a purchase, which basically identifies you. I'm never comfortable with that arrangement, but I hand out the data anyway, basically because I don't want to make a scene in public.
I like the way you've phrased it: not a conscious choice. I wouldn't say that ignorance plays an active part in making a choice. But it certainly feels less complicated.
Make up a fake code, or memorize a couple of random postcodes which aren't yours? Grocery stores in the USA routinely ask for your phone number for their "rewards program", and I just punch in one of any number of commonly-known numbers, which other people have conveniently registered for me. This occasionally leads to hilarity - Safeway's database shows that (206)555-1212 belongs to one "Hardman Dick"...
This practice is relatively common in the US as well, and it's never turned into a scene when I refused. I've also been asked for (and refused without making a scene) my phone number and email address. I just say I'd prefer not to give it out, and they go on with the process. Obviously I can't vouch for the UK, but I'd say at least try it once.
You are right of course, I doubt there would be a scene, only me embarrassing myself by probably dragging out some kind of justification of not wanting to hand over my details. Perhaps it's down to being schooled in implicit compliance, and it feels a little weird!
It drags you down though. Every supermarket you go to asks for your loyalty card - which I refused to opt in to for years, but you're still confronted with the question every time - and it gets tiresome.
I do my shopping by proxy, through a partner's loyalty card, and I've been pretty surprised at how sophisticated these systems have become.
There's a desperate battle between outlets now for custom. Loyalty cards now lead to offers (coupons) on items from the weekly shop, and our shopping basket is quite abnormal, I'd say. We are actually recouping some worthwhile savings for once, rather than being offered some promotional discount on something I have no interest in. I feel a little wrong about it, but I can no longer resist the enticement.
> "I doubt there would be a scene, only me embarrassing myself by probably dragging out some kind of justification of not wanting to hand over my details. Perhaps it's down to being schooled in implicit compliance, and it feels a little weird!"
I've been asked and politely declined probably hundreds of times by now. It's never led to any further interaction beyond my simply smiling and saying "no thank you".
The cashiers don't question it, or stare, or even miss a beat (sometimes a 'new' person will hesitate for a second, thrown off their muscle-memory pattern). But surely they hear it from more people than just me.
And I find making a shopping list beforehand, based on what I've actually used since the last trip, is more effective than coupons or loyalty cards. The coupons did start getting more properly targeted, but they also entice purchases I hadn't previously had on my list, which calls into question the notion of having 'saved' any money.
Similarly, I would be 'saving' money on things that are only useful when I purchase their non-special-/non-coupon-priced complements (e.g. a coupon for hot dogs leading to a purchase of regular-priced buns, or a coupon for peanut butter leading to a purchase of regular-priced jelly).
Not to single you out specifically, but you guys sound like a bunch of pansies. Make up a few fake people with names and addresses and use those. It's more fun to do so on the spot.
Ted Billson. There...use that. His email address is ted@bill.com if you get asked. ;)
His email address should really be ted.billson@example.com.
Another 'fuck you money' project: a set of domains with an open SMTP server that does nothing but flip the headers around and re-send the message back to the sender.
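Half-seriously, the plumbing for that isn't much code. A rough sketch in Python, assuming the aiosmtpd package, placeholder hostnames, and a local relay on port 25 (and ignoring the obvious backscatter/abuse problems):

    import smtplib
    from email.message import EmailMessage
    from aiosmtpd.controller import Controller

    class BoomerangHandler:
        async def handle_DATA(self, server, session, envelope):
            # Flip the headers: whoever wrote to us gets the message back.
            reply = EmailMessage()
            reply["From"] = envelope.rcpt_tos[0]     # our throwaway address
            reply["To"] = envelope.mail_from         # the original sender
            reply["Subject"] = "Returned to sender"
            reply.set_content(envelope.content.decode("utf-8", errors="replace"))

            # Hand the flipped message to a local relay (hypothetical setup).
            with smtplib.SMTP("localhost", 25) as relay:
                relay.send_message(reply)
            return "250 Message accepted for delivery"

    if __name__ == "__main__":
        controller = Controller(BoomerangHandler(), hostname="0.0.0.0", port=8025)
        controller.start()
        input("Boomerang SMTP server running; press Enter to stop.\n")
        controller.stop()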
Be careful when doing this. Some systems require an email address and employees will make one up to get to the next screen. If that address is valid (or not bounced), someone can get control of your account with something as simple as a password reset.
I think you are overestimating the majority's ability to make an informed decision on these issues. Most people are still shocked by the idea of web servers keeping logs, let alone the privacy implications of their decision to use GMail or Facebook.
I think the issue is that people outside of our space often fail to fully understand how information can be collected on them and how much analysis is possible.
For example, people don't really understand basic stuff like client/server architecture, and believe that their facebook profile just "is" without thinking that every time they do something on facebook, a row is being added to a relational database on a computer that is somebody else's private property.
And that row will probably never be deleted regardless of how they toggle their privacy settings.
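To make that concrete, here's a toy illustration (my own invented schema, not any real site's): every click or "like" is just another INSERT into somebody else's database, and nothing in the normal flow ever deletes it.

    import sqlite3, time

    db = sqlite3.connect("their_server_not_yours.db")   # hypothetical datastore
    db.execute("""CREATE TABLE IF NOT EXISTS user_actions (
                      user_id INTEGER,
                      action  TEXT,
                      target  TEXT,
                      ts      REAL)""")

    def record_action(user_id, action, target):
        # Called for every click, like, or page view; it only ever inserts.
        db.execute("INSERT INTO user_actions VALUES (?, ?, ?, ?)",
                   (user_id, action, target, time.time()))
        db.commit()

    record_action(42, "like", "post:9001")
    record_action(42, "view", "profile:1337")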
And we need people like Richard Stallman to provide a contrast. That said, I think the guy provided a fair article on how things might turn out based on how they are currently progressing.
I work in the IoT space and I try to temper my enthusiasm for it when I talk to non tech people. After giving some cool examples I tell people,
"Soon, Moore's Law will make it cheap enough to connect everything you own to the Internet - it is not a question of if, just when. All your things may or may not talk to you; but they will definitely be talking to each other about you."
Skepticism _is_ valuable when people otherwise blunder blindly forward. That said, I would like to see proposed solutions alongside his account of the problem.
I read the implication that there isn't a solution; his point seems to be that connected devices are becoming so pervasive that even if you want to avoid them you can't.
The implication, at least to me, is that the only way to avoid it would be to go mountain man (I'm aware that may be a bit of a non sequitur). But even that would be a wasted effort should the government, for example, decide it wants to surveil you. I've recently finished reading The Triple Agent (http://www.amazon.com/Triple-Agent-al-Qaeda-Mole-Infiltrated...) and Manhunt (http://www.amazon.com/Manhunt-Ten-Year-Search-Laden-Abbottab...). The capabilities for detection and surveillance are staggering, and the abilities developed over the last decade to find and capture or kill are impressive and a bit scary.
At this point, I think this may be the only way in which Schneier can get any attention. Everything else he does is too bizarre to the rest of us to be relevant. Edit: I believe I was incorrect about the email thing.