(disclaimer: I work for Google. My words are not theirs)
At first I thought "oh. Google fan fiction. This is what we have come to." After a moment, though, I realized that this article isn't really any different from the other hyperventilating blog posts that have appeared all over HN recently. I'm not saying that these aren't important topics to discuss, but everything I've read recently has come off as a prurient privacy daydream. Whether it's people writing polemic screeds about Glass despite knowing nothing about how the devices actually work (which they make up for by imagining a host of capabilities and features that it doesn't have), or things like this that manufacture a Lovecraftian monstrosity that has as much in common with the Google of today as a pineapple, they're not really saying anything interesting. Every argument is trivial to win if you first convert your opponent into Mecha-Hitler.
Is privacy a central, unsolved challenge for the next decade? Yes. On the one hand Google (and Apple and Amazon and ... ) need to innovate, or we'll soon see posts on HN describing how "Search has stagnated" and "Google is done for". Keyword search isn't good enough anymore, but to do anything more you need to start understanding the user's context. The "Star Trek computer" interface that everyone wants can't function if it doesn't have a sense of the world and the person it's talking to. I would love it if there was a distributed way for people to provide this information without having it live in a centralized datacenter somewhere. Sadly, no one's really talking about that.
(there's an equally interesting discussion to be had regarding public privacy and cameras, but vilifying Glass isn't going to make that problem go away.)
I'd actually be quite satisfied if the current state of the art for general-purpose web IR stayed about where it is now- it's pretty dang good! I can't remember the last time I needed to find something using text search and was unable to. And when Google folks talk about making search "better", what they seem to generally be talking about these days is making it more "personalized", which strikes me as a questionable goal in terms of costs and benefits. Cost: living in an electronic panopticon the likes of which Orwell could never have envisioned, having my private information used as a raw material for somebody else's monetization strategy, etc. etc. Benefit: marginally better IR?
Note that I'm not writing this as some sort of technophobic Luddite- I'm actually an academic researcher who works on IR problems. I guess it's just that I, as a potential end user, don't see enough value in hyper-personalized search to be worth the inevitable tradeoffs in terms of privacy and information centralization.
Plus, I'm absolutely not anti-Google, or full of conspiracy theories, or anything like that- but I still find it more than a little creepy to think about just how much information they have about me. Just sayin', is all.
Sometimes it's the technologists who are the most resistant to technological change. I sometimes get annoyed when Google displays results other than the 10 blue links, and tries to correct my searches when I'm searching for info on APIs, but I recognize that I'm in a minority and most people would benefit from more hand-holding.
Put another way, you may be comfortable enough with computers to remember what directory your files are in, to handle backups manually, and all that. But my mother, despite decades of my attempts to the contrary, is not and never will be. And far more people are like her than me (and, I suspect, you). I think this is what Larry Page is in part talking about when he invokes the idea of the 1-percent-of-what-we-could-be-doing.
Take a look at my colleague Sharon Vaknin's test of Google Now vs. Siri. They're both remarkably capable by the standards of even three years ago, but unlike Siri, Google understands context. "How tall is President Obama?" <answer> "By what margin was he reelected?" <answer>
http://howto.cnet.com/8301-11310_39-57584898-285/google-now-...
I'm echoing what ender7 wrote above, I think, but you can't have a useful conversation with 10 blue links.
>>Sometimes it's the technologists who are the most resistant to technological change.
Because it's the technologists that truly understand the pros and cons of a proposed technological development. Everyone else tends to be starry-eyed daydreamers who are living in an alternate reality.
Good point, and there are a lot of factors that go into it. r00fus made a similar point in a sibling reply, and I absolutely agree that users like us are quite different than 99.99999% of Google's other users- it definitely sounds like we have similar family members. And as I replied below, experts and novices have very different needs in terms of their information behavior.
I still don't think that lets Google off the hook, though; it just makes it more important that their personalization strategy be backed by very flexible, granular, and technically meaningful privacy controls. It means that Google should be setting an even higher bar in terms of educating their users about precisely what data gets collected and exactly how it gets used- and they should empower their users to control how that personal data impacts their experience using Google's products.
The number of their users who actually care about this stuff right now is basically a rounding error when compared to their total user numbers, but that doesn't mean it isn't important. And hopefully, as more and more of our life is conducted online, the number of people who care about this will increase.
I agree on "personalized" search. It can be handy (I search "Backbone" and get BackboneJS), but I worry that highly personalized search and content can lead to a feedback loop in which your pre-existing perceptions and biases are perpetually reaffirmed.
Imagine someone who takes Fox News and InfoWars seriously only being presented information from those sources. They could never escape!
Or take someone interested in mindless topics like celebrity gossip and whichever Kardashian has recently lost weight. A customized information feed will only enable them to fritter their life away in the pursuit of mindless, inane entertainment. They might come to think that these things actually constitute attention-worthy news!
Granted, these scenarios apply a bit more to customized news feeds like Yahoo homepages or Google News, but to some degree it applies to search as well. Sometimes I worry that I am already trapped in a closed information loop. Maybe the U.S. is an oppressive dictatorship and North Korea stands for freedom and innovation! I may never know.
I find that when companies say "better" usually they mean "more profitable for us".
Sometimes it involves end-consumer benefits, but often not. Google used to be one of the companies where this rule didn't apply as much. In their shift to becoming Microsoft2, this rule seems to apply more and more.
This isn't a critique of Google insomuch as it's a critique of any sufficiently powerful corporation.
As a programmer, you're probably a lot better at understanding what a search is/does and how to pick good keywords, or which keywords you can swap and refine to get better results. Most of the general population isn't there, in my opinion (just based on what I've seen watching other people try to search for things). The general population doesn't use site: to restrict results, - to remove a word or even " to search for an exact phrase.
I think those are the people who could benefit more from a more context-knowledgeable search, and I think that's still a huge majority of people.
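For concreteness, these are the operators I mean (all three are long-standing, documented Google query operators, though their exact behavior has shifted over the years):

```
"dependency injection"            match the exact phrase
jaguar -car                       drop results mentioning "car"
site:stackoverflow.com segfault   restrict results to one site
```

Most users never type anything like this; they type natural-language questions, which is exactly why context-aware search helps them more than it helps us.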
That's definitely a fair point- "experts" have very different search behavior from "novices", and could potentially get more benefit from personalized search support.
I don't really mind the personal data collection, except when the results are too obvious and exactly opposite of what I need. For example, after reading some articles on Social Security disability (and how some areas have as much as 25% of their population drawing it), I got curious and started doing some basic fact checking. Now, for the next several days, every web page I visit has a Binder & Binder ad on it.
On the opposite end of the spectrum, I have a hobby collecting and studying old slide rules -- including E6B "flight computer" circular slide rules. This included searching for, and reading up on the Breitling Navitimer watch, and similar watches. Well, now I get ads for various luxury watch brands (not interested in buying a luxury watch), along with Binder & Binder ads (not even close to disabled).
I'm still not quite sure why this annoys me so much. I think it is the idea that someone is making a judgement on me / what I want, based on certain actions that had a completely different purpose. Probably similar to how urban photographers feel practicing their hobby, and getting harassed as potential terrorists or something. Yes, this is a minor thing to nit pick, and it isn't a major deal, just a bit unsettling at times.
Folks in real life make judgements about you, and what you want, all the time, and in most cases it's entirely acceptable. Maybe it's disconcerting because online matches are often based on more tenuous information and less context. It can be ham-handed.
As someone who has been harassed for urban photography, by law enforcement, I think that's a very different situation. Having someone in a position of authority mis-interpret innocent actions is a much more threatening thing than someone doing it for commercial reasons. A vendor can bug you but they're not going to detain you based on their guess that you might be interested in what they're selling.
I raise this primarily because it seems like conflating an innocent mistake with overzealous / threatening law enforcement devalues the latter problem.
Yes, it is on a different level -- that was just the first example that popped in mind (and I hesitated posting it, worrying that I could offend someone else). But you have the same problem with any analogy; there are always differences between the two items being compared.
I think the problem for me is that I see it as a software bug, and since it is a bug that comes to light based on personal actions / data, it gets under my skin. Kind of like how Clippy got annoying back in the day. Or when Tivo makes inappropriate recommendations because you happened to watch a particular show (such as "My Tivo thinks I'm an 80-year-old retired grandmother").
Part of the issue, I suspect, is that we aren't as forgiving of computers (and in particular, ad servers) as we are of people, due to different expectations.
If you went to a video rental store (back when those existed) and the guy recommended 3 videos to you and 1 was a clunker, you wouldn't be annoyed. But when your Tivo does the same you may not be as patient, even though it's making rational decisions based on your past behavior.
If you went to a video rental store (back when those existed) and the guy recommended 3 videos to you and 1 was a clunker, you wouldn't be annoyed.
Can you speak for yourself? You're kinda proving the point here. Know yourself, but don't pretend to know or speak for others... that goes for everybody, orders of magnitude more so for corporations.
I really don't see that happening, unless it was done under the table. And even then, it would most likely be with lower-tier companies that believe they could fly under the radar and get away with it. Of course I'm a bit burned out on what-ifs and conspiracy theories in general.
> Every argument is trivial to win if you first convert your opponent into Mecha-Hitler.
The problem with this metaphor is that Mecha-Hitler's early studies were in rhetoric and formal logic, making him a skilled opponent in arguments of all kinds. It was only later, after the infamous radiation-and-remechanization incident during which his personality module was irreparably damaged, that his resulting monstrous disfigurements led people to believe him a simpleton, easily defeated in arguments.
But I suspect that what people are concerned about regarding Google is whether it's best represented by the powerful and happy-go-lucky pre-incident Mecha-Hitler or by the powerful and scheming post-incident Mecha-Hitler.
Without examining the personality module, how can you tell for sure?
The "Star Trek computer" interface that everyone wants can't function if it doesn't have a sense of the world and the person it's talking to. I would love it if there was a distributed way for people to provide this information without having it live in a centralized datacenter somewhere.
That makes sense. Why can't we have both? If Google needs to store all my context data off in the cloud somewhere, why not absolutely guarantee that no one will get my data without my permission or a court order? And then earn my trust by defending my right to data privacy?
2. Google is lobbying to change federal law to require search warrants backed by probable cause and signed by a judge for stored cloud email (and Google Drive, Dropbox, Flickr, etc.) files, a privacy protection opposed by the Obama DOJ:
http://news.cnet.com/8301-31921_3-20123710-281/google-facebo...
3. Google began requiring police to obtain search warrants for email after the Warshak decision nationally, even though it was binding only in a few states. So did (from memory) Facebook and Microsoft.
> why not absolutely guarantee that no one will get my
> data without my permission or a court order?
Google is a US company, and current US law permits the government to demand private user data without having a court order.
Options to solve this are to either move Google HQ out of the US to some country without such laws, or lobby for changes to US laws. The first is not practical, and the second takes a long time.
> The "Star Trek computer" interface that everyone wants can't function if it doesn't have a sense of the world and the person it's talking to.
This does not make sense to me. And actually, it sounds a lot like employee Kool-Aid. Not everyone wants such an interface, and it's not a given that you need to know anything about the person it's talking to. In fact, I want the AI that I interact with to be perfectly anonymous and give the same answers to everybody.
In fact, I think I would trust search more if it were totally anonymous. What I searched for last week may or may not impact my search this week, so don't go assuming it will. I want the "pure" results, not filtered by what some algorithm "thinks."
Example: a friend runs a car repair shop and sends me email to my gmail account. When I search for car repair shop, should the AI prioritize my friend's shop in the results? What if a car repair shop sent out spam that didn't get filtered out of my inbox yet? What if I gave my email to the shop a year ago when I entered a sweepstakes?
The more I think about it, the only reason I can see to gather personal info is to benefit Google by selling more expensive advertising. If Google provided "pure" search results without using any personalized info, that would be the best in all situations.
Technically, when an AdSense client targets their ads only to North American users who expressed an interest in <certain medication>, Google didn’t “sell that user’s personal information,” but… whoever is on the other side of that ad, once clicked — and their partners — have just paid Google to find a user who likely has <medical problem>.
Isn’t that selling personal information? In effect, if not legally?
> Technically, when an AdSense client targets their ads only to North American users who expressed an interest in <certain medication>, Google didn’t “sell that user’s personal information,” but… whoever is on the other side of that ad, once clicked — and their partners — have just paid Google to find a user who likely has <medical problem>.
> Isn’t that selling personal information? In effect, if not legally?
No. It's selling the service of advertising to people for whom some fact holds.
The rest is assumption on the part of the party purchasing the advertising, based on the fact of that purchase and on behavior in response to the advertising. The response to the advertising is what provides personal information, just as it is in the case of non-targeted advertising in offline media, where the advertising includes a call to action directed (through the content of the ad rather than user-specific targeting) at people for whom certain facts hold.
Unless you are going to argue that everyone who sells advertising space to parties whose ads include a call to action inviting a response is "selling personal information", since they are selling advertisers a mechanism by which a message can be sent and the responses to it used to deduce personal information, then, no, what Google is doing isn't selling personal information.
"People for whom some fact holds" IS the very definition of personal information.
What's missing from the GP's example is showing the ad for medication to the same user on a later unrelated search. On any given search, I would expect an ad-sponsored search provider to give me ads related to the current search term. If the search provider stores my search history and serves me ads from previous search terms, that is personal information being sold, in my opinion.
> "People for whom some fact holds" IS the very definition of personal information.
Yes, but selling the service of showing something to those people isn't selling personal information, because the entity selling the service keeps the personal information and uses it on behalf of the entity purchasing the service.
Selling personal information is disclosure. Selling a service in which the seller uses personal information to provide the service is use. There is a meaningful difference between an entity using personal information and an entity disclosing personal information to third parties. There can be legitimate grounds to be concerned about either, but it's not helpful to conflate them.
No, you’re saying that filtering people == having a call to action.
That is a false equivalency.
I’m also not sure if you are attempting to attack me for only discussing Google. Of course I consider my criticism to be broad enough to apply to similar situations involving other ad networks (and to Facebook), but we are discussing specifically Google’s privacy policy.
> No, you’re saying that filtering people == having a call to action.
No, I'm saying that the personal information that the advertisers get is what they deduce from the response to the call to action. Filtering who sees the ad (among other functions, this isn't actually the most important) serves as a mechanism to attempt to shape the pool of people who respond to the call to action, but it's not really different (in terms of the advertiser being able to derive personal information) from the use of the actual content of the ad to influence which people who are exposed to it will respond.
> I’m also not sure if you are attempting to attack me for only discussing Google.
Pointing out errors in your argument isn't attacking you. And, to be sure, the errors in your argument have nothing to do with the fact that it references Google, but with the fact that it posits a false equivalence between selling personal information and selling targeted advertising placement, based on the fact that advertisers can derive personal information from the active responses of advertising viewers to the calls to action in targeted ads.
Everything Google does is about ad sales. Self driving cars? What are you going to do when your car does the driving for you? Consume media of course, with ads all over it! Etc, etc. Which is not a good or a bad thing in and of itself, it's what they do, like McDonald's sell hamburgers. The question you must ask yourself is do you want Big Mac and Fries for every meal?
Could you explain exactly how paid web services that do not embed advertising that Google provides (paid Google Apps, and backend services like App Engine, Compute Engine, Cloud SQL, Cloud Datastore, etc.) are "about ad sales" rather than about leveraging Google's core technical competencies in search, cloud storage, and web services to generate non-advertising revenue streams?
Sure. These things are about getting new and interesting companies to build on Google's infrastructure. Then when the time comes to acquire them and monetize them through advertising, all the integration is already done.
Interestingly I signed up for the trial of Google App Engine. The only thing they wanted to approve me was for me to enter my GMail password. In other words, they'd already scanned my email and decided if I was a good candidate. Worrying, no?
> Sure. These things are about getting new and interesting companies to build on Google's infrastructure. Then when the time comes to acquire them and monetize them through advertising, all the integration is already done.
That seems to be a strained explanation, particularly in the case of paid Google Apps.
> Interestingly I signed up for the trial of Google App Engine. The only thing they wanted to approve me was for me to enter my GMail password. In other words, they'd already scanned my email and decided if I was a good candidate. Worrying, no?
The trial period for App Engine was, I think, limited for the purpose of controlling rate of growth on the platform until it was ready to be thrown wide open. I don't think there was evaluation involved at all, which is why they didn't need anything other than your login information.
When Google has rolled out products with limited previews with evaluation criteria, they've had applicants provide explanations of their intended use and other information.
> Everything Google does is about ad sales. Self driving cars? What are you going to do when your car does the driving for you? Consume media of course, with ads all over it!
Of course, you make perfect sense. I guess we'll know why if Google gets into private spaceflight.
Every argument is trivial to win if you first convert your opponent into Mecha-Hitler.
Not everything is an argument that needs to be "won" or "lost" though. Sometimes an article is just an article, that's meant to spark a discussion around a topic. Even if the article seems to overtly take a position, that's kind of irrelevant if you think that it's the resulting discussion that matters.
Given some of the discussion we see in this very HN post, I'd say this article succeeded in that regard.
What sucks is that there was already a topic to discuss, and the author changed it. The original idea was simply a place with fewer restrictions, and the author instead decided to talk about a fantasy island anarchy with no rule of law and ridiculous scenarios.
So we can talk about that, if we want. But it seems a lot less interesting, and much easier to judge, than the actual original idea.
What if we really did have a place where we could use driverless cars on the road right now? Where maybe we didn't have to wait for 10-year trials to start using potentially groundbreaking drugs? Where tax money that would otherwise go to the war on drugs instead goes to health clinics, and drugs are fully legal. Where maybe the education system isn't set up to force kids to progress at the same rate, with kids of the same age, for the first full 18 years of their lives without a break, and instead lets them learn at their own pace.
I think there are so many kinds of interesting things to be discussed, which are realistic and apply to our world today. But publications like Wired are apparently not interested in discussing them; only discounting the idea wholesale so they don't have to put any real effort in.
> Even if the article seems to overtly take a position, that's kind of irrelevant if you think that it's the resulting discussion that matters.
I'm a big proponent of this notion. Often I'll quickly dismiss an idea I have because the discussion it created led to a much better one. However, my original idea was still crap. And saying in hindsight that I only offered it because of the discussion it would inevitably lead to is just... bad.
>The "Star Trek computer" interface that everyone wants can't function if it doesn't have a sense of the world and the person it's talking to.
Sure it can, at least the sense of the person. I don't need your life story to know how to handle simple instructions like "what year was movie X released", or "Gimme a list of all the elements on the periodic table that contain the letter U in their names".
Sure things like "What time is my flight" will need details, but "What time is flight QF204" doesn't.
The other thing is that the Star Trek computer doesn't have to be in the cloud, it can be on your device. Or it can be both, with the local one keeping your private stuff.
"what year was movie X released" is an easy query to answer. It doesn't require much context, it doesn't need any information about the user, and movie release dates are widely available in many public sources.
But there's no reason that search engines should be limited to trivial searches of public data. A modern search engine needs to be able to handle situations like:
* User has a reservation at a local restaurant, so they ask [where's lunch] and want to receive the time and location of their reservation. And they want driving directions, unless they don't own a car, in which case they want directions from the nearest public transport.
* User just got back from Hawaii and wants to show their coworkers a photo they took of a cool wave. So they search for [wave picture] and want the picture they took to be in their search results.
* User is writing a PhD thesis on tree rings and found a useful article last month, but now they want to cite it and can't remember what it was called. So they search for [article about tree rings] and want that particular article to be the first result.
In other words, the 2001 model of a search engine as "grep for the web" is far behind what people expect in 2013.
Even with those use cases, there should be technologies and controls that help preserve privacy: tools that filter, encrypt, store locally, anonymize, and transform all kinds of information about a person's searches and surfing.
A simple example would be: give me an easy option to disable recording only my health related searches.
And those are relatively simple technologies to implement.
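As a minimal sketch of what such a control might look like, here is a toy client-side filter that keeps flagged topics out of recorded search history. The keyword list and function names are invented for illustration, and a real system would need a much better classifier than keyword matching:

```python
# Hypothetical client-side filter: queries matching a sensitive topic
# are still sent for results, but never written to the stored history.

SENSITIVE_TERMS = {"diagnosis", "symptom", "medication", "disability"}

def is_sensitive(query: str) -> bool:
    """True if the query touches a topic the user opted out of recording."""
    words = set(query.lower().split())
    return bool(words & SENSITIVE_TERMS)

def record_history(query: str, history: list) -> None:
    """Append the query to local history only if it isn't sensitive."""
    if not is_sensitive(query):
        history.append(query)

history = []
for q in ["python tutorials", "social security disability rates"]:
    record_history(q, history)

print(history)  # only the non-sensitive query survives
```

The point is only that the gatekeeping can live on the client, before anything is transmitted or stored.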
> Sure it can, at least the sense of the person. I don't need your life story to know how to handle simple instructions like "what year was movie X released", or "Gimme a list of all the elements on the periodic table that contain the letter U in their names".
Then that's not the Star Trek computer is it? That is a personal organiser that responds to voice. Google are clearly aiming beyond this.
> Whether it's people writing polemic screeds about Glass despite knowing nothing about how the devices actually work (which they make up for by imagining a host of capabilities and features that it doesn't have)..
So, obviously, that wasn't this article, as it didn't mention Glass (and if it did, it was a fanciful story anyway). Do you have such an example? (I ask because I quite possibly haven't read enough articles, but the ones I've seen do not seem to make this mistake, even while I can easily imagine people making the opposite one of not taking into account that it is an open platform. As an example, the killer feature mentioned in the other big article this morning--taking a picture with a wink--is a real feature Google hid that didn't even need to be coded from scratch when Mike DiGiovanni released the hack to turn it on.)
Occasionally on HN, there are comments on Google-related topics that I suspect come from Google employees/affiliates, or waves of votes that may be coming from those with a financial attachment to Google. But often on social news/discussion sites, you don't know how closely the comments/votes are entangled with the issues being discussed.
The mention about not having a centralized datacenter is interesting, so here are my two cents. A decentralized datacenter could be modeled after the open-source model. The data would not belong to any one single entity, but could be accessed by interested parties via an API. (How to enforce which "third parties" get access to this data is another interesting question.) The algorithms for fast, efficient access and retrieval would then be improved upon by the open-source community so everyone could benefit.
I like how search is becoming more contextual, but I don't feel comfortable when this data belongs to just one entity. My high school history teacher often used to say "power corrupts, absolute power corrupts absolutely", which is why I believe user data should somehow be decentralized. How to go about this is the million dollar question.
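On the enforcement question, one possible shape for it: access mediated by user-granted, scoped tokens, so no third party reads a category of data the user didn't explicitly share. Everything below (names, token format, the in-memory store) is hypothetical, a toy sketch rather than a design:

```python
# Toy sketch of scoped access to decentralized user data.
# A user grants a third party a token limited to specific data
# categories; the store refuses requests outside the granted scope.

grants = {}  # token -> set of allowed data categories

def grant(token: str, categories: set) -> None:
    """Record which categories this token may read."""
    grants[token] = set(categories)

def fetch(token: str, category: str, store: dict):
    """Return the data only if the token's scope covers the category."""
    if category not in grants.get(token, set()):
        raise PermissionError(f"token not authorized for {category!r}")
    return store[category]

store = {"search_history": ["slide rules"], "location": "Austin"}
grant("t-123", {"search_history"})

print(fetch("t-123", "search_history", store))  # ['slide rules']
```

Asking `fetch("t-123", "location", store)` would raise a PermissionError, which is the whole point: the user, not the platform, decides the scope.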
I was unable to find a Watson based search engine using Google. Are you talking about the highly specialized application of asking Jeopardy questions? Recall that those searches are already constrained to being trivia, having exactly one solution and belonging to a certain Jeopardy category, context and difficulty.
Hmm... as a Libertarian / Voluntaryist / Anarcho-capitalist / whatever-you-want-to-call it, I am sympathetic to what Larry Page is saying. But there are definitely aspects of this story that are fairly unappealing (Larry Page naked, for starters).
I'm not sure what the answer is though: By now it should be clear that "government" as a tool to social-engineer a perfect world isn't working. Corporations are always the villains in these cyberpunk-ish stories, but you don't have corporations without government. But you can have technology without government or corporations, so what happens when the tech itself becomes so powerful that it changes the basic nature of society? Getting rid of government and/or corporations won't help, and if you try to counter technology with more technology you just have an arms race.
> By now it should be clear that "government" as a tool to social engineer a perfect world, isn't working.
Perfect is unattainable, but engineering a better world seems to be working over the long term (though "engineering" might be overstating the process) however chaotic and back-and-forth things are over any short time scale, and government seems to be a pretty darned important tool in that process.
>but you don't have corporations without government.
Who says you don't, or won't in the future.
Many corporations are now bigger and wield more power than quite a few (maybe most) governments. Corporations are often international and don't really answer to governments or countries any more than they have to.
We live in a world where China is having fights with Google over censorship. That's a government having issues with a corporation, not another country.
In the case of China things are a bit more interesting; with communism in the mix it seems like the .gov and companies are basically the same. There the government tells the companies what to do. Elsewhere it seems the opposite is on the way to becoming true.
Note that I'm using "corporation" here to refer specifically to the legal fiction, created by the State, that isolates the owners of a corporation from liability. The "organization" itself might exist organized through some other means, of course. But using the modern, legal definition of "corporation", they depend on the State for their existence.
Now, you might ask, "mindcrime, aren't you just pedantically arguing about semantics then?" To which I'd say, no. The difference being that the modern, legal definition of "corporation" is a big part of what allows companies to grow very large, thanks to that ability to pool capital and dodge liability as an individual. If we didn't have the corporation (as we know it today) as a legal fiction defined by the State, it's very possible that we wouldn't have the huge mega-companies that you speak of.
Of course that raises the question of whether or not we'd be better off without those, but that's a whole other topic.
Technology can never solve social problems. The answer is to simply work on fixing the problems we have in society the only way we actually can: through discourse, activism, democracy.
> Technology can never solve social problems. The answer is to simply work on fixing the problems we have in society the only way we actually can: through discourse, activism, democracy.
I'm not actually sure anything can truly solve social problems. I tend to believe that human nature is such that we will always have some level of strife and conflict in our world. I think the best we can do is to minimize it, and create structures that at least respect the sovereignty of the individual.
But even what I just said gets to the point of why "we" will never be uniformly happy. Different people have fundamentally different goals and principles, and there's no objective way to resolve those disputes. My foundational principle is freedom from the use of force or aggression, and the primacy and sovereignty of the individual. Other people have more of a utilitarian "the most happiness for the maximum number of people" as their principle. And unless the former inevitably leads to the latter by coincidence, there's really no way to reconcile those two positions in terms of "how do we solve social problems", since those two people don't even agree on what the problem is in the first place.
As for democracy: "Democracy is two wolves and a sheep voting on what's for dinner. Liberty is a well armed sheep contesting the vote".
Even in a representative republic like in the US, democracy is nothing special if you're the under-represented minority who has to suffer the "tyranny of the majority".
While it's very interesting to explore and write about the implications of Larry Page's idea of experimentation unfettered by government laws, this reads like a poorly written sci-fi story by someone interested in technology, but not knowledgeable enough to write something remotely credible. It lost me at the Google Being, 'stitched from photographs'. It focuses mostly on unfettered data collection, which is the main fear of journalists writing about Google.
The biggest blunder of this article is how it portrays Google. Does Google know a little more about me than I'd like? Sure. But so do Facebook, Microsoft, and OkCupid. The difference here is that if I want out of Google, I simply go to the dashboard and erase my history. I don't know how deep their erasure goes, but it is certainly more comforting than what any other company offers.
Facebook holds your info for a week and if you sign back in, the week restarts. During that week they goad you to come back. Not exactly cooperative.
And let's vilify Google first and foremost (/s). I run Ghostery. I am much more scared by the number of unique tracking companies. I don't know anything about them. How could I even begin to tell nearly 1500 known tracking companies to leave me alone? Simply telling them to leave me alone gives them data about me, which they certainly must keep if I am to be left alone.
> Facebook holds your info for a week and if you sign back in, the week restarts.
I think this might be intended as protection against attacks on your account. If someone gets ahold of your password and deletes your account, it's nice that you'll be able to restore it for some time after, isn't it? I agree that the wording of the e-mails they send when this happens is unfortunate.
Lately I have been feeling we as builders had this responsibility to build the Internet that the world needed, and we failed. We were distracted, we got rich, we ignored or misread the needs of our fellow humans.
The walled gardens that we now find so insidious and creepy are due to our own failure to empower the users. We made HTTP, SMTP, XMPP protocols. Large companies brought these to the masses, in ways the masses can understand and interact with in their limited capacity... for a price.
Can we reclaim humanity's birthright? Can we build a vision of the world we wish to live in, that is accessible to and understandable by many? Or is our entire collective fate to become a monetized click stream of suckers?
This article names Google, but to me that is beside the point. Google is a large system set in motion by shareholders and market forces that has equilibrium. It consumes click streams and subscriptions, and excretes money, like others of its kind. Can such an organism ever serve the best interests of humanity all the time?
If you find yourself hating Google, better to look within yourself. Do you have the courage to walk away from these kinds of services and build an alternative, however humble it might be, that empowers and liberates your fellow humans?
I am still working on this in myself. My email is still gmail, I would miss some personalities in my G+ circles, but I am uncomfortable, and I find current trends unsettling.
"in practice social convention among the Minds prohibits them from watching, or interfering in, citizens' lives unless requested, or unless they perceive severe risk"
One of the cute things about the Culture is how damned nice the Minds are - even when they are effectively gods to their meat friends/pets/therapists.
> One of the cute things about the Culture is how damned nice the Minds are
Were the Minds (originally) designed to be "nice"? Or have they evolved to be nice because that is an effective way to influence humans? (I've only read one Culture book.)
I always get a chuckle out of the very idea of "post scarcity". Like there won't be some guy who wants to shovel an entire solar system (or two) into his replicator to make a really cool Dyson yacht, or yacht fleet. Or just flat out enjoys going "Dark Phoenix" and destroying a few stars like shooting cans for target practice.
Humans will never have all they want, even though we have enough for subsistence. Only a few would be happy to spend a few years rummaging around in the library between meals.
It is heavily implied in the novels that Culture citizens have been genetically engineered somewhat, to smooth out their personalities.
Except for extreme shyness, they would all seem like charismatic, well-balanced individuals (source: Wikipedia, I think). Even the language they speak has been engineered.
So in the Culture's world, nobody would be deviant enough to want to do those things. It is a bit like Star Trek: there must have been a few dark ages (they must have had eugenics gone wrong at some point) before reaching the fully evolved post-scarcity society that is the Culture.
Interesting, I'll have to read some of it some time. I liked Vernor Vinge's A Fire Upon the Deep and Charles Stross's Singularity Sky. In those, there is a spectrum of societal levels, from the Beyond/Eschaton down to the "Zombies"/Unthinking Depths, allowing comparison between levels.
I am scared, intrigued, worried, and overall frightened.
Not frightened by the article, but by the fact that I can't decide if I am for or against this.
I wonder at what phase 'Don't be evil' would break down, or alternatively, when their definition of 'evil' would be changed to exclude what they were doing.
> Not frightened by the article, but by the fact that I can't decide if I am for or against this.
That's sort of the same reaction I am having. I'm all for getting rid of governments and most of what we call "law" today, but this particular vision of a possible outcome seems a bit disturbing in some ways. But, then again, it's just speculative fiction, not reality...
But if Google Island is somewhere with no laws, then anybody with enough muscle can take it away from poor Larry. And there's nothing Larry can do about it, because there are no laws. Takers keepers.
Quite. In the Star Trek example, the Captain can say "Computer, locate X", and sometimes he says "Computer, deactivate X's devices" and the computer just does it. Because he's an idealized fictional hero, it's very unlikely that anyone with that kind of power in the real world would be as wise and benevolent as Picard - and even Picard makes mistakes.
1. Other characters can and do ask for locations. This is apparently considered non-private information.
2. As open and utopian as the Federation may be portrayed to be, Picard is still the commander of a military vessel, so I wouldn't be surprised if in other contexts, computers give out less personal information and grant less power to administrators.
Ultimately, I think that this article fallaciously anthropomorphizes Google, and that is the reason why the situation it posits seems so scary.
Google is a machine, designed by people. It is true that they have lots and lots of data about individuals, but it is being handled by vast amounts of software and hardware alongside so much other data about so many things about people that there is somewhat of an anonymizing factor. If Google were an individual, what they do would certainly be creepy, but they are not. To put paranoia to rest, it might be in their favor to enact transparent safeguards of some sort that assure consumers that their data is in general not being accessed by Google employees or nefarious third parties (the most notable nefarious third party being, of course, the government).
It seems like I may have missed the point, or that this struck a sensitive nerve for a few people, but I just found it mildly funny. I'm not sure the piece was supposed to be anything more than a bit of satire and a friendly jab at Google.
We haven't passed through the digital renaissance yet. It shouldn't be too long (10 years? right now it's just a party), but until then the lines will be gray and the rules will be muddled. You need visionaries to get us through it, though. There aren't many.
“It also has thousands of micro sensors which are now swarming through your blood stream.”
This metaphor of the electrolyte solution is nice. Obviously an allusion to a seemingly innocuous service which ends up tracking every aspect of one's life.
I do believe we need a place like that, except I don't think it's feasible to do it on Earth - there is simply too much baggage here.
Luckily we could be less than a generation away from colonizing Mars - and what better place for the adventurous experimenters to go? Imagine an entire planet where you can do anything you want, but also an incredibly harsh one where the need for survival will drive experimentation and adaptation.