Totally OK for me Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.
So no, it is not just my problem or your problem, it's everyone's.
I think this example serves both our points. To your point, it's totally leaked this other person's info into my "google world" because I'm on gmail. On the other hand, that person is leaking his information directly to me just because of typos when he fills out online forms. Perfect privacy requires a lot of vigilance in a digital world, with or without google/gmail/hotmail/yahoo/etc.
Fun story there. Because Google's internal privacy safeguards are so strict, the people working on features like that can't look for example emails to train their ML models with.
They can only look at emails that were explicitly sent to them in order to improve the feature (and almost no one forwards along positive or negative examples). What they can do across the email corpus is run jobs that return aggregate stats, where each stat must be coarse enough that it is infeasible to trace back to original users (often 100k+ users per data point).
So, AFAIK, training & testing models under these safeguards is more or less done blind. Build a model with the few examples you do have, and then run it against the corpus. If you see numbers change, you have no idea if that's good or bad, since you can't actually inspect the run.
(at least, this is the way it was a few years ago)
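The aggregate-stats safeguard described above can be sketched roughly like this — a toy illustration under my own assumptions, not Google's actual pipeline; the threshold constant and function name are made up:

```python
# Toy sketch of an aggregate-stats job that only releases data points
# backed by enough distinct users to make re-identification infeasible.
from collections import defaultdict

K_ANONYMITY_THRESHOLD = 100_000  # minimum distinct users per released stat

def aggregate_feature_counts(events, threshold=K_ANONYMITY_THRESHOLD):
    """events: iterable of (user_id, feature_label) pairs.

    Returns {label: distinct_user_count}, suppressing any bucket
    backed by fewer than `threshold` distinct users.
    """
    users_per_label = defaultdict(set)
    for user_id, label in events:
        users_per_label[label].add(user_id)
    return {
        label: len(users)
        for label, users in users_per_label.items()
        if len(users) >= threshold
    }
```

Under this kind of constraint you can see why model debugging goes "blind": any bucket specific enough to explain a single misclassification is exactly the kind of bucket the job refuses to emit.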
One time, when I marked a bunch of incorrectly-classified emails as "not spam", the Gmail web UI asked me if I wanted to send these emails to the Gmail spam team. Was this what you meant, or something else?
(That's different than just marking an email as spam, though)
It's not clear whether these AI models have much incentive to correct anything. If 99% of people with attributes x,y and z are bad candidates for a job, will you even get an interview? Is there any attempt to account for the fact that attribute x is something you were born with? Or that you are actually in the 1% and really are a good candidate? Or that you don't actually have attribute y, and it was just inferred from something else or some kind of mixup like an email address typo?
Machine learning is also very opaque.
If you verify the race field, then now you're in the business of enforcing racial definitions.
Why would that matter to the company? People are born with stupidity.
I receive a ridiculous amount of other people's email - from serious things like email account resets to banking info. When the Ashley Madison hack happened, my email address was there multiple times! Imagine if my then-partner had bothered to look; what a mess that would be.
I've called American Express a couple of times to report errant emails with account info ending up in my email. THEY didn't care that one of their customers had an issue, they thought it was a great time to have me fork over MY info "to check".
From time to time I have had messages relating to a government planning committee as one of the members got the domain of someone's departmental email account wrong and the messages come instead to the domain I administer.
In another instance I was getting emails about an account someone created with American Express using my email address. I sent multiple emails to their customer support to get them to stop sending financial information to the wrong person with no results. In this I also found it difficult to even figure out what agency to report them to for failing to take action when notified. Eventually I took the time to call them. It took around 20 minutes (not counting time on hold) of talking to multiple people to get them to remove the email address from the account. This included them asking me several times for my social security number - which I flatly refused to provide since I had zero business relationship with them. This refusal actually seemed to confuse them.
Many of the people I know who have very common gmail addresses have set up canned replies to let senders know they've reached the wrong person.
I often get misdirected emails because of a very common name in my country.
When the email contains a thread history, I sometimes notice that the address was simply corrupted by a recipient, such as numbers getting dropped from the genuine address in their reply.
I guess some systems can't correctly handle numbers or other characters in email addresses.
The former doesn't fix the issue at all, and the latter is unworkable because the guy reliably giving out the wrong email address will absolutely not remember his public key.
Have the browser suggest the key IDs from `gpg --list-secret-keys`.
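As a rough sketch of what that suggestion could look like (purely illustrative; the function name is mine, and the only real interface assumed is gpg's documented machine-readable colon listing, where `sec` records carry the key ID in the fifth field):

```python
# Sketch: collect secret key IDs the way a browser integration might,
# by shelling out to gpg's machine-readable colon-format listing.
import subprocess

def list_secret_key_ids(listing=None):
    """Return key IDs from `gpg --list-secret-keys --with-colons` output.

    If `listing` is None, invoke gpg; otherwise parse the given text
    (handy for testing without a real keyring).
    """
    if listing is None:
        listing = subprocess.run(
            ["gpg", "--list-secret-keys", "--with-colons"],
            capture_output=True, text=True, check=True,
        ).stdout
    key_ids = []
    for line in listing.splitlines():
        fields = line.split(":")
        # "sec" records describe secret keys; field 5 is the key ID.
        if fields[0] == "sec" and len(fields) > 4:
            key_ids.append(fields[4])
    return key_ids
```

The colon format is stable and meant for exactly this kind of scripting, which is why suggesting key IDs from it is at least mechanically easy — the hard part remains getting users to have keys at all.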
I was somewhat radical about privacy in the late 90s (only person I knew that read every EULA) and am still a supporter of the EFF but I don't really understand the issue here.
When you visit your friend's house, the data is not under their control, it is under Google's control.
Therefore, Google has a WAY bigger responsibility than most people realize, once they decided to collect this data.
Ultimately, whenever you visit any place, business or home, the data is under the control of the owner and anyone they delegate those rights to.
Some interesting scenarios to consider: If I visit a friend's house, and I start getting targeted ads for a service I didn't subscribe to without prior consent or my knowledge, can I sue her/him? What about a scenario where some service collects my data, said service is hacked, and someone commits identity theft on me, who is liable for damages? Do I need my buddy to sign a waiver when he visits to play some Xbox for a bit?
But it was your friend's decision to delegate that control. So, Google having that control is still a consequence of your friend's control.
With LTE, I don't need wifi.
I would imagine if you walk into a home with google's AI doodads all over the place, you're gonna be picked up.
Rumor is that even when your phone has been turned off, it can be turned back on remotely.
I have a couple of cameras, and at least one visitor has been uncomfortable with their presence, despite the fact they're not sending the data to a third party.
A business is typically located in a public space, where there is no expectation of privacy. In contrast, a home is by definition a private space where there is a strong expectation of privacy.
Aren't there some wiretapping laws around this sort of thing?
(IANAL, YMMV, etc.)
Tl;dr there's privacy in decentralization.
I'm very optimistic about Google making my life better in lots of little ways. I have virtually no concerns about my openness to Google causing me trouble.
I think that's a common goal and widespread belief and find its implication of having given up on "bigger ways" troubling.
Our industry was going to change the world. Rewrite the cultural rules of socializing. Rebuild the economy in a new form. Encourage major cultural and political shifts by empowering the little guy. Bring knowledge to the ignorant, companionship to the lonely, power to the powerless, jobs to the jobless.
What we have now is, well, maybe, with some luck, timidly, sometimes google can analyze the last time I went to a gas station, the number of miles I've driven at various speeds since then, the time of my next appointment, then it can adjust my google now card to encourage me to leave home earlier so I'll have time to fill up the tank and it'll find the cheapest advertised price along the way. That's nice ... but where's my revolution?
Even worse in context of the article, OK well say I have to give up all privacy and go hard core 1984 telescreen big brother is always watching, to save endangered species and house the homeless and feed the starving and bring peace to the victimized. Well OK I'll think about it. Oh wait, we don't get any of that, all we're offered is slightly better appointment scheduling. Eh, no thanks.
Nothing is ever really new, and this era is likely similar to pre-quantum physics around 1890, when it seemed nothing was left to discover other than adding a few more decimal places here and there. The memes and speech patterns are all the same (although I'm not quite that old)
For centuries, people have navigated using stars and maps, yet now you have turn-by-turn navigation in your pocket with real-time traffic and crowd-sourced traffic incidents. You can reach most of the world population instantly by dialing a few numbers or even sending them a text message or mms, no matter where they are.
In the context of the article, a couple of years ago, Google Now told me I had to leave for a meetup in my calendar that I completely forgot about, gave me transit directions for a part of a city I'd never been to, and got me in exactly at the meetup start time.
While that all seems like a modern convenience nowadays, even thirty years ago if you'd told people that you couldn't get lost anywhere in the world and could get ahold of just about anyone at any time instantly, they'd think you were talking about science fiction, not today's reality.
Example: "Intuit’s TurboTax stores highly detailed financial data for millions of users who import their W2s, their banking data, info about their mortgages and more. Right now, all of this data is locked into TurboTax, but the company is now thinking about how it can do more with it by giving its users the option to share this data with reputable third parties." ... https://techcrunch.com/2016/09/22/intuit-wants-to-turn-turbo...
True, I don't mind Google today, but what about tomorrow? What about N years from now when they have failed to hit their financial targets 2 years running. Will that company have the same set of standards as the one today?
The reality is that once you've given up your privacy there is no getting it back.
Google might handle the data it collects better than other companies, but it would do much better still if it didn't collect any personal data at all.
Hands off my data, Google!
Going further, given that an encrypted email to Gmail will simply be decrypted and then available to Gmail, include in the protocol authorized (via both whitelist and blacklist means) agents of the recipient. So, if you are hosting your own email but the intended recipient is expected not to be hosting their own email, the sender can blacklist "agents" such as Gmail and Yahoo! Mail, or blacklist all except those chosen to be whitelisted, such as Proton Mail.
For almost all users getting a message from someone you personally know containing a message that they would legitimately send would be sufficient to trust the referenced key. If you are a little paranoid you could ask them over the phone or in person to send a specific message that you would then trust, or just ask for the URL itself. I believe that in-person key exchanges utilizing large trust networks are overkill for the vast majority of people sending everyday communications.
Make it so simple as 1) one time key creation and copying to each device 2) one time trust per other person that is completely streamlined by the software. If the gmail team implemented this for example other email providers would soon do the same and it would spread like wildfire. Very quickly a huge amount of email would be encrypted. Of course this would require an open design that anyone can implement.
Maybe there's a fundamental flaw with this idea that I'm not seeing. If so, please say so because otherwise I'm going to just be disappointed in five years when email is still 99% unencrypted.
Granted, there may be a place for regulations to help us restrict what companies are able to do (perhaps making it easier for you to identify a region that is being recorded, right?), but at some point society can't help the fact that you'd prefer if machines were unaware of your existence. That's just something you have to solve for yourself.
GPG doesn't hide that one is emailing, or when, or with whom; it only hides content.
Personally, I would have downvoted you because your original point seemed to condescendingly assume something about a large proportion of the community and their intentions - then you assume again that it's "The Google People" (and only them)?
Unless you can show that your downvotes came because of a specific reason, and from specific people...
I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.
Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.
You could say that Plato wanted us to make easy things simple (link for distinction: https://www.infoq.com/presentations/Simple-Made-Easy).
I believe this to be a move in the opposite direction. We should have a care.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain things more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary you had who gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available to a wider population rather than just rich people.
In your metaphor, you are implicitly paying the secretary, so the secretary is incentivized to maintain your interests.
How much have you paid Google for its free services?
Your metaphor is inapplicable. You don't have a secretary telling you these things; you have a salesman trying to sell you things, and the salesman is getting smarter every day while you aren't. Not the same thing at all.
It seems to be a theme here today... a company can't serve both advertisers and customers. In the end, one of them has to win, and given the monetary flows, it's not even remotely a contest which it will be. https://news.ycombinator.com/item?id=12644507
It's funny how bad of a stigma ads have gotten, but at the core, if you think of it, it's not necessarily a bad thing. Think of a friend recommending you a restaurant, a new game to play, a movie to go watch. In that case you'll be super interested, but now if this AI who probably knows your taste better than your friend suggests you something, you are instantly turned off and annoyed.
I think the root cause of this is that there are so many mediocre ads out there that they ruin it for all. Your mind just blindly blocks all ads now.
When you aren't paying anything for something of value, YOU are the product.
No, that would be slavery, which is illegal.
Google is selling advertising space on various channels that you provide in exchange for Google services to advertisers.
> When you aren't paying anything for something of value, YOU are the product.
No, when you aren't paying money for something of value, you are probably paying something else of value for it; often, something that the person with which you are trading is then selling for money, making you a supplier of an input to the good or service they are selling for money.
And it's not a simple product like glasses where you pay with money and then they improve your vision. It's a product which goes far beyond your understanding and for which you don't pay money.
Google isn't interested in making your life better. What they are interested in is getting you to believe that they want to make your life better and to then recommend going to that bar, because the bar owner has given Google money to advertise for the bar.
Yes, you might actually like that bar, but Google isn't going to recommend going there in intervals which are beneficial to you. They'd rather have you go there a few too many times. Because that's what makes them money. It's not improving your life, which makes them money. Their AI will always work against you, whenever it can without you noticing.
First, there is a way to tell it to not do that. With Google Now, you simply tap the menu and say "No more notification like this". With the assistant, you will probably be able to ask directly.
Second, let's be honest, humans fail pretty often too, so that's just a weak argument.
Lastly, I think it's unfair to dismiss a new technology just because it could maybe fail, without having even tried it.
The awful success cases are far more interesting than the awful failure cases.
Your noose example is pretty contrived, however.
How about sleeping pills? Opiates? Local extortionist cult?
There's not only the "morally reprehensible" metric ("Don't be evil"); there's also the "absolute PR catastrophe" metric that printing such an ad for a rope would mean.
I think the real issue is the casual deception which you just fell for: It isn't "your" electronic secretary, and the thing it just did might actually be a "good job" from the perspective of those who control it.
I'm not saying we shouldn't use AIs. We should, however, think about how we use them.
To build on your example, what are the dangers of having a personal secretary on the payroll of anyone but you?
What I am expecting from this is a super devious filter bubble - because that's how you make money. Google's old slogan "Don't be evil" is long gone. "For a greater good" might be more on point.
What does the Google Assistant help me do more efficiently? In all honesty, I can't figure it out. I don't need or want a secretary, and I can do written planning for myself.
I need less paperwork and fewer web forms and identities, but the Google Assistant only promises more of that crap.
I'm never buying one. It's a sacrifice of privacy for zero to marginal gains in convenience.
If you do not care what I say, why even reply?
Google is designed to sell ads, and subtly influence your behavior towards the most profitable results. Please do not confuse a fact-based tool with an ad generator.
This is the very common theory that a company will (shadily) try to offer you a worse product to make more profit. It fails to account for competing companies that would jump on that opportunity to offer their better product, and get the market share.
But what's funny here is that the suggested alternative is to not get any product at all. As in: "Poor OP, didn't realize that it wasn't really him who was enjoying that burger he was enjoying."
Maybe Play Music is the best thing. Maybe it is not. Neither of us can answer that. But if a definitively better product comes along it will have no way to make a foothold because Google is still pushing everyone to their own product, from their other product (Search), and even when people try your product, if they use Google's other products, they'll tend to stick to other Google products.
Honestly, the worst problem with companies like Google is vertical integration. The ability to provide a wide product line where you integrate best with other products your own company makes has an incredibly chilling effect on competition, and therefore, innovation.
And if your theory were right that companies prioritizing results for profit lose to companies that always prefer the best products, why is DuckDuckGo still in what... fourth or fifth place?
You'd need to argue that DuckDuckGo's search results are better; I don't think they are. That's what made Google first among many competing search engines, before there was even a clear business model in it. Today the incentive to outperform is bigger.
If a product Y definitely better than X comes along, and only Google Search fails to rank it higher, people will start thinking "I'd rather search on Bing too, as it finds better products in this category".
"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"
Other people's happiness metrics work differently, and all popular web services are popular precisely because they satisfy the unconscious desires of the majority of people.
I wonder what is then causing inefficiency when we read a restaurant's menu and can't decide what we will have.
I'm with those who think we make choices and decisions far less often than we think, but that we still do make them.
this is like absolutely full on plugged into the matrix world. and we're living right in it.
these guys are like the ones who've taken the red pill, and gone on to find out how deep the rabbit hole goes.
(edit: i'm even more intrigued by the possibility that the future is not just the matrix singularity, but an oligopoly of several large singularities, all fighting to plug us in)
We already see what happens when people's decision-making is coloured by mass media advertising: an obese population trapped by debts taken out to fuel consumption.
It is in other people's best interests for you to work like a slave, be addicted to unhealthy habits, and run up vast debts in order to buy their products.
We keep allowing those with power to distort the markets gaining themselves more money and more power at the expense of the little guy. I don't see any reason why AI in the service of the powerful will do anything but accelerate that.
But I think it would be a problem if every Monday and Thursday night Google Now started providing information about AA meetings in the area, instead of bar information. It's up to the user to make the choice, Google Now just detects trends and then displays information based on those trends.
I go to the gym every Monday, Tuesday, Thursday, and Friday morning. And each of those mornings Google Now tells me how many minutes it will take me to get to the gym from my current location. Should Google Now start giving me directions to the nearest breakfast place instead? No, not unless that starts becoming my pattern.
Google may not have a responsibility to be a good friend, but personally I'd prefer not to have a bad friend always following me around, thus I'm a little less excited about this feature.
> Should Google Now start giving me directions to the nearest breakfast place instead?
That may depend on how much Waffle House pays for advertising, and that is the problem.
If you replace "AI" with "marketing" would you still make that statement?
Should I just be an actor playing through a set itinerary of vacations and movies and burgers and relationships? Maybe you think it's that way already, except less perfect than it might be, but that's a pretty frightening notion to me.
My culture, education and skills limit what work I can do.
Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?
I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.
I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.
Google will let me know that the things I prefer back home? There are equivalents nearby.
Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.
I'm curious why you think this is bad. I don't necessarily think it is good, but I also don't necessarily think it is actually happening.
Which news-sources are you going to learn about?
Which news-sources are you for some reason very unlikely to encounter?
Now apply a real-time AI filter-bubble, able to also include government policies in its decision-making, onto those questions.
I believe the most important thing in life is thinking. I believe a key element of thinking is looking at "easy stuff", the stuff we just live with every day and don't think about, and for some reason be forced to think about it and make it simple.
Take the Snowden leak. We lived a nice life being the good guys, and that kind of surveillance was publicly thought of as conspiracy theory. Suddenly we were forced to look at what was going on. How much of it are we okay with? On the grounds of what principles and tradeoffs? This is all very unpleasant, but we're all better off for facing those questions and working towards new principles. We take a chaotic gruel of cons and pros and try to hammer them into a few simple principles our societies may function by. For instance, the separation of power into three branches has served us well.
I fear that we end up in a world where raising such unpleasant questions becomes almost impossible - and we'll never even notice. Not because of AI (I believe AI to be inevitable and fascinating) but because of the way AI is used.
Living a life assisted by an AI, made and paid for by someone else, seems like the epitome of naivete to me.
Maybe the illusion is that it was a choice . . .
As an example, the old weather widget from Google's "News & Weather" was replaced by Google Now. That provided a similar experience for some time but then stopped working with another update that required search history to be enabled and/or some other setting in privacy control.
Also, a system update integrated Google into the launcher (Moto G line of phones). I have since replaced the launcher, browser, and search app (all with open source replacements), and the weather app (with a paid service). Convenience has suffered...
At that point I turned it on but deleted* the search history each day, until such point as they changed the delete controls to be more of a nuisance.
I now use DuckDuckGo and an iPhone instead.
* Or so their site claims.
Your location history tracking is paused.
It can probably save them from a legal mess if they 'resume' it in future updates.
The more specific work I need to go through to set up my privacy, the less inclined I am to do it. If I didn't think I was able to be manipulated psychologically in this way, well, I wouldn't worry about advertising at all! If I were to ever do something politically dissident/personally embarrassing on the internet (not that I ever have of course) I'd go to the trouble of ensuring encryption and being hard to track, but I think it's important that I'm able to say to Google "Hey, I'm cool with you telling me when I should check off work to hit the bar, but it's super weird that you know what I should get my Mom for Mother's Day."
Of course, the simplest way of making a system that's both fine-grained and intuitive might involve... more AI, so I'm not sure how to crack that issue.
This is by design, so that the majority of users are confused and leave the defaults as is, enabling Google to do whatever they like.
I never thought of that before. But, what a subtle way for Google to dissuade people from using a tool that could impact their revenue.
And it annoys me that on maps, when you turn off all the spying capabilities there's no fallback to local history. You either share it with us or you get none.
> ... it was creepy as all hell that the people [...] knew enough about me
Speaking of which, anyone following today's stories about the Yahoo email scandal, the pressure on the folks who own Signal, and recent litigation from Microsoft against government gag orders?
But, let's go back to talking about how none of us have free will and how clever Now is.
GPS navigation devices with much less storage than a phone have been more than capable of what Google Maps offers for a long time. There's essentially no reason for it to do anything with the Internet except getting map updates.
I use Google Maps every single day to get to and from work, simply because it knows how to avoid traffic. 10% of the time it saves me half an hour on my commute.
1- There is no way to set your privacy level.
2- Things that Google/Siri/Alexa know about you are not limited to the name of the bar you go to frequently. They know much more about you. And you don't know what they know. The sky is the limit here.
3- Things that they know are not limited to you personally. They know about you, your family, your friends, and all their interactions. They know very much about the whole society.
2 - My point is that I personally am OK with Google's AI knowing more about me. I respect that others aren't. I'm not naive in my acceptance.
3 - I don't really have a response here.
The privacy control where I disable location tracking and half a year later when I look in Google Dashboard I see months of travel history?
I respect that others aren't. I'm not naive in my acceptance.
So what do you do in a situation where your use of Google's data collection also affects people who do mind it? I would not be comfortable visiting a friend with an always-listening device like Alexa or Google's equivalent.
I nuked my paid Google Apps account a couple of months ago. I had enough of their total disrespect for privacy. E.g. conversations that I had in Google Mail (which is protected by the Google Apps agreement) were used for suggestions, etc. in Google+ (which is not covered by the Google Apps agreement and uses data for targeted advertising).
I'd turn it off if/when they ask. I don't think that's unreasonable in the least. I'm not responsible for enforcing everyone's privacy preference, but I also respect them and will accommodate guests in my house.
It could have been a visit to a website.
You can monitor the traffic of the Alexa and see that it is only sending data when you ask it to do something, and furthermore, Amazon gives you a log of everything you've said to it and it recorded.
Others may be reviews left, reviews voted on, prime video consumed, audio/video/book samples consumed, kindle activity, how long you spend on a product page, how you scroll on it, the breadcrumb of how you got there, and surely dozens more.
My impression also is that most early adopters of this kind of technology are younger people. (again: mostly)
So this brings up an interesting question about the future. As the young early adopters age, what will happen?
a) their privacy thresholds will also increase and they will have a "oh holy crap" moment in the future, where as a middle-aged or older person, who has lived a now much richer and problem-laden life, they will realize that google (and/or other co's) have what they consider now, as too much personal information about them,
b) they will keep their young-ish privacy thresholds as older people, and in general, across society, people will have lower thresholds than exist nowadays. In other words the world will change.
My money is on a)
My impression (in the main) is that younger and older people have different views on privacy. Older folk might be creeped out by Google knowing their schedule, but okay with the NSA or FBI or whomever reading their emails "because terrorists", whereas younger folk are more likely to balk at the latter, but are very much okay with the former.
Do you think my opinion is accurate? I'm curious because to some extent I completely agree with you.
Of course it also once told me how long it would take to get to an ex's house from my current girlfriend's place.
How about an 'OK, funny once' command?
(eugenics through large scale suggestions anyone? ;>)
There are even apps for that... It wouldn't surprise me if someday this tracking was learnt by Google.
This isn't funny at all. "They" (or Skynet) could totally do this and who would be the wiser?
> How about an 'OK, funny once' command?
My whole point in posting originally was to bring up the fact that the interesting discussion of different thresholds and having a spectrum of options is being lost in binary "privacy vs complete corporate control". It's not binary, and it does everyone a disservice to act like it can be for everyone.
Your phone's microphone can be turned on remotely and listen to what you say (I know several startups that do this). Security/traffic/drone/satellite cameras are everywhere. You are being watched literally all the time, but to think the watchers actually care about your personal life indicates a pretty inflated view of self-importance. We're starting to complain about it about 20 years too late.
If I'm on gmail, then you need to talk to gmail to talk to me. It's a tradeoff you'll need to make. If you want, you can GPG encrypt the email, but there is nothing stopping me (legally, morally, or otherwise) from just decrypting it and replying with the contents, or saving the decrypted message in my Google Drive.
So, by the same token, does Google have the rights to profile people even if they haven't consented to that? One answer, as with copyright, is to see what the law says; and it's very possible that the law says no, especially in the EU. (I've not researched it; others will doubtless know much more than I do.)
Or Google could just seek to do the right thing and not profile people unless they've opted in. But my definition of the right thing may well be different to theirs.
1. finding someone who is not using any Google service at all, and
2. finding their data in a Google database.
I hope that there will be a day when Google and Facebook will combine forces and work on a sequel to "The Lives of Others".
Yeah, that's a great movie.
Pretty much your only privacy is in your head at that point.
I'm not sure that is a "threshold" of privacy but rather a "I am okay with 24/7 surveillance of all of my activity."
Is this story unrealistic, or has it already occurred?
One Wednesday afternoon, at work, I got a notification saying "Travel time to the Lion & Crown". The first thing that ran through my head was "oh my god, I'm living in the future".
The problem is that I want to use Google Maps so what choice do I really have?
Sure I use a dedicated gmail for my phone but that really does not help much.
"Hey, it's a been a while. Why don't you go to ... today? Traffic conditions are favorable too."
Given that, and the existence of pervasive surveillance and data mining, the above is inevitable.
"Oh you wanted to go to the bank today? Fuck you, we're going over by the mall" doors lock shut
"I was feeling bored, so I drove down to the mall. Look what all did I get!"
And then prepares to rush the owner to the hospital.
I think the fact that it's different for everyone is completely and utterly obvious. It's clear the author doesn't think it's OK, but that's his/her opinion.
Extremely annoying. This sort of thing should not be acceptable: an honest mistake results in every place I've been being logged, in such a way that anyone with access to my Google account, access to Google's servers, or a subpoena can have my full location history in a matter of seconds.
This needs to be a big red option every time you add an account "we're gonna log everywhere you go and hand it over to whoever we feel like, you cool with that?". It'd be different if the log and analysis were done only on my device, but doing this on Google's servers is completely unacceptable by anyone with even the weakest standards of privacy.
1. In your Settings app, scroll down and tap Google (or open the separate Google Settings app).
2. Tap Location, then Location History.
3. At the bottom of the screen, tap Delete Location History.
Also https://maps.google.com/locationhistory allows for delete all history.
Schrems described the file obtained through a legal request as a 500MB PDF including data the user thought they had deleted. The one sent through a regular Facebook request was a 150MB HTML file and included video (the PDF did not) but did not have the deleted data.
EDIT: Okay, found it. Follow your instructions then open that "Manage Activities" -> Menu -> Settings -> Scroll down -> Delete All Location History.
You _must_ do this for every account if you have multiple.
At least for turning it off I expected it to be phone wide, not account specific. Deletion yeah, I suppose you're right.
Do you think you're setting a behavior of the phone, or a behavior of your account?
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being, if the assistant lived entirely on your own computer, it would be entirely different. Most people are not concerned about what their "computer" knows about them; they're concerned about what companies and their employees do.
The trope-namer (Star Trek AI) was a ship-wide AI - considering the ship sizes, it is definitely closer to the "cloud" model and not limited to private instances on officers' bunks/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this question: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
There are some canon exceptions to this (such as in Nemesis where the subspace communication interruption affected the star charts), but even then the functionality of the ship was not impacted.
The Star Trek ships are very analogous to our own ocean-bound ships, where satellite communication is possible almost anywhere, but they don't rely on it.
So, yes, the AI is completely confined to the ship.
Was there ever an indication that their AI-level data was transferred along with their personnel file? For example did the replicator know what food to offer them on day one?
If so, then it's seems reasonable to assume that the Enterprise's AI data was backed up at Federation HQ during routine maintenance, and that the "IT department" at Federation knew exactly what you liked to do on the Holodeck.
Through specific indication from the user. Recall the constant utterance of "tea, earl grey, hot"?
Ultimately, I imagine the user's information (documents, etc) was passed directly between ships, or through (as you say) Federation HQ.
> Enterprise's AI data was backed up
Ultimately, I think this is where AI will differ from ML. An AI won't have data that isn't a part of the AI - i.e. you couldn't separate out information specific to Picard from the rest of the AI code. An AI might be able to "scribble" down some notes about interacting with Picard and pass them off to another ship's AI, but the second AI would never treat Picard quite the same way as the first, even with those notes.
This stems from my belief that how ML interprets data is different from how an AI would. If you were to copy all of the data used to build an ML model and apply it again, you'd end up with the same ML model. An AI, on the other hand, if built twice from the same data, would end up creating two separate AIs.
For example, the Star Trek universe didn't seem like a universe where you had to shop around for a trustworthy mechanic who wouldn't overcharge or over-diagnose (e.g. headlight fluid).
Maybe the implicit trust of other people was integral to the AI being successful in that universe.
I also think the comparison isn't perfect, because Federation vessels (in my mind) are similar to today's Navy vessels. All onboard systems are connected to other onboard systems, but opsec demands the ship's systems not be influenced by external actors.
* Don't start that argument. Seriously.
The clearest example of extensive off-starship monitoring (within the Federation) that I can think of in TNG is a civilian (though, to be fair, a civilian in a role analogous to a "defense contractor"), Dr. Leah Brahms.
> I'd argue enough episodes are strong on fundamental individual rights, that it's hard to imagine Federation life for civilians being a surveillance state.
Actually, I'd say that its quite plausible that the Federation is a "benevolent surveillance state", that is, one with pervasive monitoring but a very low incidence of "serious" abuse (that is, the kind that substantially limits practical liberty -- casual intrusions on privacy may be more common.)
While the Federation seems keen on "fundamental individual rights", it doesn't seem to exactly mirror, say, some modern views on what those rights are -- and not just in terms of privacy.
And arguably, if she was working at Utopia Planitia Fleet Yards on the Galaxy-class project, she presumably worked on a Starfleet orbital facility (technically, a number of facilities) over a span of years, where certainly enough data would be collected to make a poor replica of her personality, as in the show. I don't see anything suggesting, outside of basic biographical data, that she was being monitored in her civilian life.
La Forge, having interacted with that hologram extensively, and having surely read Starfleet's records... apparently didn't know she was married.
We're starting to lose that ability now.
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent would be able to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
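As a toy illustration of the "your own computer can do this" claim, here is a minimal local intent parser in Python. Every command name and pattern below is invented for the example (not any real assistant's API), and the point is simply that parsing like this runs entirely on-device:

```python
import re

# Hypothetical local command grammar -- all names and patterns here are
# illustrative, not any real assistant's API. Parsing happens entirely
# on this machine; no text is ever sent to a remote service.
INTENT_PATTERNS = [
    ("set_reminder",
     re.compile(r"remind me to (?P<task>.+) at (?P<time>\d{1,2}(?::\d{2})? ?(?:am|pm))", re.I)),
    ("local_search",
     re.compile(r"(?:find|look up) (?P<query>.+?) (?:near me|nearby)", re.I)),
]

def parse_utterance(text: str):
    """Return (intent, slots) for the first matching pattern, else ('unknown', {})."""
    for intent, pattern in INTENT_PATTERNS:
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return "unknown", {}
```

A real local agent would of course need a much richer grammar (or a locally-run speech and language model), but nothing about the task inherently requires a remote data center.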
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable and easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers would have made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. They could have co-opted Sun's old bad tagline: "Your network is your computer." But they elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal of centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for someone to realize a coherent easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
Not just secure private networks, but secure and programmable personal computing in general. The amount that I can actually do with my workstation PCs, let alone laptops or mobile phones, is now thoroughly restricted compared to problems that require a full-scale datacenter.
I originally enjoyed computing because, so to speak, it was an opportunity to own and craft my own tools, rather than being forced into the role of consuming someone else's pre-prepared product. Now we're being boxed into the consumer role in computing, too.
idk man. Computers are powerful. I like seeing what I can do with them.
I think that as far as the nascence of these features goes, the cloud model will beat the on-prem features any day of the week for several reasons. Lack of configuration to set up, ease of use from anywhere without network configuration, etc. are table stakes. But the biggest at this point is the sheer amount of training and A/B testing data you can ingest to determine what is useful for your end users.
The velocity of cloud-based products is nothing short of amazing and I doubt that on-prem will compete with the feature set and ease of use of always connected solutions until there are feature-complete, mature cloud versions to then bring in.
And, for better or worse, Dragon's speech-to-text is pretty damned good after a rather minimal amount of training.
I don't think there's anything stopping voice and intent recognition from coming back to our personal machines other than the ability to keep making money from having it come up to the cloud.
When I was working on Google Search what really astounded me is how we could leverage hundreds of machines in a single request and still have virtually no cost per search. The reason was that each search used a tiny amount of the total resources of those machines and for a very short time. A total search might have (made up numbers) one minute of computation time, but spread across 200 machines it only takes 300ms from start to finish.
That's the benefit the cloud will provide. You don't want to have a 1000-machine data center available at all times to store billions of possible documents and process your requests with low latency. If we went to a private-network model I fear that the turn-around time would be a lot closer to a human assistant's. You'd ask it to do things and then it would get back to you sometime later (seconds? minutes? hours?) when it had finished its research and come up with an answer.
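The latency arithmetic above can be sketched in a few lines of Python. This is a toy model (simulated shards that just sleep, with made-up numbers like the comment upthread), but it shows how a minute of total compute can finish in roughly one shard's worth of wall-clock time:

```python
import concurrent.futures
import time

def search_shard(shard_id: int, query: str) -> str:
    """Pretend each shard spends ~0.3s scanning its slice of the index."""
    time.sleep(0.3)
    return f"shard {shard_id}: results for {query!r}"

def fan_out_search(query: str, num_shards: int = 200) -> list:
    # Total compute is num_shards * 0.3s (about a minute at 200 shards),
    # but wall-clock latency stays near a single shard's 300ms because
    # the shards run concurrently.
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_shards) as pool:
        futures = [pool.submit(search_shard, i, query) for i in range(num_shards)]
        return [f.result() for f in futures]
```

Each shard holds only a tiny fraction of the index, so each request borrows a sliver of many machines' resources for a very short time, which is why the per-search cost stays near zero.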
Quick shoutout to Urbit here.
Except that's what I did for many years using a computer only as a terminal for an AIX mainframe. My mail was there, I browsed what was the web, used gopher, wrote programs, all stored there.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
Instead I get the current evolution. I want a 3rd party.
The same thing comes up in my everyday usage: I'm still on Win 7, and short of moving to Linux (yes, I should have already), I can't upgrade without becoming a product.
I want what we all imagined and dreamt of, and I pay for it.
What I won't do, is become a product.
Instead I'm stuck with multiple fake accounts on Gmail, using a pseudonym on everything (including programming contract sites such as Upwork), just to keep some iota of privacy, and to enjoy the benefits of what we all want.
Imho we need a new major party to emerge that will charge an initial fee (like Windows 7) and let us do what we want with those services (with caveats, of course).
But I fear we won't get that anymore.
It's not that complicated. Use a common distro like Ubuntu and everything's documented online.
But my main coding (admittedly amateur and earning very little) uses .NET. On top of that, most of the games I play to relax are Windows-only (as far as I know, for most).
I keep meaning to make time, but it just hasn't happened yet. I don't get paid as a programmer (I'm an English teacher), so I need to spend my free time earning money with what I know.
As always, one day when I have money to spare (or time, which is basically the same thing, haha).
I will fully admit I have not spent enough time in Mac to figure out the file system, but from what (little) I have seen, you're not in control.
I like my PC because I can see what sub-processes are running, who is taking up how much memory, install things where I want, and if worst comes to worst, manually change how Windows runs. (I apologise if you can do all that on Mac; as I said, I don't have the experience - I bounced off it hard.)
To answer your questions, you're in complete control with macOS, you can turn off SIP, turn off Gatekeeper and install whatever kernel extensions you want. Apple doesn't snoop on you like with Win10 telemetry.
I suspect that chances of would-be burglars or identity thieves breaking into Google data are pretty slim, in comparison to a home-installed system.
OTOH both Google and a private person can be strong-armed by a court order, or even a three-letter agency, to open up their AI knowledge vaults.
It's like saying "People would never break into a bank, when they could break into someone's house and steal their stuff"
It may be easier to break into someone's home-brew system, but generally it would be unlikely to happen unless you were being otherwise targeted. Whereas Google has a lot of users' data, which could make it a more attractive target.
Just how "home-brew" are we talking here? If there's any web-facing code that you didn't write yourself, whether commercial or open-source or whatever, that's a target for attackers that just scan everything looking for known vulnerable services.
If it is entirely custom, I'm fairly sure there are a few classes of common security errors that can be reasonably well tested for without direct human involvement. Which brings back the threat of attackers just scanning for all available targets.
If you're a "person of interest", you'll be attacked either way.
If you're just a regular person, like me, you won't likely be targeted. But my chances of becoming collateral damage are much higher in a centralized system: when it's broken into, my data has a chance of being siphoned off along with the data of someone less ordinary who was the target of the attack.
But there's a third case: "normal boring people" become interesting just by being together in a big group, even if no particular "person of interest" is among them.
The odds of someone physically taking my computer and decrypting my data are zero (128-bit encryption for the win!).
The odds of someone electronically breaking into my computer are higher.
The odds of my data being misused or misaccessed at Google are one: it will happen.
Maybe someday that will be a realistic endeavor, but it would take a lot of effort to set up and maintain your own personal versions of all of Google's services, and integrate them.
But it stores the private data about your location, your searches, your purchases. This could even be an encrypted, private, fire-walled bit of cloud rather than a robot in your house.
The point being your AI is serving you. And you can delete/edit this private data (or even the whole AI) if you wish.
Perhaps using Urbit as the network architecture since it already has pki. As Yarvin calls it, true 'personal cloud computation'.
Really it's the only sane way to live in the 21st century.
I've designed my home automation system on the concept that the only route to the Internet is my computer (and therefore, my home automation software). My computer is a secure, well-managed intermediary that can store my data, and decide when and how to receive and send data to the Internet.
The idea of dozens of Internet-connected devices in the home is terrifying in comparison, especially considering that badly-secured IoT devices are now powering some of the biggest botnets out there.
My light switch, however, cannot talk to the Internet. It has local-only communication protocols that are simple. It knows how to be told to turn the lights on or off or dim or a handful of other settings, but it's literally incapable of doing anything else... and why would I want anything different? Why should my light switch have Bluetooth and Wi-Fi and software updates and a miniature flavor of Linux... It's a switch!
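A toy sketch of that idea in Python, with a made-up command vocabulary: because the device's entire protocol is a small, closed set of commands, there is simply no message that means "phone home".

```python
# Hypothetical minimal light-switch protocol. The whole vocabulary is
# enumerated below; any other message is rejected by construction, so
# the device is literally incapable of being told to do anything else.
def handle_command(raw: str) -> dict:
    parts = raw.strip().lower().split()
    if parts in (["on"], ["off"]):
        return {"state": parts[0]}
    if len(parts) == 2 and parts[0] == "dim" and parts[1].isdigit():
        level = int(parts[1])
        if 0 <= level <= 100:
            return {"state": "dim", "level": level}
    raise ValueError(f"unrecognized command: {raw!r}")
```

A firmware-update or telemetry command cannot be smuggled in, because the parser has no branch for it; that is the whole point of keeping the protocol simple and local-only.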
When my AI can talk directly to your AI, we can transfer mail etc without cloud services. If it knows our social network, perhaps it can remotely store encrypted backups of our data, but only with people who we already trust.
It's just not easy to do, both in setup and maintenance effort. Most people would not care about their privacy nearly enough to justify that.
When's the last time Google risked itself or its business, or any tech CEO risked their livelihood, for the sake of the greater good? The problem isn't necessarily the knowing-everything part; it's who does what with it that's the problem. I can't really think of any company or person with influence in tech that'd be willing to dive onto that bombshell to protect us all.
February of 2001.
The former CEO of Qwest, a massive telco, spent years in prison for insider trading. He says this is because he resisted the NSA's demands to tap Qwest's network and hand over customer data.
A truly user-aligned AI assistant would be great. Ideally in the future these things will not be tied to indirect business models, but rather will be something you buy and all data/services will be under your control.
In the Star Trek world they had no advertising because they were a communist society. Everyone dressed the same or slightly differently based on rank. It's interesting how the new movies play over that.
In Star Trek you couldn't choose your AI. In our world you can. At the start of their development most of them are targeted at selling you stuff - but the industry is young and who knows where it will go.
While there are elements of communism depicted in the Federation as a whole, those are facets of a global post-scarcity society that has somehow evolved beyond the less "progressive" bits of human behavior. I'd argue that the biggest fantasy of Star Trek isn't warp drive, but the notion that humans are somehow less violent than Klingons.
You cannot compare the world's biggest seller of advertisement space with the ST universe. The motivations aren't aligned: Google/Alphabet want to make sales based on my information.
I agree that I found these oh so clever AI fantasies interesting in my youth, still do to a degree. But I always pictured the data being held inaccessible to humans in general ("Where's my wife right now?") and not in the hands of a golden few with no oversight.
The public areas which were under surveillance on Star Trek tended to be only on military ships and star bases. I don't remember seeing much surveillance in public areas on planets. There certainly wasn't the sense that everyone was under surveillance on every street and in every shop, unlike most people in major metropolitan areas on Earth today. Nor was there anything on Star Trek like the ever-present spy satellites that can see in great detail anywhere on Earth today.
For the public areas which could be observed through cameras on Star Trek, the surveillance seemed mild compared to today because of Star Trek's lack of massive computers and artificial intelligence analysing what is seen for anomalies, using facial recognition, constantly recording everything and having those recordings instantly available for playback, sophisticated search, and computer analysis.
The reading and viewing habits of Star Trek denizens weren't recorded and analysed, unlike those of many people on Earth. Their positions weren't tracked wherever they went, unlike those of many people on Earth.
The so-called "24/7 surveillance" of Star Trek was very limited and even quaint compared to what we live under on Earth today.
Plus, most folks think ST is where hippies took over. Center of Federation is in San Francisco...
Remember kids: Don't intoxicate yourself with substances we do not condone. :)
Incorrect, in the original series, there was a currency ("credits") that was explicitly referenced several times; it was also referenced in at least one, and possibly more, early TNG episodes.
Sometime in the TNG era, Roddenberry laid down an edict that money, including the "credits" that had been repeatedly referenced previously, did not exist in the federation, and so they weren't mentioned again.
I don't think that's really accurate; the Ferengi were portrayed as greedy merchants focused on profit starting fairly early in TNG without direct reference to currency (gold -- not the later "gold-pressed latinum" -- was mentioned, IIRC, as an item of interest, but not in any context which implied it was used as currency); I think gold-pressed latinum was introduced as a currency in DS9 because DS9's role as a commerce hub was central to the theme of the series, and having currency just made telling stories about that a lot more convenient.
Throughout much of history, gold in standardized sizes was a common form of currency. Gold-pressed latinum in standardized "slips", "strips", "bars", and "bricks" is exactly the same thing.
Star Trek still had merchants who sold various wares. That would not be profitable if nothing was scarce.
They still had planets that lacked necessary medicine, requiring The Enterprise or some other ship to go on mercy missions to deliver the meds.
The Star Trek universe had pleasure planets which had highly desirable things that other planets did not.
There was clearly a shortage of starships and crew, as The Enterprise explored alone and not in a fleet, and couldn't just create a hundred others to help it when it was attacked by some alien enemy.
The Enterprise couldn't even use their on-ship replicators to make themselves some dilithium crystals (fuel) when they ran low.
The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
That's part of the Star Trek fantasy. But so is, "Computer, locate Commander Riker" and "Computer, use personal logs and personality profiles from compiled databases to create a personality simulation of Dr. Leah Brahms."
Google has a strong incentive to not allow their aggregated user data to leave Google-- the behavioral data Google collects is the reason why Google is valuable; if they start shipping that data off to third parties, suddenly the third parties don't need Google anymore.
(Same with Facebook-- they're not "selling" your data; they're selling the opportunity to target you based on your data, but the data itself is too valuable to Facebook to sell.)
Except the government, who gets unfettered access. Not into conspiracy theories, but I'm definitely not a fan of this (and it goes for all social media and technology companies).
Do you have a source for that?
Yep. They all do it.
Can you provide a link for your claim, or is it just your opinion? Because there is no proof of that.
ML relies on large data-sets and if anyone tried to release a personal device it simply wouldn't even work, let alone compete with the mass surveillance google/ms/amazon are bringing to bear.
Unless the state-of-the-art in AI suddenly morphs, we seem to be stuck between giving up our privacy or having vaguely intelligent AI.
I personally fall heavily on the privacy side of stuff, but I can see the intellectual and commercial appeal of pretending it doesn't matter in order to get there.
What needs to happen is a company needs to come along and create AI that is trained off of generalized information... some kind of socially accepted public data set... then the trained core is sold as a seed to individuals who then feed it their personal data.
It'll be the equivalent of buying an AI "teenager" and slowly training them to be an "adult".
As sensors become richer and the data becomes more valuable to the ML, consumers are becoming more aware of their privacy.
That means to get to a 'good seam' in the future instead of trawling through trash, you're going to have to convince millions of people their interests won't be affected.
That means in time there is an opportunity for a Google-killer with a different business model, one not based on using the raw data, or one mediated by the use of intelligent agents. Google goes down because its stakeholders depend on getting at the raw data.
Which is kind of funny, because even if it might be accessed by personal mobile devices, the Star Trek "library computer" AI was never "small and physical, easy for a single person to entirely own and unable to remember more details than a human", it was an aspect of a large server (or networked cluster, the actual architecture is somewhat vague) that was part of a capital ship or base, had access in the server/cluster to a library of very nearly all generally available knowledge and extensive personal information about both its users and about people with little direct connection (and could reach out across a galactic network to access additional remote information sources to handle requests).
"Unfamiliar algorithms run on machines far away" is much more like the source of the "Star Trek dream" than "small and physical, easy for a single person to entirely own and unable to remember more details than a human".
Not just crewmembers: TNG showed some of the broader implications (both useful and creepy) of the convenience-oriented panopticon, e.g., when Geordi used the Enterprise library computer to construct a simulation from data (including personality profiles) of Dr. Leah Brahms, who later meets him (and encounters the simulation.) (S306 "Booby Trap", S416 "Galaxy's Child")
I suspect a huge part of the panopticon culture would be / is being informed that you're being peeped at. 99% of the time someone asking "Computer locate commander Riker" involved commander Riker knowing all about who's looking for him and why and having a substantial conversation with the requester.
I don't recall any plot along the lines of Deanna getting jealous and spamming the computer all night asking where Riker is and he better not be in that cute ensign's bedroom... Because it seems logical the computer would inform him each time and he would eventually tire of the interruption and nature would take its course, WRT his relationship with Deanna.
A better analogy would be technically I could walk up to the company president's office and stalk him, but culturally that is so not going to fly and I would have a lot of explaining to do. Merely using the computer instead of walking there in flesh isn't a major cultural shift.
However, every single "holodeck creeper" plot line involved the simulated attractive real world woman not knowing she's being simulated until the plot reached maximum spaghetti spilling cringe, which is one of the few Trek panopticon situations where people being spied on did NOT know they were being spied on, which seems very un-trek, although it made for some entertaining stories.
An alternative interpretation is I believe over the course of the series every attractive woman on the ship was simulated on the holodeck at least once by at least one lonely guy, and it's possible that culturally they just got used to it, although I find that unlikely. People do get conditioned to become used to the weirdest things, so it's not out of the realm of possibility. Possibly a culture of what-happens-in-Vegas-stays-in-Vegas develops, and it's just the sexism of the TV show that they never showed the women turning the tables on their fellow male crewmen on the holodeck.
1. That the information were secret (between you and the service/device/implant), not shared with a whole company and its third-party interests.
2. It wasn't making money for a third party after the initial purchase price for the device, service, etc.
So, better, but nowhere near "on the device only".
First party report of reviewing recordings: https://www.reddit.com/r/technology/comments/2wzmmr/everythi...
News article with more investigation and citations: http://sanfrancisco.cbslocal.com/2015/03/12/strangers-apple-...
When you use Siri the things you say will be recorded and sent to Apple to process your requests. Your device will also send Apple other information, such as your name and nickname; the names, nicknames, and relationships (e.g., “my dad”) found in your contacts, song names in your collection, the names of your photo albums, and the names of apps installed on your device (collectively, your “User Data”)
By using Siri, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data
But does it? Does it have to know your birthday? (Never mind the fact that birthdays are somehow part of a superkey for your identity.)
Why should it know my residence, my spouse, or my CC# (with Apple's TouchID maybe it won't need to)?
Google's concept of AI is too creepy for me. It can be useful without being creepy. They're not even trying to make it less creepy.
Overlay on this the subtext that the NSA and other TLAs are monitoring all of this (never mind other countries). While I may trust Google, I don't trust them not to be forced to collude with the government.
How could it be otherwise?
I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.
This already leaks more data about me and my activities than I'm strictly speaking comfortable with.
So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.
To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.
Reading stories like this makes me want to carry a personal tracking device even less.
 People tend to have fewer emergency reasons to cancel when they can't reach you 5 minutes before the appointment.
I live in a big city. There's always a phone nearby, including landlines and payphones (they're still there precisely for emergency reasons). There are also passersby who can help me. The risk of being all alone, having an urgent need to call 911, and being unable to do so is much too small to warrant carrying a phone around at all times.
All the cities I've been in have had virtually all their payphones torn out long ago.
The courts are still deciding when/whether that information requires a warrant.
But short of anybody wanting to aim a missile at me I figure that I'm better off with the courts in my country where such information does require a warrant at present (and without any indication that this will change), and without the company controlling those assets trying to 'mine' my profile in order to advertise to me more efficiently.
A close friend is a longtime professional software developer. Always interested in mobile. We used to have extensive discussions about why I preferred carrying a small flip-top notepad and a pen vs. a phone or tablet or whatever with a stylus (many have come and gone over the years). In the use-case scenarios I put forward (small lists, secure disposal, privacy, 'battery life'), my little notebook frequently was the best approach for me. He disagreed, but that was the point of chatting about our views.
The big change is that the new stuff offers the ability to do things in a more efficient way. While it seems to offer very little benefit for individual tasks, some people will see a dramatic benefit while using it for the multitude of tasks that clutter their life. Other people will benefit simply because it enables them to do things that they would not have done before.
None of this is meant to dismiss your points. Personally, I find all of this data mining creepy even when I am confident that they are collecting the data for my benefit and that they won't use the data to my detriment when they are using it for their own benefit. Yet many people don't share that world view. Those people will benefit from Google's services, while nothing is being introduced to hinder the lives of those who don't use those services.
Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.
Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.
This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.
This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.
This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eye when you think about them; they just feel averagely satisfying.
A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.
So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.
The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at the correct time, enhanced with other information they have gathered from the web.
I don't think there will be any major backlash from consumers. The old comparison about the boiling frog applies here.
Even then, we have 'nanny-cam'.
Meanwhile actual geeks and hackers will be fine, because we'll have used our intuitions about these things to choose privacy conscious alternatives to mainstream technology.
In addition to which it is increasingly the case that 'privacy' is regarded as an elite thing, and thus will ultimately be sought after by less educated classes. Like how green lawns used to be for the rich to show off that they didn't need to grow crops to survive and now everybody has them and doesn't know why.
Remember Hillary Clinton and the emails. Remember Colin Powell and 'why can't I use my pda in this highly secure area'. These people are the dinosaurs, and in the business world if you're not hack-resistant you're going to go bust.
> I'm not sure what it would take to get more people to really care
Their interests get attacked or violated. That is what.
Did you intend to write 'Colin Powell'?
Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
Totalitarian surveillance is here. In the West. Leaked documents aside, it's too easy to do to imagine a state actor not doing it.
Data breaches of differing severities occur every day, at nearly every company. I would have thought Yahoo was big enough and smart enough to avoid it; but no. Not Yahoo, not Sony, not security contractors, not credit bureaus, not Apple (à la the celebrity photo leaks), not Google (stories abound of individual Gmail accounts being hacked).
(I have worked at Google in the past, may again in the future, and am not there currently.) You say this as though anyone at Google (or Microsoft or wherever) can go in and search for 'falcolas' and look through your GPS history.
I'm honestly not sure there is a single individual at the company who has that power. I honestly think the best thing Google could do is publicize their internal training and documentation on handling personal information, because those regulations made me a lot more comfortable with giving Google, the amorphous entity, my data, because no person is going to be looking at that data.
>, not Google (stories abound of individual GMail accounts being hacked).
One of these is not like the others, unless you're talking about something I'm not aware of. Hacking an individual Gmail account requires guessing or stealing someone's password, which is not an attack on Google's infrastructure (unlike the Yahoo, Sony, Apple, etc. examples); it's an attack on a bad password.
In what way is this not exactly the nightmare scenario in 1984? You can argue you don't need to install this, but 10 years ago you didn't "need" a cellphone either. The risk is the consolidation of information and the potential for misuse/control. And not so much potential, but the inevitability.
Even if Google is perfectly secure from bad-actors today, they might not be tomorrow. And if they themselves suddenly switch to being a bad-actor, they aren't going to throw all that data away and start from scratch first.
This strikes me as a matter of semantics; does it really matter if I'm targeted whether they hacked my account or hacked Google?
> I'm honestly not sure if there is a single individual at the company who had that power.
Think harder. Who has the root access to the servers holding the data? Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
I think this is incredibly important. If your information is put at risk due to bad practices by Google/Yahoo/Apple/Facebook/whomever, that's a problem to be taken up with the company. If you use insecure passwords and someone is able to access your information that way, then the problem is with your passwords, not with the platform.
>Think harder. Who has the root access to the servers holding the data?
As far as I'm aware, no one. Like I said, from my experience, accessing personal data and user information as an engineer required a lot of red tape and approval from 'the powers that be', and violating those rules would get you fired faster than anything else.
>Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
Here I agree with you, probably not (or very little). They obviously have public privacy policies, but you have no proof that they abide by those, and I don't know (and doubt that) they get audited or whatnot to make sure that those policies are followed. Which is why being an employee made me more comfortable. If nothing else, it meant I'd know ;)
Do you go out in public? Because if you do, some company could be recording you on CCTV, and the company that makes the CCTV equipment could sell the business to Google, who could update it to use the CCTV footage in AI learning, which means that someone could eventually look up your face and see you were at a smut store 6 years ago.
At some point you need to draw the line, there is no perfect privacy.
That said, you can limit your exposure. Adding all of these Google implements creates a far greater surface to lose privacy through than not using all of these Google implements.
People routinely underestimate how much can be gleaned about you from correlating such "incidental" data. Thus I feel it's important to remind them of what it can cost them.
Is the benefit worth the cost? To some, yes. To me, no. And that's why I posted this, an explanation of why I don't find this level of information gathering and correlation by a private and profit driven company acceptable.
> Who has the root access to the servers holding the data?
How do you explain this then?
E.g. Wakes up at 5:30 am, travels to a construction site, lives in a house with a large number of people -> signals possible immigrant. Or this:
Detecting Islamic Calendar Effects on U.S. Meat Consumption: Is the Muslim Population Larger than Widely Assumed?
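The kind of inference being described can be sketched with a toy example. Everything below is invented for illustration (the signals, labels, and records are all hypothetical, and real profiling systems work over far richer data); the point is only that a couple of mundane data points can already skew a probabilistic guess about a sensitive attribute:

```python
# Toy illustration: inferring a sensitive attribute from "incidental"
# signals (wake time, commute destination). All records are synthetic.
from collections import Counter

# (signal tuple) -> observed label, as a profiler might accumulate them
records = [
    (("wakes_5:30", "construction_site"), "group_A"),
    (("wakes_5:30", "construction_site"), "group_A"),
    (("wakes_5:30", "construction_site"), "group_B"),
    (("wakes_8:00", "office"), "group_B"),
    (("wakes_8:00", "office"), "group_B"),
]

def infer(signals):
    # Count labels among records sharing the same incidental signals,
    # then normalize into a crude probability estimate.
    counts = Counter(label for sig, label in records if sig == signals)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Two mundane data points already tilt the guess heavily toward group_A.
print(infer(("wakes_5:30", "construction_site")))
```

The real-world versions of this are just larger and subtler: the labels come from whatever ground truth the data holder has, and the signals come from everything else it collects.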
We have to think about data not just in terms of our relative safety, but in terms of what could happen in adverse circumstances. And not even just in terms of our own government, but foreign governments.
A very limited number of Google employees have access to private user data (only when it's vital to their work) and they have strict policies in place (data does not leave the data centers etc.).
Which third parties are you referring to? As far as I know, Google does not give their users' private data to a third party.
Third parties get my voice recordings for "improving the voice recognition service" - what if my name is mentioned in the background of one of those recordings? What if I'm not a savvy user and add private data to those recordings?
You're also talking about what's in place today. If I give Google my data, that data is probably going to stay with Google as long as they are a business (and potentially after, if Google were ever liquidated and their assets sold off). What measures are in place to protect me then?
I'm responding to a comment that said trusting Google == trusting ALL Google employees, which is not true. Trusting Google with your data is believing that having some convenience (a mail service like Gmail, an intelligent assistant, etc.) is worth the risks you are talking about: Google drastically changing their policy, or being bankrupt and acquired by less scrupulous owners, etc.
Let's not just act like anybody at Google can look at your data and play with it, or a disgruntled employee will suddenly click a button and release all users' data on pastebin...
What guarantee do you have that these policies will never change in the future? Or are you simply assuming that risks never change?
More important is your assumption that the decision would even be made by Google. Outside forces such as governments may force Google's hand.
> able to say
It doesn't matter what is said. If Google had sufficient deniability (perhaps an NSL gag order? or a sufficiently high purchase price?), they can say user personal data is secure while sending it outside their control.
The only guarantee that would be believable is if they indemnified their users against any future damages derived from their data collection, and there is no way Google (or any company) would willingly accept that kind of liability.
Being ordered by a judge to do something obviously supersedes any policy, but that's the case for any person and business.
Which we have to take their word on and hope that never changes in the future, even though Google might not be the party with the authority to make that decision. Even when they are, business plans change and a pile of potentially profitable user data is a very powerful temptation towards moral hazard. Only a fool would claim that this wasn't a risk.
> that's the case for any person and business.
Only if you deliberately ignore the entire point that the data shouldn't be stored at all by 3rd parties. A business that sold a real product (instead of a service masquerading as a product) would run locally and no data would be put at risk.
If a judge orders me personally to reveal something, they probably need a warrant and there is a process by which I can challenge that order. If, however, that data is stored on Google's servers then I don't have standing to challenge any interaction between Google and the government.
You forgot: every single state which Google is subject to.
1. Concern that a single, third-party entity (Google, in this case) might peer into every aspect of our lives, and/or reverse-engineer an exhaustive catalog of our entire lives, by virtue of data collection.
2. Concern that many consumers will unwittingly opt into such control, unaware of the privacy they're relinquishing, and unable to make informed decisions about the possible applications and consequences of the tradeoff.
3. Concern that the custodian of all this personal data (Google) might use, sell, transmit, or turn over the data in ways we had not anticipated or believed we'd consented to.
Personally speaking, I understand these concerns but also understand the potential upside. I'm not 100% sure where I stand just yet. The aforementioned bullet points are presented without editorial comment; just trying my best to articulate what I believe to be the crux of people's concerns here.
Having said that: Google is not in the business of making your life easier, but in the business of selling you ads. The data that Google collects about you is incredibly powerful, allowing them to go from a "simple" manipulation to sell you stuff you wouldn't otherwise buy, to full-scale blackmailing you if they see it fit (not saying that this is happening, but if they wanted to, who would stop them?).
It's putting too much power in the hands of a single, amoral entity (like all corporations). That's not good.
The law, and the economic interest of all the rich shareholders that care about the company's reputation.
I don't want a company that employs tens of thousands of people, along with a government in a foreign country, along with all the governments on the data route in between, and their employees, civil servants and assorted snoopers of all shades, to have access to the artificial assistant's communications and thoughts relating to me.
All these organisations are made out of people. People with power are inherently untrustworthy; they need enforcement mechanisms to be kept in line, and enforcement mechanisms need to be activated every now and then to stay in working order. That is, occasional abuses are required to keep abuse in line. The thin blue line wavers like a pendulum: it's how we know it's working.
Edit: Interesting comment in another thread: https://news.ycombinator.com/item?id=12639530
Sure it would be useful. Sell the assistant as a locally installed app that guarantees personal data never leaves the LAN, and it will sell.
> sure they'll collect your info
Only if you let them. Demand better behavior from their software and business practices.
> Totalitarian Surveillance or Data Breach Concerns
What you seem to be missing is that the concern isn't about today's level of surveillance or today's data breach risk. Data generally persists indefinitely once it makes its way into a database or logfile.
To claim that these are low-risk requires believing that at no time in the future will surveillance increase or data breaches become more common, ... or that the company will never run into financial trouble and need to sell your data, ... or that a breach will never be forced by a government (not necessarily yours or Google's), ... or that your data will never be aggregated into other databases, increasing its "predictive" power and attack surface, ... or any of the other unknown ways your data could be used in the future.
Humans are already known to be terrible at assessing risk, especially when there is a very large separation between cause and effect. Smoking today giving you cancer many years later is a traditional example. We already know data breaches happen, well-meaning employees make mistakes or succumb to corruption, and external powers such as governments or organized crime occasionally take away your agency. Do you really want to claim that none of these risks will ever materialize? Because that's the actual wager you're making when you use Google's products.
Maybe we've turned a corner and will never go back to that. But I don't have confidence yet.
No, not really. Restaurant recommendations and traffic reports are simply not that hard for me to find on Yelp or Waze myself. The "anticipation" here doesn't really help me in any material way.
This is what Elon means when he says AI is like inviting the devil. We have this algorithm in our mushy brain. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.
Now imagine this algorithm from the mushy brain is run on a silicon chip, with gigabit bitrate, retains almost everything indefinitely and can learn from entire history of humanity.
That algorithm would just need to deceive us until it was powerful enough to wipe us out in one sweep.
Google already manipulates humans psychologically to click on their ads en-masse. Giving them more of your personal data is just feeding the devil.
An interesting quote: “we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”
There's no reason to believe Google isn't doing the same thing. And I strongly suggest reading the original article, if only for the first two or three paragraphs.
I realize there's a grey area though.
Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.
I wonder how much infrastructure, fancy-pants machine learning, and effort went into just creating those useless alerts.
Google, as a company, has already solved the problem they were created to solve: searching the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.
I guess people have different experiences. Personally, I know how to get home from work, so I don't feel the need to turn on my GPS every time I drive home. So I appreciate getting notified when there are notable variances in drive times, without having to look for them every day.
I need to get home on time to pick up the kids, but mostly I just leave a bit early...
Perhaps an exaggeration, but the point is: even if you trust Google today, there's no guarantee that data will always be held by the people who are Google today. We know for a fact the NSA had access to all Google data up until at least the Snowden leaks. To me that's the concern about privacy: you have no idea how it can be used AGAINST you in the future.
In this case it's "what's the danger of all this lazily deployed insecure ubiquitous surveillance gear in a political worst case scenario like a descent into totalitarianism, mafia statism, etc.?" That's not an unlikely thing. Complex societies undergo bouts of collective insanity or descents into pervasive corruption with disturbing regularity on historical time scales.
Personally I think the USA is one 9/11 scale (or worse) terrorist attack or one seriously painful economic crash away from an American Putin or Chavez (or worse). Which we get depends on which side manages to field the most charismatic demagogue. If that happens all this total surveillance stuff will be mobilized against dissenters on an industrial scale and with a significant amount of public support.
You limit things like surveillance to limit moral hazard. Future generations are likely to look back on the wanton deployment of all this stuff and say "what were they thinking!??!?"
Not quite sure how you got that out of "with every new technology comes new responsibility". That's neither singling out surveillance nor limiting to moral hazard.
or a Japanese in the US: https://en.wikipedia.org/wiki/Japanese_American_internment
We tend to think that the Nazis were crazy and out there, but we weren't too far off not that long ago, and we can get back there pretty quickly.
"Google's mission is to organize the world's information and make it universally accessible and useful."
One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the world's information, and then expect their consumers to give them a pass and not call them to account for how they privatize user information.
Looking at the information that Google organizes and makes accessible and useful I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented when compared to "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite". Google is organizing exactly that data which Google can monetize, which is not the world's data. Not a lot of people want to buy data on deforestation so it's much more difficult to get Google to put resources into that. How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.
Simply stated, Google needs to stop acting benevolent and start fessing up to attempting to be omniscient in order to be all knowing about its users, not "the world's data".
More directly to the point, I was (clearly) comparing the relative resources Google invests in some data sets vs others. Are you arguing that Google invests comparable resources in this type of data compared to the resources it invests in understanding Google's users' data sets?
Not sure what your point is, either: do you want to get a notification in the morning saying "try to leave early today, as an accident has caused increased traffic," along with another one saying "remember to save to buy an electric car"?
Gmail was initially a product started by one guy at Google, and was not a project born out of Google corporate philosophy or business strategy. --> https://en.wikipedia.org/wiki/History_of_Gmail
Google Books has been mired in lawsuits brought by the Authors Guild and the Association of American Publishers with the issues including copyright, privacy, censorship issues. --> https://en.wikipedia.org/wiki/Google_Book_Search_Settlement_...
Re: Google Earth... This one is fully being leveraged for monetization, especially with mobile's commercial possibilities finally being realized. From Wikipedia: "Google Earth is a virtual globe, map and geographical information program that was originally called EarthViewer 3D created by Keyhole, Inc, a Central Intelligence Agency (CIA) funded company acquired by Google in 2004 (see In-Q-Tel)."
some more reading on one take on Google books:
> One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the worlds information
One wonders what's that 99.9% that you miss. You mention:
> I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented
What equal representation do you want? A notification when you arrive at home telling you "this is some new discovery on dolphin linguistics"? For what it's worth, even that I'd bet you can get, by letting Google Now know of your interest in the topic, or subscribing to a science news channel in YouTube.
> How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.
Is it even known? Google's certainly not going to do the research; research isn't organizing. Would such an investigation even get funding from anyone, to pay the researcher? But supposing it's done, and it's published in some paper or some book, what's your best chance at finding it? Google Search, Scholar, or Books.
> I was (clearly) comparing the relative resources Google invests in some data sets vs others. Are you arguing that Google invests comparable resources in this type of data compared to the resources it invests in understanding Google's users' data sets?
> Before Google Assistant there indeed were those other products, which, I honestly don't get why you chose these ones, they aren't exactly great counter arguments.
Because before Google was investing a dollar in any of:
> "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite"
it was already investing plenty of resources in those products I mentioned.
Apple has made preserving user privacy a paramount goal, investing in research and technology to achieve it with minimal loss (however much it is) of (intelligent) functionality.
I find that a very strong point for the Cupertino based company.
(edited for legibility)
Both Apple and Google comply with federal warrants, etc... That's obvious.
Neither of them have any intention of ever letting a 3rd party access user data, that should hopefully also be obvious. As in, neither company sells your information. It'd be a PR disaster, and in the case of Google it would be a massive loss of revenue as it would undermine their entire ad business.
So the only difference is what data they actually have access to, and it's not actually that different. The big difference is iMessage has end-to-end encryption by default so long as you're talking to another iMessage user. That's sorta it, though, and that gets largely neutered by the fact that the messages are then immediately backed up on iCloud anyway and that end-to-end encryption is lost in the process (otherwise you couldn't restore to a new device). Google now offers that, too, via allo's off-the-record, though. Everything else is pretty much the same between Apple & Google with regards to meaningful privacy.
After Snowden made Apple's collaboration with the government economically untenable, Apple may now be willing to let users have control. They fought the FBI to protect privacy in what was a transparently political charade. They've built hardware and software key protection into the iPhone.
But this is a change, and maybe it's a lie, or maybe it will change back, or maybe they just won't succeed. I'm not rushing out to buy a MacBook.
People say, competition will ultimately take care of it. Yet, there really isn't a serious competitor for Google's search engine. And don't even get me started about social networking with respect to your private lives, where the only player is FB as far as I can see.
People say they don't want the government involved, and often for good reason. But if there is no expectation that these tech giants will self-police when it comes to privacy, and people don't want these organizations to be policed by the government either, then how exactly does this play out? How far is too far before we start demanding more respect for our rights from these organizations?
Another thing to think about: when dealing with tangible goods, the creative destruction of capitalism is somewhat reasonable to justify because it is usually easy to see. How does it work with information? Suppose FB just completely blew it for a few quarters in a row, and starts tottering towards its demise, what happens to the "defensible barrier" called data? Does it belong to FB to do as it sees fit, like the assets of a company about to be liquidated? Or is FB going to "return" it to the people from whom it got it? If some other company now got possession of its assets, including data, what is the expectation around what are reasonable uses for such info? Or, is FB, with its trove of data about every single person who has held government office, now just too big to fail?
And all this can be asked just of the data that FB collects from you directly by asking you to fill it in. What about the stuff that it "infers" behind the scenes? What about the "connections" it adds to its social graph without your permission in order to provide a "local marketplace", which apparently gets rid of the "private information" challenge? Not that Google is any better in this regard, of course.
I think the time has come for some serious thinking about checks and balances in the privacy arena.
Is the market really so bad that Google needs to invade people's privacy to this extent in order to grow?
I bet Google's CEO will not use the products himself. Google is almost behaving like a pusher, promising people comfort at the expense of their livelihood (the chilling effect).
Perhaps this should simply be illegal. If people want a personalized AI assistant, why not train the AI on the user's device? I seriously doubt that it has to know everything about everybody's behavior in order to know some things about the user's behavior.
I've been experimenting with spending less time on my devices. It's hard because I'm addicted, but life is more fun when it's being lived without having to even think about technology; leaving devices of all kinds at home and just sitting in a park is a real luxury.
Thing is, I just don't need AI for everyday things. AI to help solve big engineering, medical problems, great, but to help me schedule my life, not really.
Even when travelling, things like Google Maps and Translate just isolate and distract me in some ways. Asking a local about something is really helpful, and you can get more out of the interaction than just directions. It's rare that language truly is a barrier, I find.
I'm even questioning how much I really need a smartphone; it is mostly just a distraction. I remember actually being socially pressured into owning one when I was 18. I never actually stopped and asked myself if I needed one; it's just something that has become a "must have".
I know what you're saying about AI being out of the way, though, and it would be excellent if it was completely my data and it worked for me and not for a third party. For example, if it kept my data private while paying my bills, that might be nice. But basically it would work to keep me focused on the real world, which could be achieved without tech?
I might sound a bit anti-progress here, but it would be nice to just see the right progress.
Hell, that's the whole purpose behind devices like Google Home and Amazon's Echo. People aren't buying them because they look nice; they are buying them because they are a simple voice-operated AI that can answer questions and do things for them.
You might not want or like it, and that's okay, but don't act like nobody is asking for this. People have been begging for it since computers were a thing.
I totally disagree people wanted all their personal information fed to private corporations in the "cloud" BTW. Completely disagree. It's Orwellian.
Control the lights in the house, make sure the garage door and front door are closed and locked at night, adjust the temperature, play some music (specific band, song, or genre) on one of the TVs or on my computer, read me things like the weather, calendar entries, emails, ask it how long it'll take me to get to work right now, ask it about conversions or math-ey things I need done, have it make notes (specifically notes that can alert me when I get to work, or the next time i'm at a supermarket, etc...), set alarms, create calendar entries, send emails/instant messages (almost always short and sweet, but still useful). Have it lookup "knowledge graph" kinds of things like what time a store closes at, when does a music album go on sale, what kind of reviews did [movie] get, when does [movie] come out.
And that's just the stuff I've used it for in the last month or so.
On the phone, it can do even more:
Have it navigate me places, and ask it how long until the next turn while on a motorcycle (through a bluetooth setup in the helmet). It will alert me when I need to leave for work using my usual route, it can infer where I am going when i'm not navigating there, and can alert me about traffic incidents on the route, and suggest an alternate route (this one is fucking cool when it happens). It gives me severe weather alerts for my location, notifies me of things like price drops or new releases of things i'm interested in, and shows me almost an "rss feed" kind of thing for news articles that I'm probably interested in (this one is hit or miss, but i'd say every time I look at least one of them is something I wanted to know about). Just today it gave me a notification that I told it to remember. What I said was "remind me to call my doctor tomorrow afternoon", and about an hour ago it put a notification on my phone saying "call my doctor" with a "call" button on it. When clicking the call button, it started dialing my doctor's office. That's what I want from an AI, and it's working great so far.
>I totally disagree people wanted all their personal information fed to private corporations in the "cloud" BTW. Completely disagree. It's Orwellian.
Well that's a strawman... It's like saying that "people wouldn't want to hand over hundreds of thousands of dollars to get some wood and cement" when talking about buying a house.
People aren't dumb, they know these devices aren't magic. They know that if you ask what the weather is, obviously the device needs to know your location. If you ask it to play some music you like, it obviously needs to know your preferences. If you ask it how long it'll take to drive to work, it needs to know where you work. In most cases people don't want to spend hours upon hours setting up every little setting to tell the system all this information; they just want it to work, so it just works. It infers information, it remembers preferences, it figures out connections that you didn't even know were there. And in return you get a wonderful device that can help you in your life. If you don't want that, it's fine. You can not use it, you can have Google delete all information associated with you, and you can disable all tracking and gathering.
But let's not pretend that people don't want the outcome that handing over information can provide. They want the AI, and in order to do that, the AI needs to know them. These devices are being sold as being able to learn about you the fastest, and use it the most, it's not like they are being shady here.
> People aren't dumb, they know these devices aren't magic.
Yes, they know that they aren't magic, but for the most part they don't know how they work, so they're essentially black boxes with deceiving trade-offs; also, almost no one reads the ToS.
>they're essentially black boxes with deceiving trade-offs; also, almost no one reads the ToS.
See, people keep telling me that they are "deceiving", but I just fail to see how. Nobody is saying that these work without your personal information. Nobody is saying that they aren't using your history to "teach" the service. Nobody is hiding the fact that they learn your preferences over time to get better. Why do you think they are deceiving?
To me it's quite the opposite (as this article shows). People are asking for more learning, more automation, more "AI", and the companies are putting out headlines like "Our AI can learn about you and your wants and needs FASTER than our competitors can!"
That's not hiding or deceiving...
For example, having content fed to you is potentially unhealthy. "Google, read me today's news": are you telling me you just want to be fed any kind of information based on some kind of "preferences"?
As you said, it's a choice, but don't pretend people totally know what's being done with the data.
I hope there are age restrictions placed on this kind of thing.
Come on now, you can't handwave away what to me are very real benefits as "perceived convenience", and just because it seems like a lot of personal information to you doesn't mean it is for me.
Yes, i'm letting them see a lot of personal information, but that's not a bad thing. I get tangible benefits from it (not just this "AI", but many many other services), and I'm actually asking for more. Right now it only learns my music preferences from when i play stuff with Google Music, i'd love to feed my soundcloud history into it to give me a more well-rounded set of preferences. I'd also like to feed my netflix watch history into it to let them give me better tv/movie recommendations. This isn't a "mistake" by me, this is a conscious decision I am making to improve my life by giving them more information, just like how it's not a "mistake" that someone pays money for a service they want/need (even if you personally don't want or need that service).
Also, i'm not so sure that "AI" is the right thing to call it either, but it's the term that was chosen, so it's what i'll call it. I read somewhere once that "AI" stops being "AI" when we understand it, and just starts being "programming" at that point, and it makes a lot of sense. As we get better at making programs that feel "natural", it's less "magical AI that can do anything" and more "well understood programming techniques".
>For example, having content fed to you is potentially unhealthy. "Google, read me today's news": are you telling me you just want to be fed any kind of information based on some kind of "preferences"?
Come on now, are we going to have an actual discussion here or are you just going to build up strawmen to kick down? First off, it's not my only source of news. I'm not having them "feed" me anything. Second, it's more of reading headlines that i might be interested in. For example, this morning it showed me 5 headlines:
* A new XKCD comic is out
* "The Macbook pro 2016 October release date confirmed" from the University Herald
* A story from TechCrunch saying that the Boeing CEO says he's gonna beat SpaceX at something
* An article from PCWorld titled "Happy 25th once again to Linux"
* And (funnily enough) this story from TechCrunch titled "Not OK, Google"
I'm not being "brainwashed" here, i'm not letting google determine what i'm interested in, i'm not taking anything there blindly at face value, it's just a list of headlines that I can either click to view the article, or lookup at my own will (or in many cases lookup on HN or Reddit for some discussion about it). Every one of those i'm interested in in some fashion. I personally find it funny that you think it's unhealthy to have an "AI" "feed" you information, while most traditional news networks are much more of a "feed", but they don't tailor to any kind of personal preferences (what fox news decides to air, is what fox news watchers are going to watch). That to me is much more dangerous! At least in this case I can tell the system that I don't like this story (because it's blogspam, or it's incorrect, or it's just done in bad taste), and to not show me stories like this again.
>As you said, it's a choice, but don't pretend people totally know what's being done with the data.
No, and I don't pretend to know what is being done with the data, that's the point. I give them that data, and they do what they want with it, and in return I get all of the benefits I get. There's nothing stopping them from selling it, there's nothing stopping them from releasing it to the public, there's nothing stopping them from looking through it personally to find "bad" things. But I have "faith" (if you can call it that) that they won't. Because if they do, i'm done with them. And a lot more people would be as well.
>I hope there are age restrictions placed on this kind of thing.
There is, as with most things online it's "under 13 needs adult supervision". Funnily enough I've read that toddlers LOVE these things. It's much easier for a child to tell the TV to play Thomas the Tank Engine than it is for them to fumble around with a remote, or have access to a phone. It's actually becoming a really good way to let little kids be involved in computers and technology at a younger age, which I believe will be a major benefit in their lives (the jury is still out on that though).
What some of you don't seem to realize, (and this happens in EVERY SINGLE ONE of these threads) is that:
1) AI is not magic. Yes, we call it "AI", but you use words like "know" as if there is a conscious entity that "knows" something about you. The AI doesn't "know" anything. It's a computer.
I, however, want a future where an AI can tell me things like "Flights to Shenzhen are really cheap right now, and you have the discretionary income to afford a trip there. Here is a possible itinerary for you based on the types of things I know you are interested in. You could leave this Saturday and there is nothing on your calendar that you need to be at for the week."
"I noticed that you have been bicycling a lot lately, and based on the patterns of where you go, I think that the following bike trail would be interesting to you. The route is loaded up on your phone already."
The other thing: google is an advertising company. Yes, because I know this, I am able to take this into account when listening to google's suggestions. But here's the thing: I like being [well] advertised to. I have discretionary income, that is WHY I HAVE A JOB. I am going to spend that money on things. If there is an AI that is helping me find the perfect nexus of things I want and things that I can afford, that is a GOOD thing. That is helping me more efficiently spend the money that I got.
Yes, this stuff is subtle. Yes, this stuff is pervasive. No, we don't need yet another "2edgyforme" "if you aren't the customer you're the PRODUCT" article about Google.
It's clear Google wants to "own the home" and all their products were built to further this goal (rather than be useful themselves). This is why Google bought Nest for 12 jillion dollars. And it's why the iWatch failed and Google Glass failed - right now, these are niche products that barely have purpose.
Now this stuff may become integral to our lives, as depicted in so many sci-fi stories, but if they become embedded in our lives and are wholly owned by one huge company, that should be terrifying to everyone.
Here are some real world reasons why: a virus is installed on your Google box through your wifi, and now house robbers know everything about your schedule and habits. Your parent goes through your every personal action to make sure you aren't getting in trouble. A spouse uses the system to track your every movement and make sure you aren't cheating. And of course, the gov't has access to all of this data by default. Imagine being a famous celebrity with every action in your house known and accessible to any gov't peon with access and a bit of curiosity. This isn't some conspiracy theory, this is exactly the access Snowden had (and he was a contractor).
It isn't what these products are, it's the direction they represent: complete surveillance of every personal action, stored and owned by one monolithic corporation and the government. And not only is this sort of where we are heading, it's Google's clearly stated objective.
It reminds me of the 50s when plastics were going to revolutionize everything... which they did, but we melted off the ozone layer before realizing the consequences of slapping new technology across the world. Especially when the benefits are so minimal and the threats are so real - imagine McCarthy with the type of access and control these devices would provide if Google succeeds in pushing this across 80% of homes.
Guest at house party: "Ok google, show naked pictures of [host's ex-girlfriend]"
I find privacy anxiety to be much like electric car range anxiety. Once you have the product it's not an issue; it drops to zero, but debate on the internet is extremely hot and heavy right before widespread adoption kicks off. Enormous amounts of toxic anxiety and paranoia bleeding out all over stuff that in practice, after deployment, just doesn't matter. In other online venues I've been worried about causing heart attacks by suggesting my next car will be electric, and this topic is about the same here.
I have very conflicted feelings about this article.
Any time you do something big, that’s disruptive — Kindle, AWS — there will be critics. And there will be at least two kinds of critics. There will be well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion. And there will be the self-interested critics that have a vested interest in not liking what you are doing and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you want to see, always testing, is it possible they are right?
Amazon sells you stuff in exchange for money.* This is a clearly understood type of transaction that people understand.
Google gives you stuff in exchange for being able to "sell your eyeballs" to third parties. And, most people actually believe that Google sells your data to those third parties, even though that's not the case, which gets at the fact that this model is not as well-understood.
As far as hardware design:
Echo provides a clear and prominently-placed button (right on top), with an LED light indicating when it's muting the microphone; when the button is active, the entire indicator ring also lights up. This button has equal prominence with the button that can be used to manually activate Alexa. The existence of this button was highlighted when Amazon introduced Echo.
Google Home places the button on the back side of the device, when there is clearly a "front," as defined by the tilt of the top surface. The existence of this button was not highlighted in the introduction at I/O.
* Yes, Amazon also runs an advertising network. Most people don't really know this. And it's a very, very small part of their business.
I'm worried about who Google wants to sell this information to and what they want to do with it. I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.
We can have an ultra-smart AI that does everything for me without worrying about these things. I don't want to pay with my personal information, I want to pay with money. I want Google to stay out of my life.
> I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.
So you can't say "world hunger doesn't exist because I just had a large sandwich"; the issue is more global.
We are just not that evolved yet, and if near 100% aren't where you expect them to be, then you can't assume anything else.
Television was ok, too. I used to watch. All the time. TV was the glue that kept us together. Now it's the acid that tears us apart. I no longer use a television.
Google. I love your maps. Your directions. Your free storage. And earning my living never requires me to use you, Google. Just like my TV.
I do expect Google to become something that I no longer desire. Just like the TV. And I think Google won't be able to control or predict it either. Just like TV.
It would seem Windows 10 is setting Microsoft up for this. Google is following suit with its own hardware.
The central task is to infer what you want and help you achieve it; but further, your AI can ask you questions too, to work out all sorts of things subtly.
I think eventually we'll think of personal information as a commodity or "raw material" and regulate its extraction and trade as such.
But doing it offline holds no incentive for advertising companies.
Universities, home automation companies, etc. would probably be more incentivized to develop that.
So any advances in AI, if they come in 'always online' form, will come with strings attached. It is not your AI optimization software; it's somebody else's.
Where is the border between inferring what I want and deciding (for me) what I want? How about an artificially created tilt toward certain consumer or political brands in the process of inferring what I want?
I don't think there is an eventuality. We must fight hard now to change our society. We live in an era that determines what the future will look like. Politics is path-dependent, and wrong choices now can have consequences that last centuries.
Economic information asymmetry benefits the few large corporations that can leverage it over customers and competition. They will fight tooth and nail to prevent consumer regulation. Governments have additional agendas.
Oppression is not a theoretical idea and not only a historical problem: government mass-murder in the Philippines, the oppression of Muslims in Europe, of a large religious group in Turkey, of Tatars by Russia (in Ukraine's Crimean province), of so many people in Syria, of populations in all the oppressive countries in the world. The U.S. election could result in oppression of Muslims, Latinos, blacks and others; some U.S. cities already use 'predictive policing' to identify and harass private citizens. What will happen if Muslims become an open target? And don't forget anyone who has any interaction with Muslims. Such things have been going on since the dawn of humanity and unfortunately will continue.
The idea that Google and other commercial mass surveillance will not be used for these purposes is a dangerous, irresponsible fantasy; it's lazy, head-in-the-sand thinking, akin to climate change denial: We haven't died yet is the only argument. These systems are not and will not be kept out of government hands: Government already has broad access, as is well known (National Security Letters, NSA spying, Yahoo's recent revelation, etc.). Laws can be made at any time giving government more access, and they will in climate of oppression. Many obtain illicit access, as we know, from the NSA to foreign criminals to antagonistic nation-states. And it assumes that the companies want to deny access; inevitably, some CEO of AllYourDataCorp will support government surveillance and be prejudiced against Muslims or immigrants or blacks. Likely, at least one already is doing it.
IMHO, while it disrupts our plans for IT and wealth, it's absurd to think otherwise.
Moving to voice-to-text everything on Android just seems like a logical extension to advertise and sell data even more.
Whether it's ethical or legal doesn't seem to concern them at all when profits are in question.
You may not 'personally' need privacy or freedom at this point in your life, but to casually dismiss it out of hand and fail to consider its import for a functioning democratic society is beyond reckless. It's just one of those things you don't need until you do.
And thankfully individuals aren't in a position to trade that away unless they can write a new constitution and convince everyone to get on board.
All surveillance does is compromise your society in a fundamental way, and in this case just to add to Google's bottom line and ramp up Google's creepiness factor even more. That's a bad deal.
I want a startup that provides services like this but treats your personal location, correspondence, and behaviors like tax returns and credit card numbers. If we can achieve a good measure of safety and privacy in our messaging apps, we can do it for this sort of data.
Again, it might be neat, having a computer like on star trek, but what if you oppose your government? What if you oppose anything, and suddenly your toaster burns down your house, locks you out, locks you in, reports your every move?
Look at Manning, look at Snowden, look at Assange. They opposed, and now they get terrorized by the government and the software they once happily used. Look at how I will be treated right here by others.
Stop this mass psychosis.
Even without the manpower/big data/processing power of a big co I'm sure we could create something that's somewhat useful.
Ads have become utterly pervasive, and avoiding using Google's AI isn't going to protect you from them. My Samsung "Smart" TV has ads for Hulu built right into the operating system (despite my being a Hulu subscriber at the time). Windows 10 is basically one big advertisement (at least the consumer edition).
If I have to have ads blasted in my face all the time, I'll take Google's AI-driven ones that at least stand a chance of being less annoying.
There is no concept of even discussing that this might be a tradeoff or a shift in what is perceived as private. There is no consideration given to how we might still do these things that people want while protecting their data. There's no consideration for how people's lives are changed in different ways by this tech.
Nope. It's either a total gain or a total loss.
And that is the real problem here. People are applying their political bad habits to what should be a reasonable and sensitive discussion about the varying levels of tradeoffs we should be willing to give and what the net good we can extract from this technology.
A great example is street view. Street view ultimately has enabled extremely detailed and powerful navigation, complete with a ton of ways to do real time traffic detection. Most people using apps that benefit from this data would say that's a net good, and in general as the tech evolves and traffic distributes more efficiently then urban environments see a similar positive effect.
Of course, the tradeoff is that I can scan a snapshot of your street and if you were there playing football with your kid, walking your dog, or publicly exposing yourself then minus your face I'm going to be able to see all that.
What makes these kinds of issues even less clear is that street view enables self-driving car technology (we need the detailed and constantly updated nav systems for them). Self-driving car technology has the potential to totally transform some neighborhoods, has massive potential for assisting disabled people, and can completely change the way we ship goods and thus preserve oil and energy resources for generations to come. But it also has the potential to be a new way for the upper and rich classes of the world to completely cut out service industries and further alienate the economic middle and lower classes.
Why is this meaningful? Because if we don't talk about them then we can't help shape them. If we understand the implications as a society and demand commensurate good from these private industries then it can be an incredible boon to our societies. If we don't, then one of these extremist sides will win and all options for a middle ground where we get benefits and have tradeoffs will be excluded.
That's a terrible outcome.
I can see the day coming where this is their primary marketing slogan.
Unless, that is, you never buy any of that junk in the first place -- because like, who needs most, if any of it, anyway? -- and keep going on with your life. Which was humming along just fine before the IoT came along, after all.
I agree that in many respects the current corporate push for the Internet-Of-Things is mainly a wild-west style landgrab for self-serving integration into our daily lives.
However I don't think the IoT has to be this way.
I want all my IoT devices to only communicate to my home gateway, which would run open-source drivers for each device to provide the networking functionality. Problem solved. I don't know why this approach isn't getting more focus!
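A minimal sketch of what such a local-only gateway could look like. The device types and the plain-Python "driver" registry are invented stand-ins for real open-source drivers; the point is only that messages are handled on the LAN and never proxied to a vendor cloud:

```python
# Hypothetical local IoT gateway: each device type has a locally
# installed driver, and unknown devices are refused rather than
# forwarded to any external service.

DRIVERS = {}

def driver(device_type):
    """Register a local driver function for a device type."""
    def register(fn):
        DRIVERS[device_type] = fn
        return fn
    return register

@driver("thermostat")
def handle_thermostat(msg):
    # A driver is just local code: no telemetry leaves the house.
    return {"setpoint": msg.get("setpoint", 20)}

@driver("light")
def handle_light(msg):
    return {"state": "on" if msg.get("on") else "off"}

def gateway(device_type, msg):
    """Dispatch a device message to its local driver.

    Unknown device types raise instead of being proxied upstream,
    which is the whole privacy property of this design."""
    if device_type not in DRIVERS:
        raise ValueError(f"no local driver for {device_type!r}; refusing to forward")
    return DRIVERS[device_type](msg)
```

For example, `gateway("light", {"on": True})` returns `{"state": "on"}` without any network traffic at all.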
To remain competitive people will adopt new technologies. Google assistant/cloud, self driving cars, CRISPR. Consider what people gain and lose with each new technology, such as the ability to drive, a bio-engineered kill switch, or control their own hardware (windows 10).
All new technologies can be compromised. The ability to process the extreme amounts of data we are generating is already at previously unimaginable levels. Political dissidents or those who interfere with corporate interests can be identified and silenced with false evidence (pedophilia!); media control; and personally targeted DoS of finances, cloud services, etc.
This is the ability to control the world. The corporate world is disincentivized from doing anything about it, and governments don't really get it as evidenced by their hoarding of zero-days .
There's a war going on right now. It's terrifying, and awesome. Throw in some global climate change and our next 50 years are going to get interesting.
When the end comes I'll be that crotchety old guy who knows how to DRIVE A CAR and use a general purpose computer.
Hack the planet!
Here is what was possible in 2011: https://news.ycombinator.com/item?id=12528544
Ok Google, order new toilet paper -- order is routed to any ecommerce provider which outbids everyone else to fulfill the order.
Alexa, order new toilet paper -- order copies previous toilet paper order and goes to merchant with lowest advertised price that reports to have that specific product.
Hey Siri, order new toilet paper -- ?
It's interesting you assume that Google will route you to the lowest bidder. When I search for "New toilet paper" on Google right now, the top 6 results are advertisements. Why would they want to anger their customers and allow you to ignore them?
Also, how do they plan to make money besides the initial cost of the gadget? Can they push ads while driving? That would be too intrusive. Or is this supposed to be based on a monthly payment, or a tax? Google for government! WALL-E might be needed to clean up the mess after them.
I didn't know that Sting said in 1983 that his song is really a nasty song about surveillance; at least they have an anthem for promotion purposes.
Now I really don't think that personal assistants are going to be a success. They do descriptive modelling based on what you do; there is no way to evaluate if the suggestions are any good. Without such an evaluation they can't do reinforcement learning. Also, they might suck in too much data, which would make it harder to make meaningful suggestions.
I understand Apple and the EFF are staunchly against merging products databases involving the same user's data, but for me this is an essential feature of the google ecosystem. I can ask for traffic and have directions appear on my phone while driving, play movies on the tv that i am looking at instead of my phone, audibly alert me to meetings while at home or work, and turn the lights on and off in the place that I am.
I don't think they are misleading people; the mute button pretty strongly implies the duality that you can't hear and un-hear things post hoc. In addition, they don't hide the fact that you are talking to a computer at a company by obfuscating it with some quasi-futuristically named caricature.
As is often the case with these articles, "always listening" is the misleading part: an embedded keyword processor is listening for keywords, and only if they match the phrase "ok google" is the audio sent to Google's servers. Otherwise it just sits there, sharing nothing.
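That gating logic can be sketched in a few lines. This is a toy with transcribed text frames standing in for audio buffers, not Google's actual implementation; the point is only that frames before the hotword never leave the device:

```python
# Toy sketch of wake-word gating: audio stays on-device until a local
# matcher detects the hotword; only the utterance that follows a match
# is collected for upload. Names here are illustrative.

HOTWORD = "ok google"

def process_stream(transcribed_frames):
    """Frames before the hotword are discarded on-device; the frame
    following a hotword match is the only one 'sent' upstream."""
    uploaded = []
    armed = False
    for frame in transcribed_frames:
        if armed:
            uploaded.append(frame)   # the only data that leaves the device
            armed = False
        elif frame.lower() == HOTWORD:
            armed = True             # local match: open the gate briefly
        # else: frame is dropped immediately, never stored or transmitted
    return uploaded
```

With input like `["chatting", "ok google", "what's the weather"]`, only the final query would be uploaded; the chatter before the hotword is never retained.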
Edit: I've made this point before, but your data is a currency. Spend it wisely (or even not at all). It's up to you.
But we've seen that Google is happy to turn over massive amounts of customer data to government without a warrant and without alerting customers to the practice, which makes the technology seem ominous.
First the GPS, then the microphone, then the camera, accelerometers, 3D touch sensors, etc. Gait, affect, and all sorts of factors will be able to predict criminal behavior before it happens.
Let's hope the next generation of tech giants will take customer privacy and freedom seriously and avoid the dark patterns and privacy violations of the current era.
Only now, when it's likely too late, can we actually get a glimpse of the sort of Orwellian dystopia that so many have warned about in decades past.
All data generated by a user is encrypted and stored in the cloud with the decryption keys on the user's device. This way, the service provider (eg Google) can't read your data. The major advantage of this approach is that the user is in complete control of the data. The drawback is that service providers and AI systems will be starved of data that enable targeted ads/recommendations.
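That model (ciphertext in the cloud, keys only on the device) can be illustrated with a toy scheme. The XOR-with-hash keystream below is for illustration only and is not secure; a real system should use a vetted authenticated-encryption library such as libsodium:

```python
# Toy client-side encryption: the device holds `key`, the cloud only
# ever stores the opaque blob returned by encrypt(). NOT real crypto.
import hashlib
import os

def keystream(key, nonce, length):
    """Derive a pseudo-random pad by hashing key+nonce+counter (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """Return nonce + ciphertext; this blob is what the provider stores."""
    nonce = os.urandom(16)
    pad = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, pad))

def decrypt(key, blob):
    """Only the key holder (the user's device) can recover the plaintext."""
    nonce, ct = blob[:16], blob[16:]
    pad = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, pad))
```

The trade-off in the paragraph above falls straight out of this design: since the provider sees only `encrypt(key, data)`, it has nothing to mine for targeted ads or recommendations.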
A good middle ground might be to offer users an option: 1) give us your data, or 2) pay up and we will not collect any data.
Thinking deeply about the state of the internet, I think we have to move towards a model where users pay for the services they use if privacy is a concern. As it stands, a lot of companies offer free services in exchange for our data, which is monetized through ads.
The hard parts of the Assistant, Alexa, etc., are basically everything else.
I don't believe trying to take some big stand against it will work. Look at all those who took a stand against the 2nd Iraq war -- they were drowned out, and now everyone thinks the opposite. Culture/society always pushes in a particular direction until there's a very big, disastrous reason to think otherwise. Until something clearly really, really bad happens, this is the track we're on, like it or not.
Or have you only comfort, and the lust for comfort, that stealthy thing that enters the house a guest, and becomes a host, and then a master?
Ay, and it becomes a tamer, and with hook and scourge makes puppets of your larger desires.
Though its hands are silken, its heart is of iron.
It lulls you to sleep only to stand by your bed and jeer at the dignity of the flesh.
It makes mock of your sound senses, and lays them in thistledown like fragile vessels.
Verily the lust for comfort murders the passion of the soul, and then walks grinning in the funeral.
I do hate being tracked, but I have slowly started to like the convenience of it. With all my information on Facebook, LinkedIn, Instagram, and everything else, my privacy has really gone down. If my Nexus 5X can save me time, I will sacrifice some privacy.
Not a problem if it ran on an appliance in my home, disconnected from all the other appliances in other people's homes. That would be a truly personal Google.
Could you give examples of how that can be achieved and how the ToS goes about stating that they are doing this?
I wish I could share your confidence about that. It seems to me we've seen corporate data leaks, deceptive practices, huge hack attacks, etc., and all that happens is the corporations get a temporary PR black eye -- and then spring back into action as powerfully as before.
In the case of Google, if it came out that, say, Russian hackers had breached a bunch of gmail accounts, how many customers would just up and walk away from Google? And what company could even come close to filling the void to replace google for all of us consumers?
Like I say, I wanna believe what you believe. But it just doesn't seem like that's what happens in the real marketplace.
That's not company ending, but it should get the attention of directors.
That said, I agree with the OP's takeaway: people should be asking questions. I mean, people should have been asking these questions long ago, even as just search users questioning how Google manages to return such geospatially relevant results. But most people don't even stop to think about it, as that kind of thing is just taken for granted as the thing computers just do.
Maybe Google's data and AI in the form of a physical, listening bot (I don't know many people who use OK Google on their phones) will be the thing that clues people in. I'm mostly comfortable with Google's role in my life (though not comfortable enough to switch to Android just yet), but I'm aware of what it knows about me. If AI is to have a trusted role in our lives and society, people in general need to at least reach the awareness that the OP evinces, if not her skepticism.
(There are, of course, situations in which the actual existence or not of specific data is what matters, but I think those are less relevant to the success of something like Google Assistant than the perception of privacy -- and that perception is important, regardless of the underlying data.)
Would it be different? My gut says this article would have had a different title.
Obviously advertising is more important to Google than it is to Apple but at what point does Apple become an advertising company and Google a product company? Is there a revenue threshold? A branding threshold?
Edit: I'm not saying Google isn't an advertising company. I'm asking what makes Google (but not Apple) an advertising company.
90.4% of revenue from advertising for Google.
They're looking at search ads for apps in the app store, but that's pretty limited to just the store.
If Apple made 50% of its revenue from advertising, would it still be a product company?
Is this distinction about the size of the business unit?
For some companies ads are supplemental, for some lifeblood.
Is Apple an advertising company? Why or why not?
Is Rooster Teeth (one of my favorite creator groups) an advertising company? They make a large percentage of their revenue from selling advertising but they also make video content.
So what is Google then? As of 2015 Q4, ~90% of revenue comes from ads and it's trending down vs. other sources.
Is Apple also an advertising company?
How about the local minor league hockey team? Are they an advertising company?
The Internet and the occasional use of a stationary landline is all many of us need.
The constraint is going to kill Google and Apple, because a future competitor is going to manage to offer service without requiring the data in raw form (agent-based computation, in say 15 years' time, is one example of how), and once that exists it will become the standard. Google and Apple will be stuck because their stakeholders include parties that require the raw form even if Google/Apple themselves did not. That opens up a good line of attack on a previously impenetrable business model.
The legal implications of visiting somebody's house should also be considered. Did I sign anything saying Google or Apple could use my data? No. That opens the door to class-action lawsuits, in some US states at least, unless they are smart enough to dodge this somehow -- perhaps with a contextual local filter that detects new voices and asks their permission -- but I doubt it; they probably don't have the incentive to look at the ugly cases.
Apple's approach to this is already different than Google's and has (arguably) hampered them a bit, but that might pay off long term and give them an opportunity to evolve, change, and adapt in other ways too.
I see no particular reason either company couldn't change their tactics, regardless of shareholders. Granted, evolving with technology is not necessarily easy or without its challenges, but if staying with your current model is going to lose you customers, and with them your data and therefore your income, why would you stick with it? If no one cares enough to stop sending them their data in the face of better, more privacy-sensible alternatives, then it seems their model would still work.
I was talking about the government putting restrictions on the kinds of technology Google and Apple can invest in because some innovations do not suit their interests.
I'm pretty sure that if this was going to happen, it'd already have happened. Next time you walk into any store, take a look around for discreet black plastic bubbles.
As opposed to all the other companies out there I guess?
But there would have to be current and actual improvements to my life before I'll start considering handing out my personal data.
Can you imagine going to burglarize somebody only to find the cops waiting at the house because they determined there was a high probability of that house being robbed? Maybe they'll even wait inside the house, so as soon as you kick the door down, you find yourself in a living room full of cops, all with body cameras recording your break-in and guns trained on the door.
And speaking of cameras... even without the cops being physically present, if you have a tight enough network of cameras, all a victim would have to do is report being robbed and the police can simply trace the burglar's movements across the camera network all the way back to the front door of their hideout.
People want privacy. They just feel helpless and powerless to stop hemorrhaging personal data.
N.B.: I don't work for Google.