Totally OK for me, Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.
So no, it is not just my problem or your problem, it's everyone's.
I think this example serves both our points. To your point, it's totally leaked this other person's info into my "google world" because I'm on gmail. On the other hand, that person is leaking his information directly to me just because of typos when he fills out online forms. Perfect privacy requires a lot of vigilance in a digital world, with or without google/gmail/hotmail/yahoo/etc.
Fun story there. Because Google's internal privacy safeguards are so strict, the people working on features like that can't look for example emails to train their ML models with.
They can only look at emails that were explicitly sent to them in order to improve the feature (and almost no one forwards along positive or negative examples). What they can do across the email corpus is run jobs that return aggregate stats, where each stat must be coarse enough that it is infeasible to trace back to original users (often 100k+ users per data point).
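As a toy illustration of that kind of minimum-group-size rule (the function and names here are hypothetical; the 100k threshold is just the figure mentioned above):

```python
from collections import Counter

# Hypothetical sketch: only release a statistic if it aggregates over at
# least `min_users` distinct users, so no data point can be traced back
# to individuals.
MIN_USERS = 100_000

def safe_aggregate(feature_by_user, min_users=MIN_USERS):
    """Count users per feature value, dropping any bucket that is
    too small to release."""
    counts = Counter(feature_by_user.values())
    return {value: n for value, n in counts.items() if n >= min_users}
```

Under a rule like this, a bucket with only a handful of users never leaves the job, which is exactly why the engineers end up flying blind: the fine-grained failures they'd want to inspect are the ones filtered out.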
So, AFAIK, training & testing models under these safeguards is more or less done blind. Build a model with the few examples you do have, and then run it against the corpus. If you see numbers change, you have no idea if that's good or bad, since you can't actually inspect the run.
(at least, this is the way it was a few years ago)
One time, when I marked a bunch of incorrectly-classified emails as "not spam", the Gmail web UI asked me if I wanted to send these emails to the Gmail spam team. Was this what you meant, or something else?
(That's different than just marking an email as spam, though)
It's not clear whether these AI models have much incentive to correct anything. If 99% of people with attributes x, y, and z are bad candidates for a job, will you even get an interview? Is there any attempt to account for the fact that attribute x is something you were born with? Or that you are actually in the 1% and really are a good candidate? Or that you don't actually have attribute y, and it was just inferred from something else or some kind of mixup like an email address typo?
Machine learning is also very opaque.
If you verify the race field, then now you're in the business of enforcing racial definitions.
Why would that matter to the company? People are born with stupidity.
I receive a ridiculous amount of other people's email, from serious things like email account resets to banking info. When the Ashley Madison hack happened, my email address was in there multiple times! Imagine if my then-partner had bothered to look; what a mess that would be.
I've called American Express a couple of times to report errant emails with account info ending up in my email. THEY didn't care that one of their customers had an issue, they thought it was a great time to have me fork over MY info "to check".
From time to time I have had messages relating to a government planning committee as one of the members got the domain of someone's departmental email account wrong and the messages come instead to the domain I administer.
In another instance I was getting emails about an account someone created with American Express using my email address. I sent multiple emails to their customer support to get them to stop sending financial information to the wrong person with no results. In this I also found it difficult to even figure out what agency to report them to for failing to take action when notified. Eventually I took the time to call them. It took around 20 minutes (not counting time on hold) of talking to multiple people to get them to remove the email address from the account. This included them asking me several times for my social security number - which I flatly refused to provide since I had zero business relationship with them. This refusal actually seemed to confuse them.
Many of the people I know who have very common gmail addresses have set up canned replies to let senders know they've reached the wrong person.
I often get some misdirected emails because of a very banal name in my country.
When the email contains a thread history, I sometimes notice that the address was simply corrupted by a recipient, such as numbers getting dropped from the genuine address in their reply.
I guess some systems can't correctly handle numbers or other characters in email addresses.
The former doesn't fix the issue at all, and the latter is unworkable because the guy reliably giving out the wrong email address will absolutely not remember his public key.
Have the browser suggest the key IDs from `gpg --list-secret-keys`.
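A rough sketch of how such a browser integration might pull those key IDs out of gpg's machine-readable output (`gpg --list-secret-keys --with-colons`, where field 5 of each `sec` record is the key ID; the sample output in the usage below is illustrative):

```python
def secret_key_ids(colon_output: str) -> list[str]:
    """Extract key IDs from `gpg --list-secret-keys --with-colons`
    output. Each secret key is a 'sec' record; the key ID is the
    fifth colon-separated field."""
    ids = []
    for line in colon_output.splitlines():
        fields = line.split(":")
        if fields and fields[0] == "sec":
            ids.append(fields[4])
    return ids
```

The browser (or an extension) could then offer these IDs as autofill suggestions next to the email field.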
I was somewhat radical about privacy in the late 90s (only person I knew that read every EULA) and am still a supporter of the EFF but I don't really understand the issue here.
When you visit your friend's house, the data is not under their control, it is under Google's control.
Therefore, Google has a WAY bigger responsibility than most people realize, once they decided to collect this data.
Ultimately, whenever you visit any place, business or home, the data is under the control of the owner and anyone they delegate those rights to.
Some interesting scenarios to consider: If I visit a friend's house, and I start getting targeted ads for a service I didn't subscribe to without prior consent or my knowledge, can I sue her/him? What about a scenario where some service collects my data, said service is hacked, and someone commits identity theft on me, who is liable for damages? Do I need my buddy to sign a waiver when he visits to play some Xbox for a bit?
But it was your friend's decision to delegate that control. So, Google having that control is still a consequence of your friend's control.
With LTE, I don't need wifi.
I would imagine if you walk into a home with google's AI doodads all over the place, you're gonna be picked up.
Rumor is that even when your phone has been turned off, it can be turned back on remotely.
I have a couple of cameras, and at least one visitor has been uncomfortable with their presence, despite the fact they're not sending the data to a third party.
A business is typically located in a public space, where there is no expectation of privacy. In contrast, a home is by definition a private space where there is a strong expectation of privacy.
Aren't there some wiretapping laws around this sort of thing?
(IANAL, YMMV, etc.)
Tl;dr there's privacy in decentralization.
I'm very optimistic about Google making my life better in lots of little ways. I have virtually no concerns about my openness to Google causing me trouble.
I think that's a common goal and widespread belief and find its implication of having given up on "bigger ways" troubling.
Our industry was going to change the world. Rewrite the cultural rules of socializing. Rebuild the economy in a new form. Encourage major cultural and political shifts by empowering the little guy. Bring knowledge to the ignorant, companionship to the lonely, power to the powerless, jobs to the jobless.
What we have now is, well, maybe, with some luck, timidly, sometimes google can analyze the last time I went to a gas station, the number of miles I've driven at various speeds since then, the time of my next appointment, then it can adjust my google now card to encourage me to leave home earlier so I'll have time to fill up the tank and it'll find the cheapest advertised price along the way. That's nice ... but where's my revolution?
Even worse in context of the article, OK well say I have to give up all privacy and go hard core 1984 telescreen big brother is always watching, to save endangered species and house the homeless and feed the starving and bring peace to the victimized. Well OK I'll think about it. Oh wait, we don't get any of that, all we're offered is slightly better appointment scheduling. Eh, no thanks.
Nothing is ever really new, and this era is likely similar to pre-quantum-era physics around 1890, when nothing was left to discover other than adding a few more decimal places here and there. The memes and speech patterns are all the same (although I'm not quite that old).
For centuries, people have navigated using stars and maps, yet now you have turn by turn navigation in your pocket with real time traffic and crowd sourced traffic incidents. You can reach most of the world population instantly by dialing a few numbers or even sending them a text message or mms, no matter where they are.
In the context of the article: a couple of years ago, Google Now told me I had to leave for a meetup in my calendar that I had completely forgotten about, gave me transit directions for a part of the city I'd never been to, and got me there exactly at the meetup start time.
While that all seems like modern conveniences nowadays, even thirty years ago if you'd have told people that you couldn't get lost anywhere in the world and could get ahold of just about anyone at anytime instantly, they'd think you'd be talking about science fiction, not today's reality.
Example: "Intuit’s TurboTax stores highly detailed financial data for millions of users who import their W2s, their banking data, info about their mortgages and more. Right now, all of this data is locked into TurboTax, but the company is now thinking about how it can do more with it by giving its users the option to share this data with reputable third parties." ... https://techcrunch.com/2016/09/22/intuit-wants-to-turn-turbo...
True, I don't mind Google today, but what about tomorrow? What about N years from now when they have failed to hit their financial targets 2 years running. Will that company have the same set of standards as the one today?
The reality is that once you've given up your privacy there is no getting it back.
Google might handle the data it collects better than other companies, but it would do much better still if it didn't collect any personal data at all.
Hands off my data, Google!
Going further, given that an encrypted email to Gmail will simply be decrypted and then available to Gmail, include in the protocol authorized agents of the recipient (via both whitelist and blacklist means). So, if you are hosting your own email but the intended recipient is expected not to be hosting their own, the sender can blacklist "agents" such as Gmail and Yahoo! Mail, or blacklist all except those chosen to be white-listed, such as Proton Mail.
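The sender-side policy described here could be as small as this (hypothetical names; a real implementation would have to resolve which host actually serves the recipient's mailbox, e.g. via MX records):

```python
def agent_allowed(recipient_host, whitelist=None, blacklist=None):
    """Decide whether the recipient's mail host is an acceptable
    'agent' to see the decrypted message. A whitelist, if given,
    takes precedence; otherwise anything not blacklisted passes."""
    host = recipient_host.lower()
    if whitelist is not None:
        return host in whitelist
    return host not in (blacklist or set())
```

If the check fails, the client would refuse to send (or warn) rather than hand the big webmail providers a message they can read in the clear.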
For almost all users getting a message from someone you personally know containing a message that they would legitimately send would be sufficient to trust the referenced key. If you are a little paranoid you could ask them over the phone or in person to send a specific message that you would then trust, or just ask for the URL itself. I believe that in-person key exchanges utilizing large trust networks are overkill for the vast majority of people sending everyday communications.
Make it so simple as 1) one time key creation and copying to each device 2) one time trust per other person that is completely streamlined by the software. If the gmail team implemented this for example other email providers would soon do the same and it would spread like wildfire. Very quickly a huge amount of email would be encrypted. Of course this would require an open design that anyone can implement.
Maybe there's a fundamental flaw with this idea that I'm not seeing. If so, please say so because otherwise I'm going to just be disappointed in five years when email is still 99% unencrypted.
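The "one-time trust per other person" step above could be as simple as trust-on-first-use, the way SSH handles `known_hosts` (a minimal sketch; persistence and the actual key verification are omitted, and all names are illustrative):

```python
class KeyStore:
    """Trust-on-first-use: remember the first key fingerprint seen
    for each address, and flag any later change for manual review."""

    def __init__(self):
        self.trusted = {}  # email address -> key fingerprint

    def check(self, email, fingerprint):
        known = self.trusted.get(email)
        if known is None:
            # First contact: trust and remember this key.
            self.trusted[email] = fingerprint
            return "trusted-first-use"
        if known == fingerprint:
            return "ok"
        # A changed key is the one event that needs user attention.
        return "key-changed"
```

The point is that the user only ever sees a prompt in the rare "key-changed" case; everything else is silent, which is what would let this spread without the usual PGP friction.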
Granted, there may be a place for regulations to help us restrict what companies are able to do (perhaps making it easier for you to identify a region that is being recorded, right?), but at some point society can't help the fact that you'd prefer if machines were unaware of your existence. That's just something you have to solve for yourself.
GPG doesn't hide that one is emailing, or when, or with whom; it only hides content.
Personally, I would have downvoted you because your original point seemed to condescendingly assume something about a large proportion of the community and their intentions - then you assume again that it's "The Google People" (and only them)?
Unless you can show that your downvotes came because of a specific reason, and from specific people...
I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.
Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.
You could say that Plato wanted us to make easy things simple (link for distinction: https://www.infoq.com/presentations/Simple-Made-Easy).
I believe this to be a move in the opposite direction. We should have a care.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain tasks more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available for a wider population rather than rich people.
In your metaphor, you are implicitly paying the secretary, so the secretary is incentivized to maintain your interests.
How much have you paid Google for its free services?
Your metaphor is inapplicable. You don't have a secretary telling you these things; you have a salesman trying to sell you things, and the salesman is getting smarter every day while you aren't. Not the same thing at all.
It seems to be a theme here today... a company can't serve both advertisers and customers. In the end, one of them has to win, and given the monetary flows, it's not even remotely a contest which it will be. https://news.ycombinator.com/item?id=12644507
It's funny how bad of a stigma ads have gotten, but at the core, if you think of it, it's not necessarily a bad thing. Think of a friend recommending you a restaurant, a new game to play, a movie to go watch. In that case you'll be super interested, but now if this AI who probably knows your taste better than your friend suggests you something, you are instantly turned off and annoyed.
I think the root cause of this is that there are so many mediocre ads out there that they ruin it for everyone. Your mind just blindly blocks all ads now.
When you aren't paying anything for something of value, YOU are the product.
No, that would be slavery, which is illegal.
Google is selling advertisers the advertising space on various channels that you provide in exchange for Google services.
> When you aren't paying anything for something of value, YOU are the product.
No, when you aren't paying money for something of value, you are probably paying something else of value for it; often, something that the person with which you are trading is then selling for money, making you a supplier of an input to the good or service they are selling for money.
And it's not a simple product like glasses where you pay with money and then they improve your vision. It's a product which goes far beyond your understanding and for which you don't pay money.
Google isn't interested in making your life better. What they are interested in is getting you to believe that they want to make your life better and to then recommend going to that bar, because the bar owner has given Google money to advertise for the bar.
Yes, you might actually like that bar, but Google isn't going to recommend going there in intervals which are beneficial to you. They'd rather have you go there a few too many times. Because that's what makes them money. It's not improving your life, which makes them money. Their AI will always work against you, whenever it can without you noticing.
First, there is a way to tell it to not do that. With Google Now, you simply tap the menu and say "No more notifications like this". With the assistant, you will probably be able to ask directly.
Second, let's be honest, humans fail pretty often too, so that's just a weak argument.
Lastly, I think it's unfair to dismiss a new technology just because it could maybe fail, without having even tried it.
The awful success cases are far more interesting than the awful failure cases.
Your noose example is pretty contrived, however.
How about sleeping pills? Opiates? Local extortionist cult?
There's not only the "morally reprehensible" metric ("Don't be evil"); there's also the "absolute PR catastrophe" metric that printing such an ad for a rope would mean.
I think the real issue is the casual deception which you just fell for: It isn't "your" electronic secretary, and the thing it just did might actually be a "good job" from the perspective of those who control it.
I'm not saying we shouldn't use AIs. We should, however, think about how we use them.
To build on your example, what are the dangers of having a personal secretary on the payroll of anyone but you?
What I am expecting from this is a super devious filter bubble - because that's how you make money. Google's old slogan "Don't be evil" is long gone. "For a greater good" might be more on point.
What does the Google Assistant help me do more efficiently? In all honesty, I can't figure it out. I don't need or want a secretary, and I can do written planning for myself.
I need less paperwork and fewer web forms and identities, but the Google Assistant only promises more of that crap.
I'm never buying one. It's a sacrifice of privacy for zero to marginal gains in convenience.
If you do not care what I say, why even reply?
Google is designed to sell ads, and subtly influence your behavior towards the most profitable results. Please do not confuse a fact-based tool with an ad generator.
This is the very common theory that a company will (shadily) try to offer you a worse product to make more profit. It fails to account for competing companies that would jump on that opportunity to offer their better product, and get the market share.
But what's funny here is that the suggested alternative is to not get any product at all. As in: "Poor OP, didn't realize that it wasn't really him who was enjoying that burger he was enjoying."
Maybe Play Music is the best thing. Maybe it is not. Neither of us can answer that. But if a definitively better product comes along it will have no way to make a foothold because Google is still pushing everyone to their own product, from their other product (Search), and even when people try your product, if they use Google's other products, they'll tend to stick to other Google products.
Honestly, the worst problem with companies like Google is vertical integration. The ability to provide a wide product line where you integrate best with other products your own company makes has an incredibly chilling effect on competition, and therefore, innovation.
And if your theory that companies prioritizing results for profit would lose to companies that always prefer the best products, why is DuckDuckGo still in what... fourth or fifth place?
You'd need to argue that DuckDuckGo's search results are better; I don't think they are. That's what made Google first among many competing search engines, before there was even a clear business model in it. Today the incentive to outperform is bigger.
If a product Y that is definitely better than X comes along, and only Google Search fails to rank it higher, people will start thinking "I'd rather search on Bing, as it finds better products in this category".
"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"
Other people's happiness metrics work differently, and all popular web services are popular precisely because they satisfy the unconscious desires of the majority of people.
I wonder what is then causing inefficiency when we read a restaurant's menu and can't decide what we will have.
I'm with those who think we make choices and decisions far less often than we think, but that we still do make them.
this is like absolutely full on plugged into the matrix world. and we're living right in it.
these guys are like the ones who've taken the red pill, and gone on to find out how far the rabbit hole goes.
(edit: i'm even more intrigued by the possibility that the future is not just the matrix singularity, but an oligopoly of several large singularities, all fighting to plug us in)
We already see what happens when people's decision-making is coloured by mass media advertising. An obese population trapped by debts taken out to fuel consumption.
It is in other people's best interests for you to work like a slave, be addicted to unhealthy habits & run up vast debts in order to buy their products.
We keep allowing those with power to distort the markets gaining themselves more money and more power at the expense of the little guy. I don't see any reason why AI in the service of the powerful will do anything but accelerate that.
But I think it would be a problem if every Monday and Thursday night Google Now started providing information about AA meetings in the area, instead of bar information. It's up to the user to make the choice, Google Now just detects trends and then displays information based on those trends.
I go to the gym every Monday, Tuesday, Thursday, and Friday morning. And each of those mornings Google Now tells me how many minutes it will take me to get to the gym from my current location. Should Google Now start giving me directions to the nearest breakfast place instead? No, not unless that starts becoming my pattern.
Google may not have a responsibility to be a good friend, but personally I'd prefer not to have a bad friend always following me around, thus I'm a little less excited about this feature.
> Should Google Now start giving me directions to the nearest breakfast place instead?
That may depend on how much Waffle House pays for advertising, and that is the problem.
If you replace "AI" with "marketing" would you still make that statement?
should I just be an actor playing through a set itinerary of vacations and movies and burgers and relationships? maybe you think it's that way already, except less perfect than it might be, but that's a pretty frightening notion to me.
My culture, education and skills limit what work I can do.
Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?
I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.
I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.
Google will let me know that the things I prefer back home have equivalents nearby.
Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.
I'm curious why you think this is bad. I don't necessarily think it is good, but I also don't necessarily think it is actually happening.
Which news-sources are you going to learn about?
Which news-sources are you for some reason very unlikely to encounter?
Now apply a real-time AI filter-bubble, able to also include government policies in its decision-making, onto those questions.
I believe the most important thing in life is thinking. I believe a key element of thinking is looking at "easy stuff", the stuff we just live with every day and don't think about, and for some reason be forced to think about it and make it simple.
Take the Snowden leak. We lived a nice life being the good guys, and that kind of surveillance was publicly thought of as conspiracy theory. Suddenly we were forced to look at what was going on. How much of it are we okay with? On the grounds of what principles and tradeoffs? This is all very unpleasant, but we're all better off for facing those questions and working towards new principles. We take a chaotic gruel of cons and pros, and try to hammer them into a few simple principles our societies may function by. For instance, the separation of power into three branches has served us well.
I fear that we end up in a world where raising such unpleasant questions becomes almost impossible - and we'll never even notice. Not because of AI (I believe AI to be inevitable and fascinating) but because of the way AI is used.
Living a life assisted by an AI, made and paid for by someone else, seems like the epitome of naivete to me.
Maybe the illusion is that it was a choice . . .
As an example, the old weather widget from Google's "News & Weather" was replaced by Google Now. That provided a similar experience for some time but then stopped working after another update that required search history to be enabled and/or some other setting in privacy control.
Also, a system update integrated Google into the launcher (Moto G line of phones). I have since replaced the launcher, browser, and search app (all with open-source replacements), and the weather app (with a paid service). Convenience has suffered...
At that point I turned it on but deleted* the search history each day, until such point as they changed the delete controls to be more of a nuisance.
I now use DuckDuckGo and an iPhone instead.
* Or so their site claims.
Your location history tracking is paused.
It can probably save them from a legal mess if they 'resume' it in future updates.
The more specific work I need to go through to set up my privacy, the less inclined I am to do it. If I didn't think I was able to be manipulated psychologically in this way, well, I wouldn't worry about advertising at all! If I were to ever do something politically dissident/personally embarrassing on the internet (not that I ever have of course) I'd go to the trouble of ensuring encryption and being hard to track, but I think it's important that I'm able to say to Google "Hey, I'm cool with you telling me when I should check off work to hit the bar, but it's super weird that you know what I should get my Mom for Mother's Day."
Of course, the simplest way of making a system that's both fine-grained and intuitive might involve... more AI, so I'm not sure how to crack that issue.
This is by design, so that the majority of users are confused and leave the defaults as is, enabling Google to do whatever they like.
I never thought of that before. But, what a subtle way for Google to dissuade people from using a tool that could impact their revenue.
And it annoys me that on maps, when you turn off all the spying capabilities there's no fallback to local history. You either share it with us or you get none.
> ... it was creepy as all hell that the people [...] knew enough about me
Speaking of which, anyone following today's stories about the Yahoo email scandal, the pressure on the folks who own Signal, and recent litigation from Microsoft against government gag orders?
But, let's go back to talking about none of us have free will and talking about how clever Now is.
GPS navigation devices with much less storage than a phone have been more than capable of what Google Maps offers for a long time. There's essentially no reason for it to do anything with the Internet except getting map updates.
I use Google Maps every single day to get to and from work, simply because it knows how to avoid traffic. 10% of the time it saves me half an hour on my commute.
1- There is no way to set your privacy level.
2- Things that Google/Siri/Alexa know about you are not limited to the name of the bar you frequent. They know much more about you. And you don't know what they know. The sky is the limit here.
3- Things that they know are not limited to you personally. They know about you, your family, your friends and all their interactions. They know very much about the whole society.
2 - My point is that I personally am OK with Google's AI knowing more about me. I respect that others aren't. I'm not naive in my acceptance.
3 - I don't really have a response here.
The privacy control where I disable location tracking and half a year later when I look in Google Dashboard I see months of travel history?
I respect that others aren't. I'm not naive in my acceptance.
So what do you do in a situation where your use of Google's data collection also affects people who do mind it? I would not be comfortable visiting a friend with an always-listening device like Alexa or Google's equivalent.
I nuked my paid Google Apps account a couple of months ago. I had enough of their total disrespect for privacy. E.g. conversations that I had in Google Mail (which is protected by the Google Apps agreement) were used for suggestions, etc. in Google+ (which is not covered by the Google Apps agreement and uses data for targeted advertising).
I'd turn it off if/when they ask. I don't think that's unreasonable in the least. I'm not responsible for enforcing everyone's privacy preference, but I also respect them and will accommodate guests in my house.
It could have been a visit to a website.
You can monitor the Alexa's traffic and see that it only sends data when you ask it to do something, and furthermore, Amazon gives you a log of everything you've said to it and it has recorded.
Others may be reviews left, reviews voted on, prime video consumed, audio/video/book samples consumed, kindle activity, how long you spend on a product page, how you scroll on it, the breadcrumb of how you got there, and surely dozens more.
My impression also is that most early adopters of this kind of technology are younger people. (again: mostly)
So this brings up an interesting question about the future. As the young early adopters age, what will happen?
a) their privacy thresholds will also increase and they will have an "oh holy crap" moment in the future, where, as a middle-aged or older person who has lived a now much richer and more problem-laden life, they will realize that google (and/or other co's) has what they now consider too much personal information about them,
b) they will keep their young-ish privacy thresholds as older people, and in general, across society, people will have lower thresholds than exist nowadays. In other words the world will change.
My money is on a)
My impression (in the main) is that younger and older people have different views on privacy. Older folk might be creeped out by Google knowing their schedule, but okay with the NSA or FBI or whomever reading their emails "because terrorists", whereas younger folk are more likely to balk at the latter, but very much okay with the former.
Do you think my opinion is accurate? I'm curious because to some extent I completely agree with you.
Of course it also once told me how long it would take to get to an ex's house from my current girlfriend's place.
How about an 'OK, funny once' command?
(eugenics through large scale suggestions anyone? ;>)
There are even apps for that... It wouldn't surprise me if someday this tracking was learnt by Google
This isn't funny at all. "They" (or Skynet) could totally do this and who would be the wiser?
> How about an 'OK, funny once' command?
My whole point in posting originally was to bring up the fact that the interesting discussion of different thresholds and having a spectrum of options is being lost in binary "privacy vs complete corporate control". It's not binary, and it does everyone a disservice to act like it can be for everyone.
Your phone's microphone can be turned on remotely and listen to what you say (I know several startups that do this). Security/traffic/drone/satellite cameras are everywhere. You are being watched literally all the time, but to think the watchers actually care about your personal life indicates a pretty inflated view of self-importance. We're starting to complain about it about 20 years too late.
If I'm on gmail, then you need to talk to gmail to talk to me. It's a tradeoff you'll need to make. If you want, you can GPG-encrypt the email, but there is nothing stopping me (legally, morally, or otherwise) from just decrypting it and replying with the contents, or saving the decrypted message in my google drive.
So, by the same token, does Google have the rights to profile people even if they haven't consented to that? One answer, as with copyright, is to see what the law says; and it's very possible that the law says no, especially in the EU. (I've not researched it; others will doubtless know much more than I do.)
Or Google could just seek to do the right thing and not profile people unless they've opted in. But my definition of the right thing may well be different to theirs.
1. finding someone who is not using any Google service at all, and
2. finding their data in a Google database.
I hope that there will be a day when Google and Facebook will combine forces and work on a sequel to "The Lives of Others".
Yeah, that's a great movie.
Pretty much your only privacy is in your head at that point.
I'm not sure that is a "threshold" of privacy but rather a "I am okay with 24/7 surveillance of all of my activity."
Is this story unrealistic, or has it already occurred?
One Wednesday afternoon, at work, I got a notification saying "Travel time to the Lion & Crown". The first thing that ran through my head was "oh my god, I'm living in the future".
The problem is that I want to use Google Maps so what choice do I really have?
Sure I use a dedicated gmail for my phone but that really does not help much.
"Hey, it's a been a while. Why don't you go to ... today? Traffic conditions are favorable too."
Given that, and the existence of pervasive surveillance and data mining, the above is inevitable.
"Oh you wanted to go to the bank today? Fuck you, we're going over by the mall" doors lock shut
"I was feeling bored, so I drove down to the mall. Look what all did I get!"
And then prepares to rush the owner to the hospital.
I think that the fact that it's different for everyone is completely and utterly obvious. It's clear the author doesn't think it's OK, but that's his/her opinion.
Extremely annoying. This sort of thing should not be acceptable: an honest mistake results in every place I've been being logged in such a way that anyone with access to my Google account, access to Google's servers, or a subpoena can have my full location history in a matter of seconds.
This needs to be a big red option every time you add an account "we're gonna log everywhere you go and hand it over to whoever we feel like, you cool with that?". It'd be different if the log and analysis were done only on my device, but doing this on Google's servers is completely unacceptable by anyone with even the weakest standards of privacy.
In your Settings app, scroll down and tap Google.
Open the separate app called Google Settings.
Tap Location and then Location History.
At the bottom of the screen, tap Delete Location History.
Also https://maps.google.com/locationhistory allows for delete all history.
Schrems described the file obtained through a legal request as
a 500MB PDF including data the user thought they had deleted. The one sent through a regular Facebook request was a 150MB HTML file and included video (the PDF did not) but did not have the deleted data.
EDIT: Okay, found it. Follow your instructions then open that "Manage Activities" -> Menu -> Settings -> Scroll down -> Delete All Location History.
You _must_ do this for every account if you have multiple.
At least for turning it off I expected it to be phone wide, not account specific. Deletion yeah, I suppose you're right.
Do you think you're setting a behavior of the phone, or a behavior of your account?
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being, if the assistant lived entirely in your own computer, it would entirely different. Most people are not concerned about what their "computer" knows about them, they're concerned about what companies and their employees do.
The trope-namer (Star Trek AI) was a ship-wide AI: given the ship sizes, it is definitely closer to the "cloud" model than to private instances on officers' bunk/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this question: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
There are some canon exceptions to this (such as in Nemesis, where the subspace communication interruption affected the star charts), but even then the functionality of the ship was not impacted.
The Star Trek ships are very analogous to our own ocean-bound ships, where satellite communication is possible almost anywhere, but they don't rely on it.
So, yes, the AI is completely confined to the ship.
Was there ever an indication that their AI-level data was transferred along with their personnel file? For example did the replicator know what food to offer them on day one?
If so, then it's seems reasonable to assume that the Enterprise's AI data was backed up at Federation HQ during routine maintenance, and that the "IT department" at Federation knew exactly what you liked to do on the Holodeck.
Through specific indication from the user. Recall the constant utterance of "tea, earl grey, hot"?
Ultimately, I imagine the user's information (documents, etc) was passed directly between ships, or through (as you say) Federation HQ.
> Enterprise's AI data was backed up
Ultimately, I think this is where AI will differ from ML. An AI won't have data that isn't a part of the AI - i.e. you couldn't separate out information specific to Picard from the rest of the AI code. An AI might be able to "scribble" down some notes about interacting with Picard and pass them off to another ship's AI, but the second AI would never treat Picard quite the same way as the first, even with those notes.
This stems from my belief that how ML interprets data is different from how an AI would. If you were to copy all of the data used to build an ML model and apply it again, you'd end up with the same ML model. An AI, on the other hand, if built twice from the same data, would end up creating two separate AIs.
For example, the Star Trek universe didn't seem like a universe where you had to shop around for a trustworthy mechanic who wouldn't overcharge or over-diagnose (e.g. headlight fluid).
Maybe the implicit trust of other people was integral to the AI being successful in that universe.
I also think the comparison isn't perfect, because Federation vessels (in my mind) are similar to today's Navy vessels. All onboard systems are connected to other onboard systems, but opsec demands the ship's systems not be influenced by external actors.
* Don't start that argument. Seriously.
The clearest example of extensive off-starship monitoring (within the Federation) that I can think of in TNG is a civilian (though, to be fair, a civilian in a role analogous to a "defense contractor"), Dr. Leah Brahms.
> I'd argue enough episodes are strong on fundamental individual rights, that it's hard to imagine Federation life for civilians being a surveillance state.
Actually, I'd say that its quite plausible that the Federation is a "benevolent surveillance state", that is, one with pervasive monitoring but a very low incidence of "serious" abuse (that is, the kind that substantially limits practical liberty -- casual intrusions on privacy may be more common.)
While the Federation seems keen on "fundamental individual rights", it doesn't seem to exactly mirror, say, some modern views on what those rights are -- and not just in terms of privacy.
And arguably, if she was working at Utopia Planitia Fleet Yards on the Galaxy-class project, she presumably worked on a Starfleet orbital facility (technically, a number of facilities) over a span of years, where certainly enough data would be collected to make a poor replica of her personality, as in the show. I don't see anything suggesting that she was being monitored in her civilian life beyond basic biographical data.
La Forge, having interacted with that hologram extensively, and having surely read Starfleet's records... apparently didn't know she was married.
We're starting to lose that ability now.
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al. without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent to be able to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
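To make the "your own computer can do this" point concrete, here's a toy sketch of the intent-parsing step of an assistant running entirely on-device, with no cloud round trip. Everything here (the intent names, the phrase patterns) is hypothetical, not any real assistant's API; real systems use trained models, but even simple pattern matching covers a surprising amount of the calendar/weather/restaurant territory:

```python
import re

# Hypothetical minimal on-device intent parser. All matching happens
# locally; a real assistant would hand the extracted slots to local
# calendar code, or reach out directly to a site like Yelp on command.
INTENTS = [
    (re.compile(r"remind me to (?P<task>.+) at (?P<time>\d{1,2}(:\d{2})?\s*[ap]m)", re.I),
     "add_reminder"),
    (re.compile(r"(what'?s|show) (the )?weather", re.I),
     "get_weather"),
    (re.compile(r"find (a |an )?(?P<cuisine>\w+) restaurant", re.I),
     "find_restaurant"),
]

def parse(utterance):
    """Return (intent_name, slot_dict) for the first matching pattern."""
    for pattern, intent in INTENTS:
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    return "unknown", {}
```

For example, `parse("Remind me to call mom at 5pm")` yields the `add_reminder` intent with the task and time pulled out as slots, and nothing ever leaves the machine.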
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable and easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers would have made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. They could have co-opted Sun's old bad tagline: "Your network is your computer." But they elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal of centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for someone to realize a coherent easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
Not just secure private networks, but secure and programmable personal computing in general. The amount that I can actually do with my workstation PCs, let alone laptops or mobile phones, is now thoroughly restricted compared to problems that require a full-scale datacenter.
I originally enjoyed computing because, so to speak, it was an opportunity to own and craft my own tools, rather than being forced into the role of consuming someone else's pre-prepared product. Now we're being boxed into the consumer role in computing, too.
idk man. Computers are powerful. I like seeing what I can do with them.
I think that as far as the nascence of these features goes, the cloud model will beat the on-prem features any day of the week for several reasons. Lack of configuration to set up, ease of use from anywhere without network configuration, etc. are table stakes. But the biggest at this point is the sheer amount of training and A/B testing data you can ingest to determine what is useful for your end users.
The velocity of cloud-based products is nothing short of amazing and I doubt that on-prem will compete with the feature set and ease of use of always connected solutions until there are feature-complete, mature cloud versions to then bring in.
And, for better or worse, Dragon's speech-to-text is pretty damned good after a rather minimal amount of training.
I don't think there's anything stopping voice and intent recognition from coming back to our personal machines other than the ability to keep making money from having it come up to the cloud.
When I was working on Google Search what really astounded me is how we could leverage hundreds of machines in a single request and still have virtually no cost per search. The reason was that each search used a tiny amount of the total resources of those machines and for a very short time. A total search might have (made up numbers) one minute of computation time, but spread across 200 machines it only takes 300ms from start to finish.
That's the benefit the cloud will provide. You don't want to have a 1000-machine data center available at all times to store billions of possible documents and process your requests with low latency. If we went to a private-network model, I fear the turn-around time would be a lot closer to that of a human assistant: you'd ask it to do things, and then it would get back to you sometime later (seconds? minutes? hours?) when it had finished its research and come up with an answer.
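The latency win described above comes from scatter-gather fan-out: each machine holds a shard of the corpus and does a small slice of the total work concurrently, so wall-clock time is roughly total work divided by the number of shards (one minute of computation across 200 machines is about 300ms). A minimal sketch, with made-up scoring and shard counts:

```python
from concurrent.futures import ThreadPoolExecutor
import heapq

def search_shard(shard, query):
    # Each worker scores only its own slice of the corpus.
    # (Toy scoring: raw occurrence count of the query term.)
    return [(doc.count(query), doc) for doc in shard]

def fan_out_search(corpus, query, n_shards=4, top_k=3):
    # Scatter: split the corpus into shards, score them in parallel.
    # Wall-clock latency is roughly the slowest single shard, i.e.
    # total_work / n_shards, even though total work is unchanged.
    shards = [corpus[i::n_shards] for i in range(n_shards)]
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        partials = pool.map(lambda s: search_shard(s, query), shards)
    # Gather: merge the per-shard hits and keep the best k overall.
    merged = [hit for part in partials for hit in part]
    return heapq.nlargest(top_k, merged)
```

A real search stack would of course use inverted indexes and processes on separate machines rather than threads, but the scatter/merge shape is the same.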
Quick shoutout to Urbit here.
Except that's what I did for many years using a computer only as a terminal for an AIX mainframe. My mail was there, I browsed what there was of the web, used gopher, wrote programs, all stored there.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
Instead I get the current evolution. I want a 3rd party.
The same thing comes to mind every day of usage: I'm still on Win 7, and short of leaving for Linux (yes, I should have already), I can't upgrade without becoming a product.
I want what we all imagined and dreamt of, and I pay for it.
What I won't do, is become a product.
Instead I'm stuck with multiple fake accounts on Gmail, using a pseudonym on everything (including programming contract sites such as Upwork), just to keep some iota of privacy and still enjoy the benefits of what we all want.
Imho we need a new major party to emerge, that will charge an initial fee(like windows 7), and let us do what we want with those services(with caveats of course).
But I fear we won't get that anymore.
It's not that complicated. Use a common distro like Ubuntu and everything's documented online.
But my main coding (admittedly amateur and earning very little) uses .NET. On top of that, most of the games I play to relax are Windows-only (as far as I know, for most).
I keep meaning to make time, but it just hasn't happened yet. I don't get paid as a programmer (I'm an English teacher), so I need to spend my free time earning money with what I know.
As always, one day when I have money to spare(or time, which is basically the same thing haha).
I will fully admit I have not spent enough time in Mac to figure out the file system, but from what (little) I have seen, you're not in control.
I like my PC because I can see what sub-processes are running and who is taking up how much memory, install things where I want, and, if worse comes to worst, manually change how Windows runs. (I apologise if you can do all that on a Mac; as I said, I don't have the experience - I bounced off it hard.)
To answer your questions, you're in complete control with macOS, you can turn off SIP, turn off Gatekeeper and install whatever kernel extensions you want. Apple doesn't snoop on you like with Win10 telemetry.
I suspect that chances of would-be burglars or identity thieves breaking into Google data are pretty slim, in comparison to a home-installed system.
OTOH both Google and a private person can be strong-armed by a court order, or even a three-letter agency, to open up their AI knowledge vaults.
It's like saying "People would never break into a bank, when they could break into someone's house and steal their stuff"
It may be easier to break into someone's home-brew system, but generally it would be unlikely to happen unless you were being otherwise targetted. Whereas google has a lot of users data, which could make it a more attractive target.
Just how "home-brew" are we talking here? If there's any web-facing code that you didn't write yourself, whether commercial or open-source or whatever, that's a target for attackers that just scan everything looking for known vulnerable services.
If it is entirely custom, I'm fairly sure there are a few classes of common security errors that can be reasonably well tested for without direct human involvement. Which brings back the threat of attackers just scanning for all available targets.
If you're a "person of interest", you'll be attacked either way.
If you're just a regular person, like me, you won't likely be targeted. But my chances of becoming collateral damage are much higher in a centralized system: when it's broken into, my data has a chance of being siphoned off along with the data of someone less ordinary who was the target of the attack.
But there's a third case: "normal boring people" become interesting just by being together in a big group, even if no particular "person of interest" is among them.
The odds of someone physically taking my computer and decrypting my data are zero (128-bit encryption for the win!).
The odds of someone electronically breaking into my computer are higher.
The odds of my data being misused or misaccessed at Google are one: it will happen.
Maybe someday that will be a realistic endeavor, but it would take a lot of effort to set up and maintain your own personal versions of all of Google's services, and to integrate them.
But it stores the private data about your location, your searches, your purchases. This could even be an encrypted, private, fire-walled bit of cloud rather than a robot in your house.
The point being your AI is serving you. And you can delete/edit this private data (or even the whole AI) if you wish.
Perhaps using Urbit as the network architecture since it already has pki. As Yarvin calls it, true 'personal cloud computation'.
Really it's the only sane way to live in the 21st century.
I've designed my home automation system on the concept that the only route to the Internet is my computer (and therefore, my home automation software). My computer is a secure, well-managed intermediary that can store my data, and decide when and how to receive and send data to the Internet.
The idea of dozens of Internet-connected devices in the home is :terrifying: in comparison, especially considering that badly-secured IoT devices are now powering some of the biggest botnets out there.
My light switch, however, cannot talk to the Internet. It has local-only communication protocols that are simple. It knows how to be told to turn the lights on or off or dim or a handful of other settings, but it's literally incapable of doing anything else... and why would I want anything different? Why should my light switch have Bluetooth and Wi-Fi and software updates and a miniature flavor of Linux... It's a switch!
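The "it's literally incapable of doing anything else" property described above can be sketched as a deliberately tiny command vocabulary: the device's handler rejects anything outside a fixed whitelist, so there is no firmware-update or phone-home verb for an attacker (or vendor) to abuse. The command names and dict shape here are hypothetical, not any real protocol:

```python
# Hypothetical local-only switch protocol: a closed set of verbs.
# No networking stack, no update channel -- just these three commands.
ALLOWED = {"on", "off", "dim"}

def handle(command, level=None):
    """Apply one command to the switch; reject everything else."""
    if command not in ALLOWED:
        raise ValueError(f"unsupported command: {command!r}")
    if command == "dim":
        if level is None or not 0 <= level <= 100:
            raise ValueError("dim level must be 0-100")
        return {"state": "dim", "level": level}
    return {"state": command}
```

The security argument is in what's absent: the attack surface of a device like this is exactly the three verbs it understands.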
When my AI can talk directly to your AI, we can transfer mail etc without cloud services. If it knows our social network, perhaps it can remotely store encrypted backups of our data, but only with people who we already trust.
It's just not easy to do, both in setup and maintenance effort. Most people would not care about their privacy nearly enough to justify that.
When's the last time Google risked itself or its business, or any tech CEO risked their livelihood, for the sake of the greater good? The problem isn't necessarily the knowing-everything part; it's who does what with it that's the problem. I can't really think of any company or person with influence in tech that'd be willing to dive onto that bombshell to protect us all.
February of 2001.
The former CEO of Qwest, a massive telco, spent years in prison for insider trading. He says this is because he resisted the NSA's demands to tap Qwest's network and hand over customer data.
A truly user-aligned AI assistant would be great. Ideally in the future these things will not be tied to indirect business models, but rather will be something you buy and all data/services will be under your control.
In the Star Trek world they had no advertising because they were a communist society. Everyone dressed the same or slightly differently based on rank. It's interesting how the new movies play over that.
In Star Trek you couldn't choose your AI. In our world you can. At the start of their development most of them are targeted at selling you stuff - but the industry is young and who knows where it will go.
While there are elements of communism depicted in the Federation as a whole, those are facets of a global post-scarcity society that has somehow evolved beyond the less "progressive" bits of human behavior. I'd argue that the biggest fantasy of Star Trek isn't warp drive, but the notion that humans are somehow less violent than Klingons.
You cannot compare the world's biggest seller of advertisement space with the ST universe. The motivations aren't aligned: Google/Alphabet want to make sales based on my information.
I agree: I found these oh-so-clever AI fantasies interesting in my youth, and still do to a degree. But I always pictured the data being held inaccessible to humans in general ("Where's my wife right now?") and not in the hands of a golden few with no oversight.
The public areas which were under surveillance on Star Trek tended to be only on military ships and star bases. I don't remember seeing much surveillance in public areas on planets. There certainly wasn't the sense that everyone was under surveillance on every street and in every shop, unlike most people in major metropolitan areas on Earth today. Nor was there anything on Star Trek like the ever-present spy satellites that can see in great detail anywhere on Earth today.
For the public areas which could be observed through cameras on Star Trek, the surveillance seemed mild compared to today because of Star Trek's lack of massive computers and artificial intelligence analysing what is seen for anomalies, using facial recognition, constantly recording everything and having those recordings instantly available for playback, sophisticated search, and computer analysis.
The reading and viewing habits of Star Trek denizens weren't recorded and analysed, unlike those of many people on Earth. Their positions weren't tracked wherever they went, unlike those of many people on Earth.
The so-called "24/7 surveillance" of Star Trek was very limited and even quaint compared to what we live under on Earth today.
Plus, most folks think ST is where hippies took over. Center of Federation is in San Francisco...
Remember kids: Don't intoxicate yourself with substances we do not condone. :)
Incorrect, in the original series, there was a currency ("credits") that was explicitly referenced several times; it was also referenced in at least one, and possibly more, early TNG episodes.
Sometime in the TNG era, Roddenberry laid down an edict that money, including the "credits" that had been repeatedly referenced previously, did not exist in the federation, and so they weren't mentioned again.
I don't think that's really accurate; the Ferengi were portrayed as greedy merchants focused on profit starting fairly early in TNG, without direct reference to currency (gold -- not the later "gold-pressed latinum" -- was mentioned, IIRC, as an item of interest, but not in any context which implied it was used as currency); I think gold-pressed latinum was introduced as a currency in DS9 because DS9's role as a commerce hub was central to the theme of the series, and having currency just made telling stories about that a lot more convenient.
Throughout much of history, gold in standardized sizes was a common form of currency. Gold-pressed latinum in standardized "slips", "strips", "bars", and "bricks" is exactly the same thing.
Star Trek still had merchants who sold various wares. That would not be profitable if nothing was scarce.
They still had planets that lacked necessary medicine, requiring The Enterprise or some other ship to go on mercy missions to deliver the meds.
The Star Trek universe had pleasure planets which had highly desirable things that other planets did not.
There was clearly a shortage of starships and crew, as The Enterprise explored alone and not in a fleet, and couldn't just create a hundred others to help it when it was attacked by some alien enemy.
The Enterprise couldn't even use their on-ship replicators to make themselves some dilithium crystals (fuel) when they ran low.
The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
That's part of the Star Trek fantasy. But so is, "Computer, locate Commander Riker" and "Computer, use personal logs and personality profiles from compiled databases to create a personality simulation of Dr. Leah Brahms."
Google has a strong incentive to not allow their aggregated user data to leave Google-- the behavioral data Google collects is the reason why Google is valuable; if they start shipping that data off to third parties, suddenly the third parties don't need Google anymore.
(Same with Facebook-- they're not "selling" your data; they're selling the opportunity to target you based on your data, but the data itself is too valuable to Facebook to sell.)
Except the government, which gets unfettered access. I'm not into conspiracy theories, but I'm definitely not a fan of this (and that goes for all social media and technology companies).
Do you have a source for that?
Yep. They all do it.
Can you provide a link for your claim, or is it just your opinion? Because there is no proof of that.
ML relies on large data sets, and if anyone tried to release a personal device it simply wouldn't work, let alone compete with the mass surveillance Google/MS/Amazon are bringing to bear.
Unless the state-of-the-art in AI suddenly morphs, we seem to be stuck between giving up our privacy or having vaguely intelligent AI.
I personally fall heavily on the privacy side of stuff, but I can see the intellectual and commercial appeal of pretending it doesn't matter in order to get there.
What needs to happen is a company needs to come along and create AI that is trained off of generalized information... some kind of socially accepted public data set... then the trained core is sold as a seed to individuals who then feed it their personal data.
It'll be the equivalent of buying an AI "teenager" and slowly training them to be an "adult".
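The "buy a trained seed, then raise it yourself" idea above is essentially pretraining plus local personalization. A toy sketch under that assumption: the vendor ships a model built from a public corpus, and the buyer's own data then refines it on-device, never leaving their control. The class name, weighting scheme, and word-suggestion task are all invented for illustration:

```python
from collections import Counter

class SeedModel:
    """Toy 'AI teenager': shipped pretrained on public text,
    then personalized locally with the owner's own data."""

    def __init__(self, public_corpus):
        # The seed: word frequencies learned from a public data set.
        self.counts = Counter(public_corpus.split())

    def personalize(self, personal_text, weight=5):
        # Local fine-tuning: the owner's usage counts extra, so the
        # model drifts toward its owner without any data upload.
        for word in personal_text.split():
            self.counts[word] += weight

    def suggest(self, prefix):
        # Autocomplete: the most frequent known word with this prefix.
        candidates = [w for w in self.counts if w.startswith(prefix)]
        return max(candidates, key=lambda w: self.counts[w], default=None)
```

Two buyers starting from the same seed end up with different models, which is exactly the "teenager grows into a distinct adult" picture.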
As sensors become richer and the data becomes more valuable to the ML, consumers are becoming more aware of their privacy.
That means to get to a 'good seam' in the future instead of trawling through trash, you're going to have to convince millions of people their interests won't be affected.
That means in time there is an opportunity for a Google-killer with a different business model, one not based on the raw data but on the use of intelligent agents. Google goes down because its stakeholders are contingent on getting at the raw data.
Which is kind of funny, because even if it might be accessed by personal mobile devices, the Star Trek "library computer" AI was never "small and physical, easy for a single person to entirely own and unable to remember more details than a human". It was an aspect of a large server (or networked cluster; the actual architecture is somewhat vague) that was part of a capital ship or base. It had access to a library of very nearly all generally available knowledge and to extensive personal information both about its users and about people with little direct connection to it, and it could reach out across a galactic network to access additional remote information sources to handle requests.
"Unfamiliar algorithms run on machines far away" is much more like the source of the "Star Trek dream" than "small and physical, easy for a single person to entirely own and unable to remember more details than a human".
Not just crewmembers: TNG showed some of the broader implications (both useful and creepy) of the convenience-oriented panopticon, e.g., when Geordi used the Enterprise library computer to construct a simulation from data (including personality profiles) of Dr. Leah Brahms, who later meets him (and encounters the simulation.) (S306 "Booby Trap", S416 "Galaxy's Child")
I suspect a huge part of the panopticon culture would be / is being informed that you're being peeped at. 99% of the time someone asking "Computer locate commander Riker" involved commander Riker knowing all about who's looking for him and why and having a substantial conversation with the requester.
I don't recall any plot along the lines of Deanna getting jealous and spamming the computer all night asking where Riker is and he better not be in that cute ensign's bedroom... Because it seems logical the computer would inform him each time and he would eventually tire of the interruption and nature would take its course, WRT his relationship with Deanna.
A better analogy would be technically I could walk up to the company president's office and stalk him, but culturally that is so not going to fly and I would have a lot of explaining to do. Merely using the computer instead of walking there in flesh isn't a major cultural shift.
However, every single "holodeck creeper" plot line involved the simulated attractive real world woman not knowing she's being simulated until the plot reached maximum spaghetti spilling cringe, which is one of the few Trek panopticon situations where people being spied on did NOT know they were being spied on, which seems very un-trek, although it made for some entertaining stories.
An alternative interpretation is that, I believe, over the course of the series every attractive woman on the ship was simulated on the holodeck at least once by at least one lonely guy, and it's possible that culturally they just got used to it, although I find that unlikely. People do get conditioned to become used to the weirdest things, so it's not out of the realm of possibility. Possibly a culture of what-happens-in-Vegas-stays-in-Vegas develops, and it's just the sexism of the TV show that they never showed the women turning the tables on their fellow male crewmen on the holodeck.
1. That information were secret (between you and the service/device/implant), not shared with a whole company and its third-party interests.
2. It wasn't making money for a third party after the initial purchase price for the device, service, etc.
So, better, but nowhere near "on the device only".
First party report of reviewing recordings: https://www.reddit.com/r/technology/comments/2wzmmr/everythi...
News article with more investigation and citings: http://sanfrancisco.cbslocal.com/2015/03/12/strangers-apple-...
When you use Siri the things you say will be recorded and sent to Apple to process your requests. Your device will also send Apple other information, such as your name and nickname; the names, nicknames, and relationships (e.g., “my dad”) found in your contacts, song names in your collection, the names of your photo albums, and the names of apps installed on your device (collectively, your “User Data”)
By using Siri, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data
But does it? Does it have to know your birthday? (Leave aside the fact that birthdays are somehow part of a superkey for your identity.)
Why should it know my residence, my spouse, or my CC# (with Apple's TouchID maybe it won't need to)?
Google's concept of AI is too creepy for me. It can be useful without being creepy. They're not even trying to make it less creepy.
Overlay on this the subtext that the NSA and other TLAs are monitoring all this (leaving aside other countries). While I may trust Google, I don't trust them not to be forced to collude with the government.
How could it be otherwise?