I don't share your concerns at all. My daily feelings and activities wouldn't change whether Facebook and Google know about them or not. Contrast that to being killed by a drone.
Privacy is more an interesting intellectual problem than a real one. If it were a real problem, people would change how they use those tools, but they don't.
You can argue that loss of privacy allows corporations to manipulate us, but I see only trivial effects. Look at how people vote and buy: I see more diversity and conclude there is less manipulation.
* Admission to a school or university, for you or your children, is refused because "more worthy" people have applied.
* Your job application is refused.
All because of a fuzzy social score, mined in an opaque way from your contacts, your activity on social networks, metadata gathered from your smartphone, CCTV cameras, etc.
I mean, if we have a way to rate people, why not use it everywhere!
This is already partially implemented in China; I remember reading an example where, with a low social score, you have to pay a deposit, while with a high social score you don't.
In a way, that sounds efficient and meritocratic, but it is also ruthless if you end up with a low score.
Edit: added a reference with some everyday-life examples.
Doesn't the vast amount of data we have allow for things like getting a loan without knowing a guy? As for point 2, if someone is more worthy... why is that bad? And why should an employer not use information easily accessible to them to determine the character of a potential employee, when interviews only allow brief correspondence? I'm not sure if you're talking about future implications or what, but it seems like this is the system we exist in today. I don't think most of these decisions are based on a "fuzzy social score", though. At least not for the topics you brought up.
I am talking about a future that is already being built today, piece by piece. I am well aware that we are already judged with multiple biases, in an opaque way. But this is not systematic; hopefully Employer A has different biases than Employer B.
For me, the dystopian part would be a single, automated source of "truth", like the Sesame score, used everywhere.
It means that everything becomes easy or difficult because of that score. Suddenly you cannot rent, or only at a higher price; your insurance rates increase; it is harder to get a job...
Optimistically, in theory, it could make people "better", but who decides what "better" or "worthy" means?
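To make that concrete, here is a toy sketch of one score gating many unrelated decisions. Everything in it is invented for illustration: the scoring function, the thresholds, and the rules are not any real system's.

```python
# Toy illustration: one opaque score gating many unrelated decisions.
# The scoring function, thresholds, and rules are invented for this example.

def social_score(person: dict) -> float:
    """Stand-in for the opaque part; in reality this would be mined from
    contacts, social-network activity, smartphone metadata, CCTV, etc."""
    return person.get("score", 0.0)

def gatekeeping(person: dict) -> dict:
    s = social_score(person)
    return {
        "rental_deposit_required": s < 600,       # the China deposit example
        "rent_surcharge_pct": 0 if s >= 650 else 10,
        "insurance_rate_increased": s < 700,
        "job_application_auto_rejected": s < 500,
    }

print(gatekeeping({"score": 480}))
# Every door opens or closes on the same number; there is no Employer B
# with different biases to turn to.
```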
Hah, I can agree about a single source of truth. A credit score is essentially what you're referring to. I know for sure you cannot rent without a large deposit if you have a terrible score, and some jobs (especially government jobs) will not hire you when you have large debts and bad credit. Not sure about insurance rates, but it certainly wouldn't surprise me.
> As for point 2, if someone is more worthy... why is that bad?
Someone is "more worthy" according to an opaque machine-learning algorithm. Your line captures exactly the danger: people are going to confuse "the computer says X" with "X is true".
I agree that it could be frustrating, and maybe someone will not make that distinction. But either way, these are decisions that have to be made. What is your alternative for how they are made? In the end, you are either leaving it to a human who will choose based on heuristics and prejudice, or to an algorithm that will choose based on heuristics and prejudice. I'm not sure whether your problem is the lack of distinction between a human and an algorithm, or the perceived lack of fairness. The decisions that call for something like a machine learning algorithm were probably opaque to begin with. Fairness doesn't seem so different in either case...
However, there is a difference. If I don't get a job offer because I don't fit one human's biases, I can go somewhere else, to a human with different biases. The problem with AI is: how many Googles are there? We will likely end up with a handful of companies that rate you. Suddenly it does not matter where you apply for a job: if you did one thing that ruins or lowers these ratings, you may not get a job anywhere. Only a handful of entities decide what "good" is, and if everyone uses these systems and you fail to meet their definition of a good person or hire, your life may suddenly become difficult.
Whereas before, people would have decided that themselves, even if it was a shallow, cursory judgement. At least people were making their own decisions instead of blindly following a number or evaluation whose criteria they do not know.
Well, that's just part of the problem with the connected world. If you do something wrong, it doesn't take machine learning or AI to find out; you just have to google it. Someone hiring you for a job will most likely do this anyway... I have even had recruiters call me out on such things. It has nothing to do with an algorithm.
It could be problematic if there were only one source of subjective information, especially if any perceived past transgressions were irredeemable. But if they are factual, do you have a problem with that? E.g., you go to hire somebody and find out hiring is not recommended by hiring_heuristic_x, based predominantly on factors X, Y, and Z (X being that he stole from his past 10 employers, Y that he never held a job for more than six months, Z that he commonly posts serious threats online about people he doesn't like). A toy sketch of that is below.
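For illustration only, a minimal sketch of what such a heuristic might look like; hiring_heuristic_x, the factor names, and the thresholds are all made up, not any real vendor's model:

```python
# Hypothetical sketch of "hiring_heuristic_x"; the factors and thresholds
# are invented for this example.

def hiring_heuristic_x(candidate: dict) -> tuple[bool, list[str]]:
    """Return (recommended, reasons); reasons list the factual factors."""
    reasons = []
    if candidate.get("thefts_from_employers", 0) > 0:
        reasons.append("X: stole from previous employers")
    if candidate.get("longest_tenure_months", 0) < 6:
        reasons.append("Y: never held a job for more than 6 months")
    if candidate.get("online_threats_posted", 0) > 0:
        reasons.append("Z: posts serious threats about people online")
    return (len(reasons) == 0, reasons)

recommended, reasons = hiring_heuristic_x({
    "thefts_from_employers": 10,
    "longest_tenure_months": 4,
    "online_threats_posted": 3,
})
print(recommended, reasons)  # False, with the factual factors spelled out
```

The point being that when the factors are factual and surfaced like this, a human can at least inspect and contest them, unlike a fuzzy aggregate score.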
Also, do you have any idea what actually occurs when you make a decision? I don't. I would love to meet someone who does. I still follow my decisions anyway.
Did I say I have a problem with people making decisions based on information? The problem is that these AI systems will be making the decisions, and if everyone is using only a handful of vendors, and their software just says "bad candidate" or gives you a low rating, people are going to default to that. What it boils down to is a small number of systems making the decisions.
It turns into a small number making decisions for many, whereas without the system it is decentralized and everyone makes independent decisions. There is also more room for nuance.
Do you think companies are going to develop their own AI and gather their own datasets on people just to evaluate potential hires, training it for the requirements they feel are best for their company?
No, they are going to outsource it. So what is likely to happen is that only a small number of systems will be making these decisions.
Nothing's changed. People have always been unfairly refused loans or denied jobs. Presumably, these metrics wouldn't be used if they weren't better than the status quo.
> Privacy is more an interesting intellectual problem than a real one. If it were a real problem, people would change how they use those tools, but they don't.
It is a real problem, and plenty of people do change their use of those tools.
I may have been incorrect to use the word "plenty". So let me rephrase: people do change their use of those tools. I don't see how those people being statistically insignificant makes a problem any less real.
Well, you didn't say "majority", so the word "plenty" is actually fine.
I agree that it is a real problem for, let's say, millions of people.
But what counts for me is the majority opinion. There are all kinds of contrary (!) opinions among small subgroups of the population, and thus it isn't possible to adjust wide-ranging policies to those opinions.
I'm sorry, but you're naive. The day might come when anyone with an 'X' in their online username is considered a threat. Ridiculous? Sure. But how about the things you've purchased? The schools and churches you've attended? The meetings you've gone to? The friends you have? The problem is that when the definition of a threat changes, you can't go back and change your past; you're stuck.
Being a Jew in Germany in 1910 wasn't an issue. They had nothing to hide. In 1940 there was no going back for them.
You are naive and live in a fantasy world if you are afraid of this.
Regarding the Jewish example: the problem is not having stated in public what you are, but societies that discriminate.
You might have a point in saying "my life should not be transparent, so as to make it more difficult for a potentially discriminatory society to attack me".
But it makes no real sense to worry about that, because it is so minor in comparison to the deep problems such a society would have. I know you disagree :)
> the problem is not having stated in public what you are, but societies that discriminate.
Doesn't history teach us over and over that societies DO discriminate? I don't believe that basic human trait will ever change. If that could change, maybe someday we could do away with 'evil' altogether, but I don't see that happening either.
Just look at Internet justice vigilantes. If you're on the wrong side of a social issue, you could very well be targeted. I have a few unpopular opinions, and I keep them to myself for that very reason. You don't have to look too far to see what happens if you don't.