It baffles me that we just accept the kind of malice inflicted on people by programmers because "someone will always do it". As a profession / collection of skilled persons, we should really be better than that.
Obviously, one cannot see the future, nor would we want to be paralyzed by fear of doing anything. But there is a certain minimum requirement for collective responsibility which I really don't think we are meeting at the moment.
* Don't give loans to people living in [poor area].
* Avoid people with names that aren't similar to the most common ones in the database (i.e. foreign ones).
* When linking the customer data to their social media and you see the picture is dissimilar to [preferred ethnicity], do not give a loan.
Now, without any malice from the developer, the system has become racist: it saw the correlation that in the US Black and Hispanic people live in poverty more often than others. It knows that poor people pay back loans less frequently and makes the "rational" decision not to give a loan to that group. This of course reinforces the problem.
But how are we to solve this? Introduce an additional column "race" to the data and bias the results with it? Would that not be just as racist? How do we give the system an awareness not to discriminate against ethnic groups, if the data contains implicit clues? This comes down to giving such an AI human intuition about such questions.
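The mechanism described above can be sketched in a few lines. This is a toy simulation with made-up numbers, not real lending data: group membership never enters the lender's rule, only zip code does, yet approval rates still diverge because historic segregation makes zip code a proxy for group.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants. The lender never sees "group";
# it only sees zip code and repayment outcomes.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group B is more likely to live in the poorer zip (historic segregation).
    zip_code = "poor" if random.random() < (0.7 if group == "B" else 0.3) else "rich"
    # Default risk here depends only on the zip's poverty, not on group.
    defaulted = random.random() < (0.25 if zip_code == "poor" else 0.05)
    applicants.append((group, zip_code, defaulted))

def default_rate(zip_code):
    """Observed default rate among applicants from this zip."""
    outcomes = [d for g, z, d in applicants if z == zip_code]
    return sum(outcomes) / len(outcomes)

# "Rational" lender: approve only zips with an observed default rate under 10%.
approved_zips = {z for z in ("poor", "rich") if default_rate(z) < 0.10}

# Approval rate per group -- group was never an input, yet the rates diverge.
for group in ("A", "B"):
    approvals = [z in approved_zips for g, z, d in applicants if g == group]
    print(group, sum(approvals) / len(approvals))
```

Dropping the "race" column does nothing here, because the information is redundantly encoded in a feature the model legitimately uses; that is exactly why "just don't look at race" fails as a fix.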
It seems silly to take problems which humans cannot solve with all the data, intuition and humanity that we have, and then use that data to train machines to make decisions we know will be flawed.
If the data really suggests that these factors (poor areas, foreign names, social media) affect likelihood of default, should they be completely ignored? I'm sure credit scores and salaries are correlated to race in the US too. Should those be ignored? Just give out loans indiscriminately?
They're not ignored now by loan approval. Worse, inaccurate stereotypes are used as a proxy for some of these factors that may be relevant to a borrower's ability to pay. We don't live in a perfect world where people ignore certain factors.
"Weapons of Math Destruction" by Cathy O'Neil delves into this in more depth. It's a very valid concern, particularly in the way non-technical people are trained over time to give deference to the algorithm.
I am familiar with the book you mentioned.
You still didn't answer my question. If credit score and income are correlated to race in America, should they be ignored from loan applications? What are valid factors to consider?
Is correlation causation? Is it fair to encode a judgement based on correlation but not causation?
Or to make your position more clear, do you support racial profiling? If not, why not, and how do you justify that position, and how does that justification apply to judgements based on correlation but not causation?
(My position is that making decisions based on correlation but not causation is the general case for which racial profiling is a specific case.)
What are factors that are causation? Presumably not where you're living and the other considerations you determined to be implicitly racist?
It's when you start to deny proper information into decision making that those bad proxies start to look good.
Of course, but we don't avoid studying moral philosophy because we can't find a uniform model. Nor does failing to find a uniform model mean there is nothing worth learning from it.
>> there are a lot of smart people who do much worse things than create malicious software when money is at stake
Agreed. Two wrongs do not make a right, and as engineers in the know, we have a duty to stand up to such people when we have the chance.
>> I fear it will fall short of your ideal.
Me too, but the alternative is worse in my opinion.
Also, refraining from building these systems means that other countries get ahead of us. Besides being an economic disadvantage, this could also threaten security.
So I do agree that engineers should adhere to ethical guidelines, but they should be considered in a global context.
The US and Europe need to wake up or we'll be facing a billion hypernationalist, genetically modified super-geniuses while the only thing we have is the moral high ground. Based on what I've seen over the last decade, we should probably start learning Mandarin.
Imagine there's a big red "end the world" button that anyone on the planet can push. No matter how idealistic and utopian your views are, do you really trust that there's not a single person out of 7 billion people that might push the button? That is the kind of situation I worry we may face once people figure out how to create strong AI.
There is a 0.001% chance of something bad happening, so there is no point in the other 99.999% changing their behaviour in order to stop that bad thing happening.
I don't think that follows either. Second guessing human nature to the point where you resign yourself to catastrophe is surely less worthwhile than striving to create a sense of collective responsibility before it is too late?
I'm not second guessing human nature, just saying people do crazy things all the time. The vast, vast majority of people are not suicide bombers, for example, but it still happens sometimes. It doesn't matter how much collective responsibility we have; eventually you may just end up with a brilliant programmer with schizophrenia or something who writes an AI in his basement and unleashes it on the world.
I'm also certainly not resigning myself to catastrophe. There are probably ways to prevent or mitigate the negative impact of AI, like the one being suggested in the original article, for instance. I would hardly consider myself a cynic, but forgive me if I'm skeptical of the idea that every human in the world can somehow agree to never do anything that might create a dangerous AI. If it can happen, it eventually will happen.
I'm not saying that education / provoking thought will stop everyone doing something (else) monstrously stupid with AI. Just that we should be doing that educating / provoking more than we currently are.
I am sorry to admit, though, that it did not involve AI or machine learning in the slightest. Most of it was about melding fast, dumb dynamic testing like fuzzing with the deep analytical capabilities of more advanced program analyses like symbolic execution.
It's completely indescribable and epically nerdy. +10, would attend again. :)
I'm still amazed this show had a full run on mainstream TV. One of the best TV/movie treatments of "Hacker News topics" there has been.
That book still gives me goosebumps every time I remember it.
She possesses the power of foresight, which she uses to advise and guide the humans attempting to fight the Matrix.
Konrad Zuse built the Z1 computer in 1936, almost 90 years later.
Just because the technology isn't here yet doesn't mean we can't start discussing the theory and its implications.
Examples include That Dress, duck-or-rabbit, stereotypes, "garden path sentences", and most film special effects.
"Generally, the vehicle will have a set of sensors to observe the environment, and will either autonomously make decisions about its behavior or pass the information to a human operator at a different location who will control the vehicle through teleoperation."
Evil robots could always be created by evil actors, and there is nothing interesting here.