AI merely introduces a new danger, and at the current complexity of AI I don't think it's any more significant in effect than the other ethical problems we already have. In fact, many misuses of AI stem from how it's used, not from the AI itself. I don't think linear regression is an inherently immoral tool either, if we're going the "this tool is too dangerous to use/allow access to" route. Until we have anything close to AGI, AI is a tool only as ethical as its user. That's the real issue here.
We absolutely do need regulation, but I'll be damned if ten people with power in any government understand AI well enough to regulate it. Every day I think we get closer to needing technocrats in government. The FCC is a great example of a place where we should have had that model for decades.
We don't? Not only do we question those all the time, we have also failed to produce them in any stable condition in most of our societies.
tl;dr: Ethical AI is hard. Ethical NI is harder!
Doesn't "Coherent Extrapolated Volition" actually boil down to, "Hey AI, don't do like us. Do as we should do!" (If we were better, more noble beings.)
And if we can't have "Coherent Extrapolated Volition" aren't the only outcomes the subsumption of Homo sapiens into a different kind of intelligence and/or its extinction?
Come to think of it, "Coherent Extrapolated Volition" just sounds like the same sort of wishful thinking which religions hook into.
Voltaire in the 18th century: "If God did not exist, it would be necessary to invent him."
Tech in 2019: "We need to be the first to implement the god-like AIs, because the first-mover advantage will yield tremendous profits!"
If one is interested in seeing an opposing point of view to this, one could look up Philip Hamburger and the term "administrative state".
At a personal level, I'm much more scared of a bad person with a gun and access to me than a bad person with an AI and access to me. The ethics of AI matters more than the ethical use of a teacup, but at a micro level it's certainly rivaled and beaten by other concerns.
At a macro level I think your argument has more ground to stand on when it comes to things like the improved efficiency of mass surveillance, but how much of that is the AI part rather than the mass surveillance part? What immoralities are enabled/accelerated that could not be done without AI? In the end I'm just as concerned about mass surveillance as I was before. In a practical sense, even if we somehow passed laws limiting AI, do I really think that a government performing mass surveillance in the shadows is going to follow those laws, when AI is a concept anyone (with the know-how) can implement? I don't think AI's power/capacity is as high as people think; it just tends to fit well with some very bad macro-level ethical actions and enhances them.
I'm not sure I'm convinced either way yet, but the claim of "unrivaled" made me immediately skeptical, at least considering the AI of this decade. In a hundred years you're probably right about the unrivaled part, though atom bombs and the like are probably close seconds.
The other thing AI changes is cost, and perhaps secrecy. A Mechanical Turk-style system requires many people who need to be fed, which puts a pretty high floor on price. It also requires disseminating data to many agents, each of whom could leak it.
Being more accessible means more Bad Guys (unethical actors) using it.
There are also other topics, but it's intended as a primer for engineers interested in understanding the social problems created by new technologies.
The article mentions trying to have a human enforce ethics, but then that person has to be an example of ethical excellence, something you can't test for. And in the end, they say every man has his price, so no. I don't think "ethical AI" is possible. I think "ruthlessly efficient AI" is the goal. Maybe it should only be used in situations where ethics don't matter.
Things are decent enough right now; it could be a lot worse. Do the upsides of creating ruthless AI justify the risks? Is it an eventuality anyway? At this stage, could we conceivably prevent it?
People deeply want to believe this is true. I find it false because I see free will as an illusion, while the majority of people think free will exists. Agreement on ethics cannot exist with this conflict, because it inherently affects morality.
And of course, it entirely depends on whose culture and historical norms you're looking at and who is doing the judging. So really, no.
The law is an accepted and used set of actions that are considered bad, with associated repercussions. It's a best effort and it _works_. Why would AI have to reinvent the wheel here?
If you consider there are many situations where the taking of a life is considered politically or economically expedient, ethically justified, or a social or cultural necessity, the ethics of life-taking become a lot less straightforward.
One of the greatest potential benefits of AI is that having to define our ethics explicitly, instead of wrapping them up in layers of propaganda, manipulation, and self-serving lies, has the potential to transform society.
It's currently a very remote potential, but it does exist.
How could AI ever reach such a specific conclusion?
However, the issue is that it's very culturally dependent on WHY it's seen as wrong, and it's the reason behind WHY something is seen as wrong that the more complex elements of ethics are built on.
A utilitarian vs a deontologist vs a virtue theorist vs a follower of almost any religion vs a supporter of god knows how many other moral theories I don't know about would all have a different answer to that, and different answers to which cases killing or stealing or anything else might potentially be justified/right.
And you can see that reflected in the legal systems of different countries right now. Some countries define self defence as basically "anything is permissible if they're intruding on your property", while others require you to use reasonable force. Some countries consider it fair to let the government kill criminals via the death penalty, and others don't.
The difficulty isn't defining whether murder or theft or what not in its most blatant forms is wrong, since most ethical frameworks will state it is. It's trying to define the many, many edge cases that people don't agree on, and which many countries/states/societies take different approaches to.
If we can derive morality from first principles, why hasn't someone applied this to the legal system yet?
If we can't derive morality from first principles, why would we need to invent that for world changing technology to happen?
Wouldn't it be a lot more in line with the population's sense of morality to train the AI based on our current laws than on some generalizing philosophical view?
You can't assume the same AI setup that'd work fine in the US would work in the UK or vice versa, because the laws aren't consistent about many elements. Either way, it's still a complicated thing to figure out.
No it's not. There are cases where taking a human life is considered acceptable and ethical. Killing an abductor who threatens to kill many people, after negotiations have failed, is considered ethical and not punishable by law.
Now, if you assume that murder is the case where an unlawful act happened, then you've stopped asking about ethics and your question becomes what is lawful, which is again extremely ill-defined.
I'm really glad you raised this point.
I don't know if there's a uniquely good ethic or not. But for sure there's no consensus about what constitutes ethical behavior.
It drives me crazy when companies make preachy policy statements that beg this question.
* You can't ethically undermine a democratically elected government for a dictator who will better fit your economic interests.
* You can't ethically skirt health and privacy laws just enough that you come out ahead regardless of legal fees.
* You can't ethically discriminate hiring efforts based on race and gender.
* You can't ethically discriminate against employees with families.
Seems like an awful place to work. They have a role I was about to apply to in the Bay Area. Looks like I'll avoid this place.
Fun fact: they hired an ex-Trump Org assistant to be his new assistant https://www.linkedin.com/in/sharon-benita-23703449
I suppose when enough things go wrong with a complex system, it's like having runtime errors pop up that you can debug against to get a better understanding of what you created. But that first execution is just dangerous enough that you wouldn't want something that complex to be doing anything important. Then again, we might not think something is complex enough until we start running into the "unknown unknowns" of real-world usage.
Maybe a somewhat subjective qualifier for what's "complex" could be developed and then the ethical question is "is due diligence being taken to reduce the risks inherent in this complex system?"
Government arguably should be an expedient (this is Thoreau's argument anyway), and it's possible A.I. could be at least a more consistent expedient that also commits to ratting itself out anytime its ethical programming is substantially altered. That isn't at all how humans behave; they can't be programmed this way.
Merely having A.I. that concisely points out the competing ethical positions on an issue would be an improvement over word-salad propaganda. Propaganda is a significant impediment to both ethical and critical thinking, so an A.I. that scored statements on a propaganda scale would itself be useful.
Neural nets are inspired by human neural structures. Training is in some ways similar to human learning. Genetic algorithms especially in simulated worlds are directly inspired by the biological evolution in the real world.
Is there any chance that the resulting algorithms themselves (the AI) will have any ethical rights or significance, especially once they exceed the human brain in complexity?
The reason I ask is that it is an ethical question about AI (and therefore on topic), but I don't know how to think about it and hope others here might share some insight.
I love the concept of AI that could be not just super-intelligent, but also/instead super-moral. I hope we can find the inspiration to bring some of that concept into reality.
Only some tools are inherently abusable -- something that has been expressed in lots of forms, from "the medium is the message" critique of TV/internet/etc, to the gun control debate ("guns don't kill people, people kill people" etc).
Or a fax machine.
Or a pencil.
(Yeah, you could stab someone in the eye with the latter, but there's nothing inherently violent about a pencil compared with 2000 other things you could do that with, and most people would never do anything like that despite having used one.)
Other things however, are either precisely made for harm and profit (e.g. a grenade, or "Hot Pockets"), or lend themselves very well to it (e.g. dynamite, internet cookies, etc.).
I'm not disagreeing with the idea that "artifacts have politics", but rather pointing out that focusing solely on the negative actions enabled by guns is fallacious. In fact, the development of firearms is historically seen as having democratized power. It's just that most of what we consider the positive effects have been incorporated into the framework of our society, while we continue trying to diminish the negative ones.
Now having said that, so far "AI" (really ML) is primarily being wielded by large centralized entities against individuals. A less-equipped defending army does not particularly benefit from having a few drones with facial recognition, as their human soldiers are under attack regardless. Whereas it does enable a much larger conquering army to further insulate itself from the results of war.
Even relatively benign uses like voice recognition or recommendations have become excuses to retain large surveillance datasets on our entire society, and to shape the development so results favor large centralizing entities rather than their supposed users - eg the common engagement metric of "human time wasted".
But are the ethical issues the result of "AI" technology itself, or better attributed to the larger system it's deployed upon? On the consumer side, I would say that the fundamental ethical problem is that software is not under its users' control. If the majority of training sets were being accumulated voluntarily, with the goal of developing applications that actually helped users, then I think the ethical landscape would look quite different.
 If you don't recognize the reference, search it.
AI is a tool, and while you can say some tools are easier to abuse than others, that has no bearing on their ethics, since tools have no ethics to begin with.
And the leverage or force multiplication you get from a tool is directly tied to how easily it can be abused but also to how useful it is to you in general.
Is a gun less ethical than a syringe because it can be used for violence? How about syringes fueling the drug epidemic?
Are nukes less ethical than conventional weapons?
Were the looms that the Luddites sought to destroy unethical because they put people out of work?
Should we consider combines and modern agriculture unethical because they drastically changed the balance of power of various nations?
Of course not, as all of these arguments are silly and flawed once you actually begin to deconstruct them.
This has nothing to do with gun control. I have no problem with controlling guns, because I don't want to get shot, and if someone does break into my house I prefer them not to be armed.
I also prefer the police not to be armed at all times because I think that it’s just as important not to bring a gun to a knife fight as it is the other way around if you don’t want to escalate things.
I have no problem with regulating the application of AI when necessary.
However, that does not mean I think this has anything to do with AI's ethics; it has to do with ours.
Some uses of AI could be deemed unethical by society for the same reason that society deemed harvesting the organs of a random person to save 5 people unethical: people wouldn't be able to function knowing that they might be harvested at any moment.
So if we bring this back to AI: it's not that I would call, say, an AI-run mass surveillance system unethical because of whether the AI can make ethical decisions; I would consider it unethical if society couldn't function well under it.
Is Instagram inherently evil? Obviously not; but through the lens of pre-existing human social dynamics, including status competition, mating drives, social signaling, etc, the capabilities introduced by that particular tool almost inevitably lead to the perverse incentives of lifestyle facades, “influencers”, Fyre Festival, etc. Do we have to do these things? Of course not. But it’s naive to not think about the “realpolitik” scenarios, and what could be done to mitigate them, rather than assuming perfectly ethical and rational actors.
What makes AI even more complicated than previous technological changes to our game landscape, is the potential not for new tools, but for new players: at best, these artificial players are proxies for each of our interests (though see the side effects of “flash crashes” from high-frequency trading bots); at worst, we may have to contend with vastly intelligent new players with emergent interests of their own, which we can’t necessarily predict. While I don’t think it’s inconceivable that A.I. will always be subject to human understanding and control, we’re in such new territory that that’s fundamentally an assumption (see the arguments from Bostrom, etc).
That's a religious view of AI and humans, where humans have special qualities (like soul and consciousness) perhaps evolved "magically" or "god given", that a machine can't have.
There are no serious arguments why an AI can't have perfect human-like consciousness, feelings and everything.
Inversely, there are no serious arguments why humans are special in any way, and their brain mappings can't be replicated by technical AI (eg. artificial neurons) or emulated by software.
The ethical/conscious part is a matter of degree, not necessarily of quality.
It also doesn't know what a kid is, nor does it make any decisions based on that info even if it did. It is tasked with avoiding collisions; it does not make ethical decisions any more than my microwave makes an ethical decision not to burn my food when I use a fixed program to defrost chicken.
A gun's purpose is to shoot bullets. A syringe has the purpose of delivering fluid into the human body. It's not hard to evaluate the ethics of the most common actions for each of those. While the tool itself technically has no ethics, owning or using the tool tends to have ethical implications. In a practical world, it is very fair to evaluate those implications, at least at an estimation level.
Don't get me wrong, there's a ton of complexity when it comes to tools and ethics and no easy answers, and with the Luddite example it brings in questions of work ethics and societal structure, so the loom doesn't exist in a vacuum. But not all tools bring the entirety of ethics into consideration. There are ethical estimations of tools (think of them as potential ethical energy) that we can try to make. Syringe > Gun seems like one we can make. Gun vs Nuke is a bit harder, due to the consideration of use vs threat of use. The loom alone is pretty neutral and is far more reflective of direct context, where the context of a gun or syringe generalizes easier. And of course these are all my calculations, and you and others can have different ones, but I think if we really decided to spend 8 hours nailing down this discussion, you would indeed see a loose tiering/ranking of tools and "potential ethical energy". I wouldn't be surprised to see other measures/categories emerge either, such as severity, risk, and commonality of use case.
The question becomes this: what is the "potential ethical energy" of AI. IMO it's very close to the loom in that the context matters the most, but there's also a severity of effect factor in play that makes it more dangerous. Still, I would say AI has an overall positive potential ethical energy.
Yes there is, I just made it up on the spot and defined it! If you mean that it is not a commonly discussed philosophical term then yes, you'd be correct.
> It’s not even a useful thought experiment
Now that's a discussion to be had, but you gave no evidence for its lack of use, while I used it to describe tools that humans use and how an "ethical potential energy" calculation can correlate to the practical effects of a tool, and perhaps how we should view/regulate/restrict said items in the context of humans. You could easily derive firearm laws from such a base if you chose to do so, for example. Whether that derivation is valid depends on whether the concept has the proper grounding in relation to ethics, which is again a discussion to be had.
A gun is not something that can be held ethically responsible itself because it does not make decisions. An autonomous gun turret would be.
There are mere guns, they are not problematic in themselves, it's how people use them.
No reason to think giving everybody in your city a gun would be any different in its outcome to you (and the city's wellbeing) than giving everybody a banana...
For one, that argument only holds for tools without a conscience (e.g. a hammer or a headphone). AI, though, is precisely the kind of tool that can be capable of having ethics.
Second, even dumb tools without ethics of their own, can be ethically problematic (e.g. a bomb).
We are barely capable of defining and arguing ethics as a society; claiming that a microwave would be able to make ethical decisions is laughable.
A bomb is no more ethically problematic than a bottle of Coke.
Which is still beside the point.
As I already wrote, an advanced AI is a tool that can precisely define ethics for itself or adopt ones.
Plus, as I also already wrote, even if a tool has no ethics, it can be ethically problematic (to society), so that is something we should discuss too.
>We are barely capable of defining and arguing ethics as a society; claiming that a microwave would be able to make ethical decisions is laughable.
Which is again irrelevant, as an AI is not a microwave -- and future AI even less so. We are capable of seeing/discussing even things that are not immediately in front of us...
The point is that agents that operate ethically need not be sentient, they just need to play faithfully in our rule sets.
Which will sometimes mean eschewing maxima when doing so violates them. It will sometimes mean losses or ties in zero sum games.
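To make that concrete, here is a minimal sketch of a non-sentient agent that plays faithfully inside a rule set: it discards rule-violating moves first, then maximizes payoff among what remains, passing over the global maximum when that maximum would violate the rules. All move names and payoffs are invented for illustration.

```python
# Toy agent: constrain to the rule set first, then optimize.
# Every name and number here is made up for illustration.
moves = {
    "lie_to_win":         {"payoff": 10, "violates_rules": True},
    "honest_win_attempt": {"payoff": 6,  "violates_rules": False},
    "offer_draw":         {"payoff": 3,  "violates_rules": False},
}

# Step 1: keep only rule-abiding moves.
legal = {name: m for name, m in moves.items() if not m["violates_rules"]}

# Step 2: maximize payoff within the constrained set. The global maximum
# ("lie_to_win", payoff 10) is deliberately eschewed.
chosen = max(legal, key=lambda name: legal[name]["payoff"])
print(chosen)  # honest_win_attempt
```

Nothing here requires sentience; the agent simply cannot select a move outside the rule set, which is the "losses or ties in zero sum games" trade-off made explicit.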
So ... no.
AI is only required when there is some unknown, ambiguous, adversarial, or otherwise non-existent input or constraint. AI (or indeed any intelligence) is only useful in situations where "bias" (in the data science sense), inference, preference, and extrapolation are being used to make decisions in an unknown space.
And it's precisely in these areas where ethics can be part of the "weights" given to those inferences and preferences.
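A minimal sketch of what "ethics as part of the weights" could mean in practice: a soft ethics penalty folded into the same score that ranks candidate decisions. The action names and numbers below are invented purely for illustration.

```python
# Toy decision ranking with an explicit, weighted ethics term.
# Actions and all numbers are invented for illustration.
candidate_actions = {
    "target_ads_by_health_data":      {"profit": 9.0, "ethics_penalty": 8.0},
    "target_ads_by_stated_interests": {"profit": 6.0, "ethics_penalty": 1.0},
    "no_targeting":                   {"profit": 2.0, "ethics_penalty": 0.0},
}

def score(name, ethics_weight=1.0):
    # Higher ethics_weight makes ethical concerns dominate the ranking.
    a = candidate_actions[name]
    return a["profit"] - ethics_weight * a["ethics_penalty"]

best = max(candidate_actions, key=score)
print(best)  # target_ads_by_stated_interests
```

With `ethics_weight` set to 0, the most profitable (and most invasive) action wins; raising the weight flips the ranking, which is the whole point of making the ethics term explicit rather than implicit.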
A person can absolve their own sense of guilt by saying that they were just following instructions. But the farmer still chose here to use the AI, or to implement its recommendations. A person can even talk themselves out of feeling guilty for a bad choice just by saying "I felt so strongly, I couldn't do anything differently". Some choice was still made by the person; they bear the responsibility even if they don't think they do.