Is Ethical A.I. Even Possible? (nytimes.com)
58 points by furcyd 52 days ago | 97 comments

Ethical AI is hard, but so are ethical laws, ethical taxation, and ethical advertising, to name a few more ubiquitous things than AI. The real answer here is that ethics is hard, and even if you know ethics and can get some agreement, getting humans to be ethical is even harder.

AI only introduces a new danger, and at the current complexity of AI I don't think it's any more significant in effect than the other ethical problems we have. In fact, many misuses of A.I. stem from how it's used, not from the AI itself. I don't think linear regression is an inherently immoral tool either, if we're going the "this tool is too dangerous to use/allow access to" route. Until we have anything close to AGI, AI is a tool only as ethical as the user of the tool. That's the real issue here.

We absolutely do need regulation, but I'll be damned if 10 people with power in any government understand AI well enough to regulate it. Every day I think we get closer and closer to needing technocrats in government. The FCC is a great example of an agency where we should have had that model in place for decades.

>Ethical AI is hard, but so are ethical laws, ethical taxation, ethical advertising, and yet we don't see those being questioned.

We don't? Not only do we question those all the time, we have also failed to produce them in any stable form in most of our societies.

Very fair, I'm editing the last part off.

Ethical AI is hard, but so are ethical laws, ethical taxation, and ethical advertising, to name a few more ubiquitous things than AI. The real answer here is that ethics is hard, and even if you know ethics and can get some agreement, getting humans to be ethical is even harder.

tl;dr: Ethical AI is hard. Ethical NI is harder!

Doesn't "Coherent Extrapolated Volition" actually boil down to, "Hey AI, don't do like us. Do as we should do!" (If we were better, more noble beings.)


And if we can't have "Coherent Extrapolated Volition" aren't the only outcomes the subsumption of Homo sapiens into a different kind of intelligence and/or its extinction?

Come to think of it, "Coherent Extrapolated Volition" just sounds like the same sort of wishful thinking which religions hook into.

Voltaire in the 18th century: "If God did not exist, it would be necessary to invent him."

Tech in 2019: "We need to be the first to implement the god-like AIs, because the first-mover advantage will yield tremendous profits!"

>Every day I think we get closer and closer to needing to implement technocrats in governments.

If one is interested in an opposing point of view on this, one could look up Philip Hamburger and the term "administrative state".

I think it is especially important for AI to be used ethically, because it has an unrivaled capacity to be used unethically.

> [AI] has an unrivaled capacity to be used unethically

Does it?

At a personal level, I'm much more scared of a bad person with a gun and access to me than a bad person with an AI and access to me. It's more important than the ethical use of a teacup, but certainly rivaled and beaten at a micro level.

At a macro level I think your argument has more ground to stand on when it comes to things like the improved efficiency of mass surveillance, but how much of that is the AI part rather than the mass surveillance part? What immoralities are enabled or accelerated that could not be carried out without AI? In the end I'm just as concerned about mass surveillance as I was before. In a practical sense, even if we somehow passed laws limiting AI, do I really think that a government performing mass surveillance in the shadows is going to follow those laws, when AI is a concept anyone (with the know-how) can implement? I don't think AI's power/capacity is as high as people think; it just tends to fit well with some very bad macro-level ethical actions and enhance them.

I'm not sure I'm convinced either way yet, but the claim of "unrivaled" made me immediately skeptical, at least considering the AI of this decade. In a hundred years you're probably right about the unrivaled part, though atom bombs and the like are probably close seconds.

AI is about scale. You can achieve similarly terrible results by using, say, a Mechanical Turk system with additional error checks. That scales too.

The other thing AI is about is cost, and perhaps secrecy. A Turk system requires many people who need to be fed, which puts a pretty high floor on price. It also requires disseminating data to many agents, each of whom could leak it.

Being more accessible means more Bad Guys (unethical actors) using it.

Past a certain point I don't think it will be used. It will use.

If you're interested in reading more about the subject of AI and Ethics or AI and Fairness, I kindly suggest this reading list I've been working on for a while: https://github.com/chobeat/awesome-critical-tech-reading-lis...

There are also other topics but it's intended as a primer for engineers interested in understanding the social problems created by new technologies.

I'm kicking myself that I never thought to use GitHub to put together a reading list. Great idea.

Yeah, great idea. The thing that's missing, though, is that you can't easily leave a comment on each of the listed items. That makes it a sort of one-way piece of information. Perhaps a wiki would be better (?)

True, but you have issues and PRs. The format certainly isn't intended for casual discussion, but at the same time there's plenty of space to discuss improvements and raise criticism.

Ethics is subjective. If ethics could be codified I think we'd have one ruleset everyone agrees on.

The article mentions trying to have a human enforce ethics, but then that person has to be an example of ethical excellence, something you can't test for. And in the end, as they say, every man has his price. So no, I don't think "ethical AI" is possible. I think "ruthlessly efficient AI" is the goal. Maybe it should only be used in situations where ethics don't matter.

Ruthlessly efficient AI is the next "big stick". Nuclear weapons brought us relatively close to world peace (after burning up hundreds of thousands of people and scaring the whole world shitless); maybe ruthless AI can do the same. I'm not particularly eager to see what an AI arms race and cold war look like, though.

Things are decent enough right now, it could be a lot worse; do the upsides of creating ruthless AI justify the risks? Is it an eventuality anyway? At this stage could we conceivably prevent it?

> If ethics could be codified I think we'd have one ruleset everyone agrees on.

People deeply want to believe this is true. I find it false because I see free will as an illusion, while the majority of people think free will exists. Agreement on ethics cannot exist with this conflict, because it inherently affects morality.

Well, if no one has free will it doesn't really matter what you think, does it? Any agreement or disagreement must also be an illusion, because no one involved can do otherwise. In fact, "I find it false" is meaningless, since you have no alternative.

No. What I see people describing as ethical is not correct when people have no choice. It's also not meaningless to live with knowing free will is an illusion. Hard to really reply to such a dismissive assertion.

Exactly. The question should be "Is a (universally) ethical person even possible?".

And of course, it entirely depends on whose culture and historical norms you're looking at and who is doing the judging. So really, no.

Yet murder is a punishable offense and people seem just fine with that.

The law is an accepted and used set of actions that are considered bad, with associated repercussions. It's a best effort and it _works_. Why would AI have to reinvent the wheel here?

Murder is the taking of a life in a manner/context which is considered bad by definition.

If you consider there are many situations where the taking of a life is considered politically or economically expedient, ethically justified, or a social or cultural necessity, the ethics of life-taking become a lot less straightforward.

One of the greatest potential benefits of AI is that having to define our ethics explicitly, instead of wrapping them up in layers of propaganda, manipulation, and self-serving lies, has the potential to transform society.

It's currently a very remote potential, but it does exist.

In Holland we have a law which states that you can drink a maximum amount of alcohol before driving. This amount is based on research of how the alcohol concentration affects a human's ability to do things like driving. This is a sensible approach.

How could AI ever reach such a specific conclusion?

Using statistics and efficiency measures alone is enough to derive this conclusion. (Albeit it might conclude that almost any BAC is bad, sensor error rates notwithstanding.) Utilitarian rather than deontological, but the result is the same.
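As a toy illustration of that purely statistical derivation: simulate driving outcomes at various BAC levels and pick the highest BAC whose observed crash rate stays under a tolerated risk threshold. Everything here is invented for illustration (the crash-probability model, the sample sizes, the "no more than double the sober baseline" tolerance); it's a sketch of the approach, not a real traffic-safety method.

```python
import math
import random

random.seed(0)

# Hypothetical ground truth: crash probability rises roughly
# exponentially with BAC. The "regulator" never sees this function,
# only the simulated outcomes below.
def true_crash_prob(bac):
    return min(1.0, 0.01 * math.exp(25 * bac))

# Simulate observed driving outcomes at random BAC levels.
data = []
for _ in range(100_000):
    bac = random.uniform(0.0, 0.15)
    crashed = random.random() < true_crash_prob(bac)
    data.append((bac, crashed))

# Empirical crash rate per BAC bucket (rounded to 0.01).
buckets = {}
for bac, crashed in data:
    b = round(bac, 2)
    n, k = buckets.get(b, (0, 0))
    buckets[b] = (n + 1, k + crashed)

# Pick the highest BAC whose observed risk is at most double the
# sober baseline (an arbitrary tolerance, chosen for illustration).
baseline = buckets[0.0][1] / buckets[0.0][0]
limit = max((b for b, (n, k) in buckets.items()
             if n > 100 and k / n <= 2 * baseline), default=0.0)
print(f"derived limit: {limit:.2f} (baseline crash rate {baseline:.3f})")
```

The tolerance line ("at most double the baseline") is exactly where the ethics sneaks back in: the statistics can map BAC to risk, but someone still has to decide how much risk is acceptable.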

Well, most moral philosophies usually hold some very basic things as wrong, murder and stealing being among them.

However, the issue is that it's very culturally dependent on WHY it's seen as wrong, and it's the reason behind WHY something is seen as wrong that the more complex elements of ethics are built on.

A utilitarian vs a deontologist vs a virtue theorist vs a follower of almost any religion vs a supporter of god knows how many other moral theories I don't know about would all have a different answer to that, and different answers to which cases killing or stealing or anything else might potentially be justified/right.

And you can see that reflected in legal systems of different countries right now. Some countries have self defence as basically 'anything is permissible if they're intruding on your property' where some have it so you need to use reasonable force. Some countries consider it fair to let the government kill criminals via the death penalty, and others don't.

The difficulty isn't defining whether murder or theft or what not in its most blatant forms is wrong, since most ethical frameworks will state it is. It's trying to define the many, many edge cases that people don't agree on, and which many countries/states/societies take different approaches to.

This, however, seems more a problem in philosophy than in government. AI can cover the laws of the country in which it runs. The laws define the edge cases far more thoroughly than any philosophical school of thought. It raises questions like:

If we can derive morality from first principles, why hasn't someone applied this to the legal system yet?

If we can't derive morality from first principles, why would we need to invent that for world changing technology to happen?

Wouldn't it be a lot more in line with the population's sense of morality to train the AI based on our current laws than on some generalizing philosophical view?

Well, it depends how the AI operates, how it's distributed, etc. Any AI that affects people and populations in multiple regions will need to adapt to the laws in those different regions, which could mean a lot of extra complexity, even if you limited it just to the US.

You can't assume the same AI setup that'd work fine in the US would work in the UK or vice versa, because the laws aren't consistent about many elements. Either way, it's still a complicated thing to figure out.

Great, murder is one tiny issue in the huge ethical problem space, and probably the simplest as dead and not dead is pretty binary. Now define suffering algorithmically.

Well, there are laws about hurting people.

No it's not. There are cases where taking a human life is considered acceptable and ethical. Killing an abductor who threatens to kill many people, after negotiations have failed, is considered ethical and is not punishable by law.

Now, if you assume that murder is the case where an unlawful act happened, then you've stopped querying for ethics, and your question becomes what is lawful, which is again extremely ill-defined.

Which is also handled by the legal system already. Instead of going back to the drawing board, people should build on the huge body of work on morality defined as rules, which is the legal system.

> Ethics is subjective

I'm really glad you raised this point.

I don't know if there's a uniquely good ethic or not. But for sure there's no consensus about what constitutes ethical behavior.

It drives me crazy when companies make preachy policy statements that beg this question.

What's even worse, ethics vary from culture to culture and tend to change over time.

Also important is that both Government and Industry are often intentionally unethical such that neither would actually want ethical AI to be the standard.

* You can't ethically undermine a democratically elected government for a dictator who will better fit your economic interests.

* You can't ethically skirt health and privacy laws just enough that you come out ahead regardless of legal fees.

* You can't ethically discriminate hiring efforts based on race and gender.

* You can't ethically discriminate against employees with families.

First we’d have to make an ethical society...

Wow, I was just looking through the Glassdoor reviews of the company in the article, Clarifai.


Seems like an awful place to work. They have a role I was about to apply to in the bay area. Looks like I'll avoid this place.

Wow this one is good https://www.glassdoor.com/Reviews/Employee-Review-Clarifai-R...

Fun fact: they hired an ex-Trump Organization assistant to be his new assistant: https://www.linkedin.com/in/sharon-benita-23703449

On this, I usually think that complexity in general becomes dangerous once our ability to apply our own value systems degrades past a certain point. In the 2008 financial crisis we had a few financial products, like subprime loans, where the reasoning and economics behind them were complicated enough that most people couldn't effectively regulate them or understand the implications of their use.

I suppose when enough things go wrong with a complex system, it's like having runtime errors pop up that you can debug against to get a better understanding of what you created. But that first execution is dangerous enough that you wouldn't want a sufficiently complex system doing anything important. Then again, we might not think something is complex enough until we start running into the "unknown unknowns" of real-world usage.

Maybe a somewhat subjective qualifier for what's "complex" could be developed and then the ethical question is "is due diligence being taken to reduce the risks inherent in this complex system?"

Universally ethical A.I.? No. But then neither are humans. So I reject the idea that the unattainability of always-ethical A.I. disqualifies A.I. from being useful.

Government arguably should be an expedient (this is Thoreau's argument, anyway), and it's possible A.I. could be at least a more consistent expedient, one that also commits to ratting itself out any time its ethical programming is substantially altered. That isn't at all how humans behave; they can't be programmed this way.

Merely having A.I. that concisely points out the competing ethical positions on an issue would be an improvement over word-salad propaganda; propaganda is a significant impediment to both ethical and critical thinking, so an A.I. that scored statements on a propaganda scale would itself be useful.


(Throwaway in case this is crank science.)

Neural nets are inspired by human neural structures. Training is in some ways similar to human learning. Genetic algorithms especially in simulated worlds are directly inspired by the biological evolution in the real world.

Is there any chance that the resulting algorithms themselves (the AI) will have any ethical rights or significance, especially once they exceed the human brain in complexity?

The reason I ask is that it is an ethical question about AI (and therefore on topic), but I don't know how to think about it and hope others here might share some insight.

Really, that's more about the effect on us and on the systems of society than about the intrinsic nature or rights of the AI. Having things that seem human, or capable of human insight, and yet making them slaves may demean us all.

We humans have a multitude of competing ethics in the form of religions; we can't even agree on which one is the right one. Programming ethics into AI runs back into that chicken-and-egg problem, requiring us to select which philosophical system is the right one. We can view competing religions as optimization algorithms where the best one for a given state of the world wins, pressuring the others to evolve or die off. Would we need competing, multi-GAN-style AI "religions", constantly at each other's throats, as well?

There is nothing inherently ethical about any religion. They have some ethical concepts in them that might be universal, like "don't kill", but these are easily overridden by "don't kill fellow believers". Similarly, there is nothing ethical about any society other than within the small, confined system of a certain ideology, which validates its own ethics.

In the end, ethics is just a game we play, and we can teach computers to play similar games with a certain set of rules. That does not in any way mean that a computer will be ethical, only that it can be taught to observe the norms of the ethical games played by the society it serves, using a rule-based system applied to the final actions it has computed it should carry out.

There is a work of fiction (the name of which I unfortunately do not recall) that tells tales of war machines, hard-coded with the rules and conventions of war, coming into conflict with their own angry and frustrated operators.

I love the concept of AI that could be not just super-intelligent, but also/instead super-moral. I hope we can find the inspiration to bring some of that concept into reality.

Ethical yes, whether those ethics are in alignment with humans is another question.

(Criminal|Unethical|Bad) for one is (hero|ethical|good) for another.

First things first. What do we mean by "ethical"?

Is an ethical hammer even possible?

Ah, the old "it's how you use the tool" chestnut.

Only some tools are inherently abusable -- something that has been expressed in lots of forms, from "the medium is the message" critique of TV/internet/etc, to the gun control debate ("guns don't kill people, people kill people" etc).

Could you provide some examples of tools which are not inherently abusable? I can't think of any.

You can't very much abuse a shoe to cause harm.

Or a fax machine.

Or broccoli.

Or a pencil.

(Yeah, you could stab someone in the eye with the latter, but there's nothing inherently violent about a pencil, any more than 2000 other things you could do that with, and most people would never do anything like that despite having used one.)

Other things however, are either precisely made for harm and profit (e.g. a grenade, or "Hot Pockets"), or lend themselves very well to it (e.g. dynamite, internet cookies, etc.).

I think the point is more like: consider how often a gun is abused vs. a blanket. Surely it's not a stretch to conclude that the former enables unethical acts far more than the latter.

That framing only captures half of the truth though. The quality a gun has that a blanket does not is to enable adversarial actions. So yes, the technology of the gun does enable unethical actions like point-and-click murder. But it also enables individuals and smaller groups to defend themselves against aggressors, which most people would consider ethical.

I'm not disagreeing with the idea that "artifacts have politics" [0], but rather pointing out that focusing solely on the negative actions enabled by guns is fallacious. In fact, the development of firearms is historically seen as having democratized power. It's just that most of what we consider the positive effects have been incorporated into the framework of our society, while we continue trying to diminish the negative ones.

Now having said that, so far "AI" (really ML) is primarily being wielded by large centralized entities against individuals. A less-equipped defending army does not particularly benefit from having a few drones with facial recognition, as their human soldiers are under attack regardless. Whereas it does enable a much larger conquering army to further insulate itself from the results of war.

Even relatively benign uses like voice recognition or recommendations have become excuses to retain large surveillance datasets on our entire society, and to shape the development so results favor large centralizing entities rather than their supposed users - eg the common engagement metric of "human time wasted".

But are the ethical issues the result of "AI" technology itself, or better attributed to the larger system it's deployed upon? On the consumer side, I would say that the fundamental ethical problem is that software is not under its users' control. If the majority of training sets were accumulated voluntarily, with the goal of developing applications that actually helped users, then I think the ethical landscape would look quite different.

[0] If you don't recognize the reference, search it.

Firearms didn't democratize power in the Soviet states, revolutionary China, or Africa.

I think you are missing the point: "is AI ethical?" isn't the same question as "can AI be ethical?".

AI is a tool, and while you can say some tools are easier to abuse than others, that has no bearing on their ethics, since tools have no ethics to begin with.

And the leverage or force multiplication you get from a tool is directly tied to how easily it can be abused but also to how useful it is to you in general.

Is a gun less ethical than a syringe because it can be used for violence? How about syringes fueling the drug epidemic?

Are nukes less ethical than conventional weapons?

Were the looms that the Luddites sought to destroy unethical because they put people out of work?

Should we consider combines and modern agriculture unethical because they drastically changed the balance of power of various nations?

Of course not, as all of these arguments are silly and flawed once you actually begin to deconstruct them.

This has nothing to do with gun control. I have no problem with controlling guns, because I don't want to get shot, and if someone does break into my house I'd prefer they not be armed.

I'd also prefer the police not be armed at all times, because I think it's just as important not to bring a gun to a knife fight as the other way around, if you don't want to escalate things.

I have no problem of regulating the application of AI when necessary.

However, that does not mean I think it has anything to do with the AI's ethics rather than with ours.

Some uses of AI could be deemed unethical by society for the same reason society deemed harvesting the organs of one random person to save five people unethical: people wouldn't be able to function knowing they might be harvested at any moment.

So if we bring this back to AI: it's not that I would think, say, an AI-run mass surveillance system is unethical because I'm unsure whether the AI can make ethical decisions. I would consider it unethical if society couldn't function well under it.

Though I agree in principle, I think tools can’t be considered in isolation, but rather in the context of iterated game theory.

Is Instagram inherently evil? Obviously not; but through the lens of pre-existing human social dynamics, including status competition, mating drives, social signaling, etc, the capabilities introduced by that particular tool almost inevitably lead to the perverse incentives of lifestyle facades, “influencers”, Fyre Festival, etc. Do we have to do these things? Of course not. But it’s naive to not think about the “realpolitik” scenarios, and what could be done to mitigate them, rather than assuming perfectly ethical and rational actors.

What makes AI even more complicated than previous technological changes to our game landscape, is the potential not for new tools, but for new players: at best, these artificial players are proxies for each of our interests (though see the side effects of “flash crashes” from high-frequency trading bots); at worst, we may have to contend with vastly intelligent new players with emergent interests of their own, which we can’t necessarily predict. While I don’t think it’s inconceivable that A.I. will always be subject to human understanding and control, we’re in such new territory that that’s fundamentally an assumption (see the arguments from Bostrom, etc).

AI isn't a player; it's as dumb as a rock. The players are still the humans who build, maintain, operate, and can switch it off.

AI today is that dumb, but I think the parent comment is discussing a theoretical AGI nearing or possessing consciousness. A bit irrelevant for today's moral conversation on AI but a very interesting one down the line if we ever do get to that point.

Let’s have this talk in 3 centuries then.

>AI isn't a player; it's as dumb as a rock.

That's a religious view of AI and humans, where humans have special qualities (like soul and consciousness) perhaps evolved "magically" or "god given", that a machine can't have.

There are no serious arguments why an AI can't have perfect human-like consciousness, feelings and everything.

Inversely, there are no serious arguments why humans are special in any way, and their brain mappings can't be replicated by technical AI (eg. artificial neurons) or emulated by software.

Humans aren't special, but what we have as AI today, and for the foreseeable future, is capable of making about as many decisions as your microwave.

We already have AI, in self-driving cars, that can make the decision whether or not to run over a kid crossing the street.

The ethical/conscious part is a matter of degree, not necessarily of quality.

No we don't; the "AI" didn't make any decisions, it only follows its programming.

It also doesn't know what a kid is, nor does it make any decisions based on that information even if it did. It is tasked with avoiding collisions; it does not make ethical decisions any more than my microwave makes an ethical decision not to burn my food when I use a fixed program to defrost chicken.

> Is a gun less ethical than a syringe because it can be used for violence?

A gun's purpose is to shoot bullets. A syringe has the purpose of delivering fluid into the human body. It's not hard to evaluate the ethics of the most common actions for each of those. While the tool itself technically has no ethics, owning or using the tool tends to have ethical implications. In a practical world, it is very fair to evaluate those implications, at least at an estimation level.

Don't get me wrong, there's a ton of complexity when it comes to tools and ethics, and no easy answers. With the Luddite example it brings in questions of work ethics and societal structure, so the loom doesn't exist in a vacuum. But not all tools bring the entirety of ethics into consideration.

There are ethical estimations of tools (think of them as potential ethical energy) that we can try to make. Syringe > gun seems like one we can make. Gun vs. nuke is a bit harder, due to the consideration of use vs. threat of use. The loom alone is pretty neutral and far more reflective of direct context, whereas the context of a gun or syringe generalizes more easily.

Of course, these are all my calculations, and you and others can have different ones, but I think if we really decided to spend 8 hours nailing down this discussion, you would indeed see a loose tiering/ranking of tools by "potential ethical energy". I wouldn't be surprised to see other measures/categories emerge either, such as severity, risk, and commonality of use case.

The question becomes this: what is the "potential ethical energy" of AI. IMO it's very close to the loom in that the context matters the most, but there's also a severity of effect factor in play that makes it more dangerous. Still, I would say AI has an overall positive potential ethical energy.

There is no such thing as an ethical potential energy; it's not even a useful thought experiment.

> There is no such thing as an ethical potential energy

Yes there is, I just made it up on the spot and defined it! If you mean that it is not a commonly discussed philosophical term then yes, you'd be correct.

> It’s not even a useful thought experiment

Now that's a discussion to be had, but you gave no evidence for its lack of use, while I used it to describe tools that humans use and how a "potential ethical energy" calculation can correlate to the practical effects of a tool, and perhaps how we should view/regulate/restrict said items in the context of humans. You could easily derive firearm laws from such a base if you chose to, for example. Whether that derivation is valid depends on whether the concept has the proper grounding in relation to ethics, which is again a discussion to be had.

This approach presumes that the ethical responsibility falls on the user of the tool, including the responsibility to see what the effects of use are. Where this starts to break down is autonomous systems - when the system itself makes decisions that could be considered to be ethical, or have significant social effects. Autonomous weapons are the usual example, from landmines to drones, but this also applies to systems making economic decisions.

A gun is not something that can be held ethically responsible itself because it does not make decisions. An autonomous gun turret would be.

Because they do, as these systems are designed and put into operation by humans, and come with an off switch.

How do you not see the contradiction in writing that no tools are inherently unethical, then saying that one tool is inherently unethical because you are scared of it in particular?

I'm not scared of any of it. I don't think guns or AI are either ethical or unethical; I don't anthropomorphize objects.

OK, so you wouldn't be scared if we gave guns to everybody in your city except you.

They are mere guns; they are not problematic in themselves, it's how people use them.

No reason to think giving everybody in your city a gun would be any different in its outcome to you (and the city's wellbeing) than giving everybody a banana...


Sugar, pollution and cars are much more likely to kill you.

>AI is a tool, and while you can say some tools are easier to abuse than others, that has no bearing on their ethics, since tools have no ethics to begin with.

For one, that argument only holds for tools without a conscience (e.g. a hammer or a headphone). AI, though, is precisely the kind of tool that can be capable of having ethics.

Second, even dumb tools without ethics of their own, can be ethically problematic (e.g. a bomb).

Tools cannot have ethics, since tools do not define ethics; society does, and ethics aren't universal or static.

We are barely capable of defining and arguing ethics as a society; claiming that a microwave would be able to make ethical decisions is laughable.

P.S. A bomb is no more ethically problematic than a bottle of Coke.

>Tools cannot have ethics, since tools do not define ethics; society does, and ethics aren't universal or static.

Which is still beside the point.

As I already wrote, an advanced AI is a tool that can precisely define ethics for itself or adopt ones.

Plus, as I also already wrote, even if a tool has no ethics, it can be ethically problematic (to society), so that is something we should discuss too.

>We are barely capable of defining and arguing ethics as a society; claiming that a microwave would be able to make ethical decisions is laughable.

Which is again irrelevant, as an AI is not a microwave -- and future AI even less so. We are capable of seeing/discussing even things that are not immediately in front of us...

Does a hammer make decisions?

Does an AI (as we have them now)? If I give someone a paper and say "wait a day then follow these instructions", I'm making the decisions, not them, even if the paper has branching logic like "if it's morning, call Alice but if it's afternoon call Bob."
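To make the analogy concrete (this is just a toy sketch, not a claim about any real system): the paper's "branching logic" is a pre-committed decision table, so whoever wrote it made the decisions, and the executor is just dereferencing them:

```python
from datetime import datetime

def follow_instructions(now: datetime) -> str:
    """Pre-written branching logic from the paper.
    The author of these rules made the decision;
    whoever runs them is only looking it up."""
    if now.hour < 12:
        return "call Alice"
    return "call Bob"
```

The branch is taken at execution time, but every possible outcome was fixed when the instructions were written, which is the sense in which today's AI "decides" nothing.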

Isn't that the point? Ethics is in the execution: if I make "ethical" instructions and someone follows them, they're operating within the constraints of said ethics.

The point is that agents that operate ethically need not be sentient; they just need to play faithfully within our rule sets.

Which will sometimes mean eschewing maxima when doing so violates them. It will sometimes mean losses or ties in zero sum games.

Does a bomb make a decision to explode? What we define and can construct as an AI is no more capable of making a decision than your microwave.

If the hammer is trained to operate autonomously, yes.

My dishwasher is autonomous; should I be concerned about its ethics?

Your dishwasher makes no autonomous decisions. AI takes training data and uses algorithms to make choices. Aware of the protocols of ethics, it can reject those choices. Without ethics, it will aim for global maxima and victory in zero sum games.

So ... no.

Paywall bypass: https://outline.com/VBgvaj

It's time for a Butlerian Jihad

best comment of the day on hn.

Loved Dune, nice reference.

const not_evil = true;

Removes paywall, and reduces page load 91% - from 3.93MB to 349KB, uses zero JavaScript: https://beta.trimread.com/articles/215

How apropos to post that in a thread discussing ethics.

Instead of giving your data to a third party service, just use umatrix. None of the javascript on the article is necessary to read the article: https://0x0.st/zocK.png

No. And if you think yes, you are probably deceiving yourself about your own ethical abilities.

What is unethical about a farmer using AI to figure out the best arrangement of his crops to maximize the food he produces, maintain his land quality, and minimize his environmental impact?

I mean, the short answer is that if all of the constraints and inputs to a problem are known, you don't need an AI at all. There is a guaranteed optimal crop arrangement, and the model to produce it would be based purely on the natural sciences: physics, biochemistry, engineering, etc.

AI is only required when there is some unknown, ambiguous, adversarial, or otherwise non-existent input or constraint. AI (or indeed any intelligence) is only useful in situations where there is "bias" (in the data science sense), inference, preference, and extrapolation being used to make decisions in an unknown space.

And it's precisely in these areas where ethics can be part of the "weights" given to those inferences and preferences.
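As a toy sketch of that idea (the plans, yields, and weight below are entirely made up): an "ethics" term can enter the objective as a penalty, so the optimizer trades raw yield against constraint violations instead of chasing the global maximum:

```python
# Hypothetical plans: (raw_yield, number_of_ethical_violations)
plans = {
    "pack_cubes_in_barn": (100.0, 3),  # highest yield, violates welfare rules
    "keep_in_pasture": (90.0, 0),      # slightly lower yield, no violations
}

ETHICS_WEIGHT = 10.0  # arbitrary; how much each violation costs the score

def score(plan: str) -> float:
    """Yield minus a weighted penalty for ethical violations."""
    raw_yield, violations = plans[plan]
    return raw_yield - ETHICS_WEIGHT * violations

best = max(plans, key=score)
```

With a large enough weight, the lower-yield but violation-free plan wins; with the weight at zero, you get back the pure maximizer the parent comment worries about.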

Would weather patterns and trends not qualify as a need for AI in this context? What about predicting food demand by type for the upcoming year? Those are the big unknown inputs an AI could pretty easily help with here.

Nothing. But what about when said farmer buys Farm AI 2.0 that also optimizes the livestock side of his farm. The AI has determined it's more efficient to pack cubes into a barn than keep them out in the pasture (the cubes contain chickens or pigs). Now the farmer upgrades to Farm AI 3.0 which also optimizes the business side of his farm. And it decides to fire a bunch of farm hands, perhaps replacing them with cheaper robots or aliens. The farmer sips his lemonade, absolved of guilt for the decisions the AI has made.

This has nothing to do with AI. You could say the same about a human agricultural consultant.

A person can absolve their own sense of guilt by saying that they were just following instructions. But the farmer still chose here to use the AI, or to implement its recommendations. A person can even talk themselves out of feeling guilty for a bad choice just by saying "I felt so strongly, I couldn't do anything differently". Some choice was still made by the person; they bear the responsibility even if they don't think they do.
