
Artificial Intelligence and Corporate Social Responsibility - benbreen
https://medium.com/@RoyaPak/artificial-intelligence-and-business-social-responsibility-69d6299b4d9d
======
bem94
I think this is going to be one of the biggest challenges in the field of AI
going forward. How do we make sure that, as an extremely powerful tool, it gets
used responsibly? Also, acknowledging that, like all tools, AI _will_ be
misused or applied in some truly heinous ways (examples across the internet if
you're interested), how should the AI field respond and learn from this? This
article is certainly evidence of people pushing in the right direction, even
if some of the answers are a little woolly.

Ultimately, I think that the best answer is for AI Devs themselves to be
extremely clued up on the potential social impacts of what they are working on
and treat that with the same reverence as staying up to date on the latest
literature. Only then do the kinds of "specific impacts" which the article
brings up become clear enough to act upon and plan around. From there, one can
work on safeguards, due diligence, or whatever else you want to call "making
sure my tool doesn't hurt people".

You need to bridge the gap as much as possible between engineers sitting in a
plush office in SV and the individual whose life is going to be affected by
those engineers (and their AI-imbued tools). Otherwise, it's just too easy to
lose touch with what you & your tools do to the world and the people in it.

~~~
nathanaldensr
Some problems have no solution. Go a bit further: should we even be relying on
AI as a substitute for human interaction? Will AI ever be capable of replacing
an empathetic human? Even if it can, should it? What if I could replace one of
your family members with a perfect simulacrum that was indistinguishable from
the real thing, except you knew it was a simulacrum? Would you accept it?

~~~
pixl97
>What if I could replace one of your family members with a perfect simulacrum
that was indistinguishable from the real thing, except you knew it was a
simulacrum? Would you accept it?

You may not, but your kids' generation, who grew up with it always around,
would.

------
fancyfacebook
GE is currently unraveling because they hid their losses using creative
accounting for 20+ years. Nobody will go to prison for this.

If you're expecting much in the way of social responsibility from a large
corporation I think you're just being naive. People want to further their own
individual careers and little else.

------
aalleavitch
He makes a great point: this is an issue that shouldn't just be on the minds
of leadership, legal teams, and PR. Increasingly, decisions that impact
people's lives in massive ways are made at the level of the engineers and
data scientists who are actually doing the implementation, and they need to be
every bit as aware of the ethical implications of what they're doing as
anyone else (if not more so).

In this day and age a single line of code written by a junior engineer could
drive something that millions of people interact with hundreds of times a day.
This is going to have an impact on those people's lives.

------
Joeboy
So I'm way behind on developments in AI ethics. Are there actually any
promising avenues for mitigating the harmful impacts of AI? I'm possibly
damaged by reading The Guardian, but it seems like the only options are a)
making AI fairer and b) using humans instead. And option a doesn't really work
because people can't agree on what's fair.

~~~
sdenton4
FWIW, humans aren't terribly fair either. You can create a shite review
system, implemented by thousands of inconsistent humans, and end up with
something crappy where it is far harder to point at any single decision and
say 'this is crappy', because the badness of the system itself is masked by
thousands of individual decisions. (Think 'customer service call centers'
here.)

Or you can automate, and end up with a _transparently_ bad decision system
backed by a tweak-able model, which - at least in this moment in history - is
far easier to write outraged front-page guardian articles about.

The sorts of problems that we're deploying machine learning on often don't
have single right answers. We've quickly gone from wonder that these things
work at all to public outrage that they aren't providing perfect answers, but
instead answers which often mirror our own societal shortcomings. The design
choices made by the capitalists promote revenue over well-being, and the
eigenvector of desire often points in the direction of the horrific.

The systems, as they stand today, are like any other tool, amplifying human
ethical choices, for better or worse.

~~~
contextfree
"... which - at least in this moment in history - is far easier to write
outraged front-page guardian articles about."

which could be considered a positive feature!

Sort of reminds me of how you'll occasionally see articles highlighting the
excessive permissions demanded by mobile apps, but they rarely mention that
under the application models still used by PC operating systems (most
notoriously on Windows pre-UWP, but also unsandboxed Mac OS applications, and
on mainstream Linux distributions), most apps run with full user rights and
therefore have more excessive permissions than mobile apps are even able to
ask for.

------
arca_vorago
The issue of AI and ML is only a subset of the many issues at hand, and until
corporations are reeled in on the basic principles of ethics and social
responsibility, there will be no progress on any subset of those issues such
as AI.

------
cryoshon
a post on corporate responsibility that does not mention concepts like
"income", "inequality", "wealth", or even "politics" requires a bit of
rounding out before it's a complete discussion.

the author touches on a few points but tiptoes around the biggest issue:
corporations need to pay into society by design, and AI will be disastrous if
they are not forced to uphold that standard. the same goes for AI's potential
impact on privacy and free speech-- these are major, major issues, but they
are not the only ones.

economic inequality will continue to skyrocket if AI is remotely as
efficiency-improving as everyone hopes; higher inequality tends to erode other
democratic ideals like free speech, human rights, public political
engagement/power, and the rejection of civil encoding of status.

artificial intelligence will make it far easier for corporations to avoid
their fundamental economic responsibility to pay for the society in which they
operate and directly enrich the lives of people within society in multiple
dimensions. this is already happening. this is a problem that needs a
solution.

ideally, AI is used to distribute resources for greater goods rather than
entertaining individual gluttony for money.

we need to start agreeing on the axiom that the purpose of corporate activity
is to increase the standard of living of society, and that increased profit
margins should mean an increased burden of charity rather than merely more
zeroes on a billionaire's wealth.

that is the correct moral vision of corporate responsibility in the light of
artificial intelligence.

getting into the mud with details on human impact assessments doesn't serve
the fundamental truth that AI-- for all the benefits it will bring-- is an
existential threat to the status quo, one that everyone knows will kill that
status quo off over time and replace it with a new and unknown one.

