
Human-Centered AI: Building Trust, Democracy and Human Rights by Design - benbreen
https://medium.com/stanfords-gdpi/human-centered-ai-building-trust-democracy-and-human-rights-by-design-2fc14a0b48af
======
d--b
This is not specific to AI; it applies to capitalism as a whole. AI is just a
tool. It's the value system that needs to be addressed. Data-driven
capitalism usually has one objective: maximize profit.

Take the YouTube recommendation engine, for instance. I am an engineer and I am
really interested in a lot of stuff: physics, art, economics, you name it. But
on YouTube I don't get recommendations for Feynman's lectures on physics, or
guided visits of the Louvre collection. I get shit like "what happens if you
throw a ton of dry ice in a pool", "the sharpest knife made of cardboard", or
"10 unforgettable goals".

This is pushing addictive crap on me, because the objective function of the
recommendation engine literally is: maximize the time people spend watching
stuff. Why? Because of ad revenues!
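
The parent's point about the objective function can be made concrete with a toy sketch. Everything here is made up for illustration (video names, watch-time numbers); real recommenders are vastly more complex, but the failure mode is the same: nothing in the objective distinguishes valuable content from clickbait.

```python
# Toy sketch of an engagement-maximizing recommender: rank candidate
# videos purely by predicted watch time. All names/numbers are illustrative.

def recommend(candidates, predicted_watch_minutes, k=3):
    """Return the k videos expected to keep the user watching longest."""
    ranked = sorted(candidates,
                    key=lambda v: predicted_watch_minutes[v],
                    reverse=True)
    return ranked[:k]

candidates = ["feynman_lecture", "louvre_tour", "dry_ice_pool", "cardboard_knife"]
predicted_watch_minutes = {
    "feynman_lecture": 4.0,   # long-form, most users click away early
    "louvre_tour": 3.5,
    "dry_ice_pool": 9.2,      # short, compulsively clickable
    "cardboard_knife": 8.7,
}

print(recommend(candidates, predicted_watch_minutes))
# The clickbait wins, because nothing in the objective says it shouldn't.
```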

What culture needs to curb is capitalism. AI will follow.

~~~
YouAreGreat
> the objective function ... is: maximize the time people spend watching stuff

Sure, but the real problem is _how well it works._ Humans eagerly eat it up
because their behavioral heuristics aren't adapted and get gamed. Humans want
it and enjoy it and can't get enough, even though it sucks up their time
without tangible benefit.

Reducing the problem to "capitalism" is like saying the only problem humans
have with opioids is that the Sacklers get filthy rich and therefore humanity
would be fine if only everybody could get the stuff for free at a friendly
neighborhood government drug outlet.

~~~
rustyboy
That's not the most outlandish idea though, is it? Plenty of people argue that
if there were more living-wage-paying jobs, better universal healthcare, and
legalized drugs, then the opioid crisis wouldn't be as huge, or might not even
exist as it currently does. That being said, I agree that

> is like saying the _only_ problem

capitalism cannot be the only thing to blame, but separating out the
correlations versus the side effects of capitalism is a huge academic
discussion in its own right.

------
vowelless
RE: the idea of "human centered AI"

I am quite saddened by this view of "human-centered AI". I read the linked
piece by Fei-Fei Li [0], which this talk is essentially based on. The
three goals mentioned seem extremely limited and immature. Additionally, I
think human-centered AI should not be based around goals, but rather around
axioms/laws -- agreed on and debated by humans.

The goals are immature because they are extremely broad and ripe for misuse.
They attempt to describe some end result, which I am not sure is a good
approach to the concept of "human-centered AI". For example:

* How does _enhance human capability, not replace it_ protect humans who don't have access to AI from discrimination by humans who have "enhanced capabilities" thanks to "human-centered AI"? If the answer is "get AI capability to all humans", then isn't it extremely important for the first goal to be "get equal opportunity of access to AI capability to all humans"?

* Fei-Fei Li says _No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered A.I.: ensuring that the development of this technology is guided, at each step, by concern for its effect on humans._ Why should human-centered AI not be a precursor to a post-job world for humans?

* Donahoe paraphrases Fei-Fei Li's first goal: _Goal 1 — making AI more human-like in its intelligence — is essentially a technological task._ Does human-centered AI _need_ to have human like intelligence? For now, I am unconvinced.

YouAreGreat mentioned: _Sure, but the real problem is how well it works.
Humans eagerly eat it up because their behavioral heuristics aren't adapted
and get gamed. Humans want it and enjoy it and can't get enough, even though
it sucks up their time without tangible benefit._

And they are absolutely right. Things like slavery also worked _really well_
for a lot of people for _thousands of years_. Many generated lucrative profits
off of that practice, and it struck at some core human failing in how we looked
at each other. It took deep philosophical works, wars, and strict enforcement
to alleviate those problems (and we still haven't completely 'solved' it, I
guess).

Maximizing an objective function based on some human desire is a horrible
paradigm for human-centered AI, and the three goals listed by Fei-Fei Li don't
seem to address this fundamental issue. As an alternative take (maybe others
can chime in), I think a human-centered AI should be built around the great
philosophical developments of our world. Things like equality of opportunity
should be baked in as an axiom for developing human-centered AI (perhaps
_this_ should be part of the objective function). No discrimination based on
protected classes [1] should be at the foundation of whatever system the
'human-centered AI' constitutes. (These are examples, maybe there are better
core principles).
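
One way to picture the "bake it into the objective function" idea is a penalized objective. The sketch below is entirely hypothetical -- the penalty form, the weight, and the group-outcome numbers are my own illustrative choices, not anything from the talk or from Fei-Fei Li's piece. It just shows that an objective can be made to trade raw engagement against equality of outcomes across groups.

```python
# Hypothetical sketch: instead of maximizing raw engagement, penalize
# policies whose benefits differ across protected groups. The penalty
# form (max-min gap) and the weight lam are illustrative choices.

def fairness_penalized_score(engagement, group_outcomes, lam=1.0):
    """engagement: raw objective value of a policy.
    group_outcomes: benefit the policy delivers to each group.
    Returns engagement minus lam times the max-min gap across groups."""
    gap = max(group_outcomes.values()) - min(group_outcomes.values())
    return engagement - lam * gap

# Policy A: higher engagement, but very unequal benefit across groups.
a = fairness_penalized_score(10.0, {"group_1": 9.0, "group_2": 1.0})
# Policy B: slightly lower engagement, near-equal benefit.
b = fairness_penalized_score(9.0, {"group_1": 5.0, "group_2": 4.5})

print(a, b)  # 2.0 8.5 -- the penalized objective now prefers policy B
```

Of course, picking the penalty and its weight is itself a value judgment, which is exactly why such axioms need to be "agreed on and debated by humans" rather than left to engineers.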

RE: Ethics

Donahoe says:

> A key theme we will emphasize today is that human-centered AI will require
> new thinking about democratic accountability for data-driven machine-based
> governance decisions, as well as richer development of the concepts of
> algorithmic scrutability and interpretability for governance actors.

This is incomprehensible to me. Can someone actually explain what she is
talking about here?

I am glad she says that the third ethics point is their primary focus. But
I don't see what solutions are offered to manage/enforce that point. Is it
left to the benevolence of the "AI engineers"? She paraphrases:

> They use slightly different terminology but all revolve around some
> variation of the concept that AI should incorporate “human values,”
> reinforce “human dignity,” or benefit human beings and humanity. To date,
> most of these initiatives remain at a relatively high level of abstraction,
> so it’s hard to know what they might actually require in practice.

I'm glad there is some discussion going on about this. I'll have to look into
the linked websites at a later date.

> In a parallel way, the roots of today’s human-centered AI movement reach
> back to the Universal Declaration of Human Rights drafted in the aftermath
> of World War II, and to the body of international human rights law developed
> in the 70 years since.

It is well attested that these human rights declarations have a strong Western
bias. I am personally fine with that, but what about other societies that
don't agree with the declarations? There will need to be international
treaties and policies, similar to nuclear policies, that keep everyone on
board. But this can get messy, fast.

--------------------------------------------------------------------------

[0] https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html

[1] Fortunately, this is an issue that is being heavily looked at. A lot of
papers at ML conferences tend to focus on this issue.

~~~
some_account
I'm of the unpopular (but realistic) opinion that people can talk all they
want about how AI should be used, but it's not going to be up to them to
decide.

Corporations and militaries will do what they want (in private if needed).

~~~
royapakzad
At least the Maven project proved they might not be able to do it privately if
tech workers stand against it.

