Human-Centered AI: Building Trust, Democracy and Human Rights by Design (medium.com)
71 points by benbreen 8 days ago | 13 comments
This is not specific to AI; it should apply to capitalism as a whole. AI is just a tool. It's the value system that needs to be addressed. Data-driven capitalism usually has one objective: maximize profit.

Take YouTube's recommendation engine, for instance. I am an engineer and I am really interested in a lot of stuff: physics, arts, economics, you name it. But on YouTube I don't get recommendations for Feynman's lectures on physics or guided visits of the Louvre collection; I get shit like "what happens if you throw a ton of dry ice in a pool" or "the sharpest knife made of cardboard" or "10 unforgettable goals".

This is pushing addictive crap on me, because the objective function of the recommendation engine literally is: maximize the time people spend watching stuff. Why? Because of ad revenues!
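
To make that concrete, here is a minimal sketch of ranking driven by a single watch-time objective. It is purely illustrative; the candidate list, the predicted numbers, and the scoring model are hypothetical, not anything from YouTube's actual system:

    # Hypothetical sketch: rank candidate videos purely by predicted watch time.
    # Nothing here reflects YouTube's real system; it only shows how a single
    # engagement objective shapes what gets recommended.
    candidates = [
        {"title": "Feynman Lectures on Physics", "predicted_watch_minutes": 4.0},
        {"title": "Ton of dry ice in a pool", "predicted_watch_minutes": 9.5},
        {"title": "Sharpest knife made of cardboard", "predicted_watch_minutes": 8.7},
    ]

    # The objective: maximize expected watch time, i.e. sort descending by it.
    ranked = sorted(candidates, key=lambda v: v["predicted_watch_minutes"], reverse=True)
    for video in ranked:
        print(video["title"], video["predicted_watch_minutes"])

With only watch time in the objective, the dry-ice video beats the Feynman lecture every time, no matter what the viewer would say they actually value.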

What culture needs to curb is capitalism. AI will follow.


No, what we need to curb is behavior that is broadly agreed to be unethical, and to delineate basic rights that need to be restated in light of new technologies (like data privacy rights). This holds regardless of any other socioeconomic structures, and it ends up being the fundamental basis of complaints like the one you raise.

> What culture needs to curb is capitalism.

I think this is even more specific: culture needs to curb advertising monetization. Google, FB, YouTube, et al. generally suck because they are rapidly converging with tabloid publishers.


>the sharpest knife made of cardboard

OMG, so I'm not the only one who gets these idiotic "sharpest knife made from cardboard/pasta/tinfoil/rice/wood" videos in their recommendations!


> the objective function ... is: maximize the time people spend watching stuff

Sure, but the real problem is how well it works. Humans eagerly eat it up because their behavioral heuristics aren't adapted and get gamed. Humans want it and enjoy it and can't get enough, even though it sucks up their time without tangible benefit.

Reducing the problem to "capitalism" is like saying the only problem humans have with opioids is that the Sacklers get filthy rich and therefore humanity would be fine if only everybody could get the stuff for free at a friendly neighborhood government drug outlet.


That's not the most outlandish idea though, is it? Plenty of people argue that if there were more living-wage jobs, better universal healthcare, and legalized drugs, then the opioid crisis wouldn't be as huge, or might not even exist as it currently does. That being said, I agree that

> is like saying the only problem

capitalism cannot be the only thing to blame, but separating out the correlations from the side effects of capitalism is a huge academic discussion in its own right.


RE: the idea of "human centered AI"

I am quite saddened by this view of "human-centered AI". I read the linked piece by Fei-Fei Li [0], which this talk is essentially based around. The three goals mentioned seem extremely limited and immature. Additionally, I think human-centered AI should not be built around goals but rather around axioms / laws -- agreed on and debated by humans.

The goals are immature because they are extremely broad and ripe for misuse. They attempt to describe some end result, which I am not sure is the right approach for the concept of "human-centered AI". For example:

* How does "enhance human capability, not replace it" protect humans who don't have access to AI from discrimination by the humans who have "enhanced capabilities" due to "human-centered AI"? If the answer is "get AI capability to all humans", then isn't it extremely important for the first goal to be "get equal opportunity of access to AI capability to all humans"?

* Fei-Fei Li says: "No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered A.I.: ensuring that the development of this technology is guided, at each step, by concern for its effect on humans." Why should human-centered AI not be a precursor to a post-job world for humans?

* Donahoe paraphrases Fei-Fei Li's first goal: "Goal 1 — making AI more human-like in its intelligence — is essentially a technological task." Does human-centered AI need to have human-like intelligence? For now, I am unconvinced.

YouAreGreat mentioned:

> Sure, but the real problem is how well it works. Humans eagerly eat it up because their behavioral heuristics aren't adapted and get gamed. Humans want it and enjoy it and can't get enough, even though it sucks up their time without tangible benefit.

And they are absolutely right. Things like slavery also "worked" really well for a lot of people for thousands of years. Many generated lucrative profits off of that practice, and it struck at some core human failing in how we looked at each other. It took deep philosophical works, wars, and strict enforcement to alleviate those problems (and we still haven't completely 'solved' it, I guess).

Maximizing an objective function based on some human desire is a horrible paradigm for human-centered AI, and the three goals listed by Fei-Fei Li don't seem to address this fundamental issue. As an alternative take (maybe others can chime in), I think human-centered AI should be built around the great philosophical developments of our world. Things like equality of opportunity should be baked in as axioms for developing human-centered AI (perhaps this should be part of the objective function). No discrimination based on protected classes [1] should be at the foundation of whatever system the 'human-centered AI' constitutes. (These are examples; maybe there are better core principles.)
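
To gesture at what "baked in ... as part of the objective function" might look like, here is a hedged sketch: an engagement score with a crude parity penalty subtracted, so that serving (hypothetical) protected groups unevenly lowers the score. The penalty form and the weight are my own illustrative assumptions, not anything proposed in the talk:

    import numpy as np

    # Illustrative only: an engagement objective with a fairness penalty,
    # so disparity across (hypothetical) protected groups reduces the score.
    def objective(engagement_per_group, lam=10.0):
        """Total engagement minus a penalty on the gap between the
        best- and worst-served groups (a crude parity term)."""
        e = np.asarray(engagement_per_group, dtype=float)
        return e.sum() - lam * (e.max() - e.min())

    # A policy that serves groups unevenly can score worse than a fairer
    # one, even when its raw engagement is higher.
    print(objective([9.0, 2.0]))  # high engagement, high disparity -> -59.0
    print(objective([5.0, 5.0]))  # lower engagement, zero disparity -> 10.0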

RE: Ethics

Donahoe says:

> A key theme we will emphasize today is that human-centered AI will require new thinking about democratic accountability for data-driven machine-based governance decisions, as well as richer development of the concepts of algorithmic scrutability and interpretability for governance actors.

This is incomprehensible to me. Can someone actually explain what she is talking about here?

I am glad she says that the third ethics point is their primary focus. But I don't see what solutions are provided to manage / enforce that point. Is it left to the benevolence of the "AI engineers"? She paraphrases:

> They use slightly different terminology but all revolve around some variation of the concept that AI should incorporate "human values," reinforce "human dignity," or benefit human beings and humanity. To date, most of these initiatives remain at a relatively high level of abstraction, so it's hard to know what they might actually require in practice.

I'm glad there is some discussion going on about this. I'll have to look into the linked websites at a later date.

> In a parallel way, the roots of today’s human-centered AI movement reach back to the Universal Declaration of Human Rights drafted in the aftermath of World War II, and to the body of international human rights law developed in the 70 years since.

It is well attested that these human rights declarations have a strong western bias. I am personally fine with that. But what about other societies that don't agree with the declarations? There will need to be international treaties and policies, similar to nuclear policies, that keep everyone on board. But this can get messy, fast.

--------------------------------------------------------------------------

[0] https://www.nytimes.com/2018/03/07/opinion/artificial-intell...

[1] Fortunately, this is an issue that is being heavily looked at. A lot of papers at ML conferences tend to focus on this issue.


> This is incomprehensible to me. Can someone actually explain what she is talking about here?

They want to make an AI's "decision-making" process interpretable and auditable to a wider group than simply the creators - hence "democratic."
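
As a deliberately simple picture of what "interpretable and auditable" could mean, here is a sketch of a linear decision rule that emits an audit record of each feature's contribution to a decision, which a group beyond the creators could inspect. All names and weights are hypothetical:

    import numpy as np

    # Hypothetical audit sketch: a linear decision rule whose per-feature
    # contributions are recorded so outsiders can review why it decided.
    feature_names = ["income", "tenure_years", "prior_defaults"]
    weights = np.array([0.8, 0.5, -2.0])  # illustrative fixed weights
    bias = -1.0

    def decide_and_audit(features):
        contributions = weights * features  # per-feature contribution
        score = float(contributions.sum() + bias)
        record = {name: float(c) for name, c in zip(feature_names, contributions)}
        record.update({"bias": bias, "score": score, "decision": score > 0})
        return record

    print(decide_and_audit(np.array([2.0, 3.0, 1.0])))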

The overall problem here is that there is, and always has been, broad value misalignment between individuals, organizations, institutions and states. Powerful data collection and automation capabilities give an outsized efficiency advantage to groups who have a coherent vision of how they want to shape the world and a platform to do it on.

I doubt it's possible to do this, but if Humanity wants to stay "in the lead" and not build a world of hundreds of independently created General Intelligence systems, then we need to collectively and intentionally decide what we think the whole point of living is, and then direct our systems toward optimizing for that.

Like I said, I don't think that's possible, so my guess is that we'll see a market-based competition between entities that utilize General Intelligence for growth, and the one with the best data collection and influence levers will have the most impact.


> It is well attested that these human rights declarations have a strong western bias. I am personally fine with that. But what about other societies that don't agree with the declarations? There will need to be international treaties and policies, similar to nuclear policies, that keep everyone on board.

I think you are right: it's very hard to have a common understanding of human values and ethics. But regarding "rights" and their universality, I should ask: how is it "well attested that these human rights declarations have a strong western bias"? I agree there are biases, cultural differences, etc. But do we have any other framework that is more universal (and well-defined) than human rights? I myself am from a non-western developing country; I studied EE and also human rights studies. With regard to AI and ethics, in fact, whenever I talk to non-western individuals they emphasize the importance of the human rights framework instead of abstract words such as "ethics", "values", etc.

This might surprise you, but 185 countries have ratified the Convention on the Elimination of All Forms of Discrimination against Women; the exceptions are the US and six other countries, including Sudan and Somalia. Nor has the US ratified the International Covenant on Economic, Social and Cultural Rights. You can find the status for all countries here; you'll be surprised: http://indicators.ohchr.org/

My point is not to speak negatively about the US and its commitment to international human rights treaties but to mention that these treaties are more universal than you think. At a digital rights conference, I spoke with a Tunisian digital rights activist; when I used the phrase "ethics in AI", she corrected me to say "human rights" instead. Because it has a clear framework, it has clear indicators and metrics to measure (to some extent), it's legally binding (depending on the case and with respect to certain actors), and not just anybody (in this case, any company) can create its own definition and interpretation.


The example you provide about the US highlights the issue I am talking about.

I was referring to the objection, at least historically, by Muslim nations (i.e., governments based on Islamic theocracy in some form): https://en.wikipedia.org/wiki/Universal_Declaration_of_Human...

I personally believe that no government should be based on Islamic theocratic values. But even the quickest conversation with legal scholars from those nations will highlight the fact that they believe their system is "universal" (because it encompasses all of humanity in its legal system: Muslims, dhimmis, etc.). There are very strong Quranic reasons for them to believe this.

So it boils down to beliefs and values. Most of the world indeed signed the UN declarations. But let's say a country that is opposed to that concept ends up creating a "human-centric AI". They didn't sign the declaration. How will we enforce the UN declaration on them?


I'm of the unpopular (but realistic) opinion that people can talk all they want about how AI should be used, but it's not going to be up to them to decide.

Corporations and militaries will do what they want (in private if needed).


At least the Maven project proved they might not be able to do it privately if tech workers stand against it.

I, too, share this view.

Consider Fei-Fei Li's opening paragraph:

> Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis.



