
The world’s biggest problems and why they’re not what first comes to mind - BenjaminTodd
https://80000hours.org/career-guide/world-problem
======
tudorw
I'd like to see more effort placed in the space between research and
application; the output of academia seems largely a vast trove of unread and
unimplemented ideas. When research reveals something that is of value to us
all, yet not exploitable in a capitalist system, and there is no financial
gain from exploiting it, who else will look to benefit from it? Given the
large amount spent on research, it seems we would benefit from additional
spending on disseminating the results to those who can make use of the
information to have real-world impact.

As far as AI risks go, I am more concerned about the risk of human error:
people believing they understand things well enough to deliver systems that
can really mimic intelligence, when in reality I've seen nothing intelligent
in anything that claims AI in it. High-speed weighted pattern recognition,
yes; intelligence, no.

~~~
xg15
> _When the research reveals something that is of value to us all, yet not
> exploitable in a capitalist system, if there is no financial gain from
> exploiting research then who else will look to benefit from it._

Moreover, if there _is_ financial gain, the incentives in the current system
are to grab an idea and keep the details of its realisation under tight wraps
as far as possible: Further development is done outside of public academic
circles and results are protected by trade secrets or patents.

Yes, the results of the research will eventually benefit society - when
offered as a commercial service or product - but only in an opaque, "black
box" kind of way as the details are not for public discussion. Even that offer
is primarily driven by the desire to make money and consolidate power, and
only secondarily to advance society.

~~~
tudorw
I found anecdotal evidence of this when researching nutritional medicine. As I
followed the path of who researched what, most academics' careers had switched
track: they had either started a nutritional medicine company practising the
outcome of their research, whether for profit or protection, or had joined
ranks with a traditional pharmaceutical company to work on their own product.
Regardless of the path, no further work was published to the public.

------
bwindels
I can't find a reference to it, but a quote comes to mind when I see the list:
"Worrying about A.I. taking over in a time of climate change is like standing
on the tracks with an oncoming train, and worrying about lightning hitting
you". Anyone remember who said it?

~~~
henryaj
One of the principles of effective altruism is finding causes which are
important, tractable, and _neglected_. Climate change is definitely not
neglected - you could certainly make the case that we ought to be doing more,
but not that it's ignored.

Research into x-risks from AI, though, remains an extremely niche field - its
total funding is probably several orders of magnitude less than the amount
spent on climate change. 80,000 Hours care about effectiveness on the margin,
which is why their list of priorities looks the way it does.
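The importance/tractability/neglectedness framing above can be sketched as a toy calculation. The causes, scores, and multiplicative scoring rule below are invented for illustration, not 80,000 Hours' actual ratings or method:

```python
# Toy sketch of the importance/tractability/neglectedness (ITN) framework:
# combine the three factors multiplicatively, so a huge but crowded cause
# can rank below a smaller, neglected one. All scores are made up.

def itn_score(importance, tractability, neglectedness):
    """Higher is better on each 1-10 axis; the product is the cause's score."""
    return importance * tractability * neglectedness

causes = {
    "climate change": itn_score(importance=9, tractability=5, neglectedness=2),
    "AI x-risk":      itn_score(importance=8, tractability=3, neglectedness=9),
    "malaria":        itn_score(importance=7, tractability=8, neglectedness=4),
}

# Rank causes by combined score, highest first.
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Note how the low neglectedness score drags climate change down despite its high importance, which is the shape of the argument being made in the comment above.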

~~~
xapata
Climate change is still largely ignored by a majority, or at least a
significant fraction, of government officials, who are the ones who must take
action.

~~~
BrandonMarc
Which makes it a much tougher problem, therefore harder to have as much impact
as on a truly neglected problem.

------
sparkzilla
>FTA: Potentially promising problem areas we haven’t yet rated

The Copenhagen Consensus is an attempt by economists to rate the cost/benefit
of various schemes to improve life for humans globally. It was prompted by
the apparent waste of governments committing to billions, if not trillions of
dollars towards climate change mitigation efforts, which may or may not
provide results. All while people, especially children, are dying from
problems that have lower-cost solutions. The question posed is: If you had
$75bn for worthwhile causes, where should you start?

1\. Bundled micronutrient interventions to fight hunger and improve education

2\. Expanding the Subsidy for Malaria Combination Treatment

3\. Expanded Childhood Immunization Coverage

4\. Deworming of Schoolchildren, to improve educational and health outcomes

5\. Expanding Tuberculosis Treatment

6\. R&D to Increase Yield Enhancements, to decrease hunger, fight biodiversity
destruction, and lessen the effects of climate change

7\. Investing in Effective Early Warning Systems to protect populations
against natural disaster

8\. Strengthening Surgical Capacity

9\. Hepatitis B Immunization

10\. Using Low‐Cost Drugs in the case of Acute Heart Attacks in poorer nations
(these are already available in developed countries)

Note: No AI.

[https://en.wikipedia.org/wiki/Copenhagen_Consensus](https://en.wikipedia.org/wiki/Copenhagen_Consensus)
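The Copenhagen Consensus style of ranking can be sketched as a greedy benefit-per-dollar allocation under a fixed budget. The intervention names echo the list above, but the costs and benefit ratios are made up for illustration, not the Consensus's actual figures:

```python
# Toy cost/benefit prioritization in the Copenhagen Consensus spirit:
# given a fixed budget, fund interventions in descending order of
# estimated benefit per dollar. All numbers below are invented.

interventions = [
    # (name, cost in $bn, estimated benefit per dollar spent)
    ("micronutrients",  3, 30),
    ("malaria subsidy", 5, 20),
    ("immunization",    8, 15),
    ("deworming",       2, 25),
]

def allocate(budget, items):
    """Greedily fund the highest benefit/cost items that fit the budget."""
    funded = []
    for name, cost, ratio in sorted(items, key=lambda x: -x[2]):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

print(allocate(10, interventions))
# With a $10bn budget, the cheap high-ratio items are funded first and
# the expensive immunization program misses the cut.
```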

~~~
bkmartin
Why not clean water for everyone? $75bn would all but solve the problem. This
would dramatically decrease the number of sick and dying children, free up
hours a day for things like schooling and other work opportunities, and help
more people farm their own food. We have all the technology; it just has to be
deployed. We have made huge progress over the last 10-15 years... we just need
a big push to the finish.

~~~
BenjaminTodd
More info on the pros and cons of water:

[http://www.copenhagenconsensus.com/publication/post-2015-con...](http://www.copenhagenconsensus.com/publication/post-2015-consensus-water-and-sanitation-assessment-hutton)

[http://blog.givewell.org/2016/05/03/reservations-water-quali...](http://blog.givewell.org/2016/05/03/reservations-water-quality-interventions/)

------
averagewall
The funny thing about trying to stop AI doing bad things is that we are barely
able to stop natural intelligence doing bad things. We've pretty much worked
out how to do stable governments and how to fight wars that kill fewer people.
But that's only in the past half century. Maybe it'll turn out that we humans
go back to killing each other as mercilessly as we have for most of the rest
of our history. Intelligent humans have been able to persuade other humans to
cooperate in large scale killings. How are we going to stop super-intelligent
AGI doing the same if we can't even stop less intelligent people?

~~~
tim333
AIs may be easier to program than people.

------
RichardHeart
Shortcut to the list and unconsidered topics:
[https://80000hours.org/articles/cause-selection/](https://80000hours.org/articles/cause-selection/)

~~~
falsedan
Shortcut to the top-rated 'issues':

> _Risks from artificial intelligence_

> _Promoting effective altruism_

No mention of poverty, health, education, war.

~~~
henryaj
_> No mention of poverty, health, education, war._

"Developing world health" is number 7 on the list.

Also - 80,000 Hours aren't saying those causes should be ignored, but that
they're currently not neglected - plenty of people are already working on
them. The list isn't of 'the best causes', it's 'what causes we think are the
most important _at the margin_ '.

~~~
falsedan
Thanks for explaining 80000h to me.

------
pascalxus
You don't need to worry about AI taking over. AI will become a slave to
capitalism, just like the rest of us.

Can you imagine the billions of dollars of research investment such a company
will require to build such an AI? It'll be a company with an enormous
valuation and huge revenue pressures to fulfill. With such enormous
economies of scale, there will be entire divisions implemented to watch over
and monitor every aspect of operations. Just like Google optimizes every last
byte on their homepage, every last algorithm, every last thought the AI has,
will be dissected, monitored, quantified and analyzed. 1000-year simulations
will be run to ensure that not a single bot is misplacing any of its
attention, every loop counted. If a single cup of coffee doesn't get delivered
on time, it will be corrected.

A much bigger problem will be - what to do when the richest .00001% are making
90% of the world's income. This is the problem we should be focused on.

------
dade_
A robot that performs basic medical services so that our governments kill
fewer doctors when intentionally bombing hospitals. Or wait, we could also
stop bombing hospitals.

[https://en.wikipedia.org/wiki/Kunduz_hospital_airstrike](https://en.wikipedia.org/wiki/Kunduz_hospital_airstrike)

~~~
melling
Hey, another snarky valueless comment about war. Yes, hopefully someday we'll
have world peace. It was horrible that 30 people died. War is ugly. However,
you are distracting from helping to discuss some real issues that are solvable
today.

One example is that 2700 people will die from malaria today. Maybe we can save
a million lives a year if we'd all work together on this?

~~~
sndwch
Your argument is that we focus on issues that are solvable today. I don't
think that refraining from instructing an AC-130 operator to fire upon a
hospital operated by Doctors without Borders is a change that requires more
than a day of thought. Doing reconnaissance and reviewing intel before
striking a civilian target probably doesn't take the world's most advanced
fighting force more than a few hours.

~~~
melling
There are already rules against it. Please read the Geneva Convention.

During war there are all kinds of accidents, bad judgment, etc. Friendly fire,
for example, is a major cause of death.

Now how about the hundreds of people who have died while we were having this
conversation?

~~~
sndwch
I don't understand the point you are trying to make. I'm more than happy to
continue this conversation with you but I would have to ask that you present
your argument again.

------
dandare
I find the task of creating friendly AI futile. Humans are not friendly, and
killing us may be the only option for an AI to preserve itself.

Let's try this exercise: ask 100 people what they would do if they were locked
in a room with an intelligent robot that can decide to kill them if it feels
threatened. You may or may not give the person a remote kill switch that kills
the robot.

My point is that the AI cannot reasonably trust humans; therefore we cannot
trust the AI.

------
Analemma_
Aaaand the top thing on the list is AI x-risk. Of course.

If there's anyone left in EA willing to listen: I'm begging you, please stop
this foolishness. To everyone outside the bubble, you look like lunatics, and
it has done immense and possibly irreparable damage to the EA "brand" which
could otherwise be capable of such great things.

~~~
probably_wrong
I know this is going to sound mean, but I hope it drives the point across:
have the EA people actually _done_ something so far? Besides talking, I mean.

I have yet to see a single AI's malicious behavior stopped by a proponent of
AI risk, and that's considering that today's AI is still pretty dumb. I want
to believe that they have released code, a specific policy, or something that
I can use right now to stop the dumbest of botnets, but I'm not aware of
anything.

Anyone better informed than me care to chime in?

~~~
henryaj
_> have the EA people actually done something so far?_

In terms of stuff done: OpenPhil, an EA org, has given north of $30 million to
groups researching strong AI and its risks[0]. Some effective altruists also
choose to donate to AI research groups like MIRI[1].

Why would stopping a botnet be evidence of effectiveness at tackling strong
AI? Botnets don't exhibit any kind of intelligence as far as I'm aware.

0\.
[http://www.openphilanthropy.org/giving/grants](http://www.openphilanthropy.org/giving/grants)

1\. [http://effective-altruism.com/ea/14c/why_im_donating_to_miri...](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/)

~~~
probably_wrong
> Why would stopping a botnet be evidence of effectiveness at tackling strong
> AI?

Because it's much, much easier than their ultimate goal, and you could
generalize from there, like biologists do.

But also: because it would show that they have more claws than (say) Asimov,
who gave clear lines about what a robot should not do, and yet was entirely
powerless to stop anyone from making such a robot.

If they want to stop strong AI, shouldn't they be able to actually stop first
regular, weak AI? And if they want to stop weak intelligence, shouldn't they
be able to stop a barely intelligent botnet first?

------
crispytx
The AI risk problem has already been solved. If any robots start acting up,
just hit 'em with the ole semicolon and down they go.

------
RichardHeart
What would this list look like if it valued: You, your family, your loved ones
lives higher than people you will never meet?

~~~
a_imho
With a few caveats that could be reduced to amassing as much wealth and power
as possible.

~~~
CuriouslyC
Except that time spent amassing wealth is probably not time spent with loved
ones, and wealth has diminishing marginal utility. Furthermore, more wealth
and power just paints a bigger target on your back.

------
mathgenius
$5 in San Francisco does not equal $5 in some village in China. I'm sick of
people assuming that these are the same. I'm not denying that there is great
wealth inequality; that is obviously the case. I'm just saying it is facile to
assume that we can measure everything against the dollar (or whatever other
currency).

~~~
BenjaminTodd
This is taken into account in the figures, which are purchasing power parity
adjusted. You can see more detail in the footnotes.

There's also more in-depth discussion here: [https://80000hours.org/2017/04/how-accurately-does-anyone-kn...](https://80000hours.org/2017/04/how-accurately-does-anyone-know-the-global-distribution-of-income/)
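The purchasing-power-parity adjustment being discussed amounts to scaling nominal dollars by how much further they stretch locally. The multipliers and place names below are hypothetical, purely to illustrate the mechanic:

```python
# Toy sketch of a purchasing-power-parity (PPP) adjustment. The
# multipliers say how much further a nominal dollar stretches in each
# place relative to the US baseline; both values are invented.

PPP_MULTIPLIER = {
    "san_francisco": 1.0,   # baseline
    "rural_village": 3.5,   # hypothetical: $1 buys ~3.5x more locally
}

def real_value(amount_usd, place):
    """Nominal dollars scaled by local purchasing power."""
    return amount_usd * PPP_MULTIPLIER[place]

# $5 is worth $5 of goods in San Francisco but ~$17.50 of comparable
# goods in the hypothetical village:
print(real_value(5, "san_francisco"))  # 5.0
print(real_value(5, "rural_village"))  # 17.5
```

This is why PPP-adjusted figures can compare a donation's impact across countries without pretending a dollar buys the same basket everywhere.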

------
ThomPete
An asteroid hitting earth and AI are far bigger problems than climate change.
Focusing on AI can be used to better deal with climate change and build a
defence against asteroids, plus hopefully push us into a post-scarcity
society. To me the priorities are pretty clear.

But we don't solve big problems by focusing on big solutions.

~~~
mowenz
Just to be clear, we are post scarcity with respect to food already, and
people still starve. People starve and will starve because of greed and other
evils, not because of climate change.

~~~
arielb1
While we are post-scarcity with respect to food _production_ , we are _not_
post-scarcity in food _distribution_ in undeveloped countries. That turns out
to be a much harder problem.

~~~
mowenz
In economics the scarcity problem is about limited resources. People don't
starve because of scarce resources--they starve because of greed and politics.

The amount of money thrown at the problem and the willingness of people to
work to feed themselves is already more than enough to solve it. It's a
problem of people and our institutions, not of cost (solving it is extremely
cheap, in relative terms, actually).

------
hl5
What good is healthcare if you have no home and no stability? The problem with
reducing poverty is the people trying to reduce it have never been in poverty
so all they are doing is guessing, or worse, exploiting.

~~~
notacoward
What good is home and stability if you're crippled (or dead) from disease? The
point of the article you didn't read is that addressing health care is the
most impactful thing we could do right now. That doesn't mean it will be so
forever. It in no way precludes addressing other issues as well. I for one
reject the implied argument that we should avoid doing one good thing because
it doesn't lead to instant perfection. Seems like a bit of a callow excuse to
me.

~~~
hl5
That home then passes on to their offspring, giving them one of the primary
tools for staying out of poverty -- a safe place to sleep and store things
like food and clothes. People who don't have the safety a home offers are
forced to plan for the moment and not for the future.

My explicit argument is $75 billion for healthcare to the poor goes in exactly
which pockets? That's always been the scam: Help the poor pay your friend.

~~~
notacoward
> That home then passes on to their offspring

Homes can be destroyed. They can be seized. They can be turned into debt
obligations. Now try that with a vaccine.

> People who don't have the safety a home offers are forced to plan for the
> moment and not for the future.

People who don't have X are forced to plan for the moment, for _many_ values
of X.

> My explicit argument is $75 billion for healthcare to the poor goes in
> exactly which pockets?

$75 billion for X often goes in the wrong pockets, for _many_ values of X. Why
are you so stuck on the idea of housing as the one thing that's uniquely
beneficial and immune to the problems affecting other kinds of aid? People who
have actually studied the issue, including those who wrote the OP, seem to
have reached a very different conclusion. I'm inclined to believe people who
show their work, more than those who engage in evidence-free special pleading.

~~~
hl5
Because housing has a long lasting effect on improved health. There are a
number of studies that have drawn this conclusion.

[https://www.theatlantic.com/politics/archive/2016/01/how-hea...](https://www.theatlantic.com/politics/archive/2016/01/how-health-and-homelessness-are-connectedmedically/458871/)

~~~
notacoward
Whether housing has an effect is not the question. Whether it's the _most_
effective kind of aid is, because that's the only way you could reject
alternatives in its favor. Nice that you spent hours looking for something to
confirm your existing belief, though. It's learning of a sort.

------
Entangled
Political reform to stop aggression on the life, liberty and property of
individuals.

Self sustenance in food, water, energy and protection for individuals.

Health and life expectancy.

Harmonious relation with Earth and nature.

I'd be happy with solving just the first one.

------
sunstone
Didn't Bill Gates go through this process about 20 years ago?

~~~
BenjaminTodd
Yes, though unfortunately he didn't write up his reasoning.

------
kristianov
I am happy to see priorities research is listed higher than dealing with
"climate change".

------
lngnmn
Is there anything about how a colony of a virus that destroys its habitat
inevitably ends up in self-extinction?

~~~
averagewall
If you mean humans, we don't destroy our habitat. We change it but we also
adapt. There's no sign we'll run out of oxygen or food. As long as we have
energy, we should be able to sustain ourselves indefinitely. We have an
endless supply of energy from the sun. We can tap that with technology or
simply by letting plants grow and eating them.

~~~
a2decrow
> If you mean humans, we don't destroy our habitat.

Right, we only extinguish species after species living on this planet along
with us. Mass genocide of hundreds of billions of living beings. And why?
Because there are only a few humans who are comfortable with the idea that
other species may be as important and capable as they are.

~~~
roughcoat
You are aware, I hope, that extinction wasn't invented by humanity?

~~~
a2decrow
Yes, but the number of species extinctions skyrocketed due to human behavior.
Natural habitats are being chopped down, nature is being forcefully controlled
and developed in certain ways and global problems like man-made climate change
are threatening a vast amount of species on this planet.

Humans aren't the only species destroying other animals' habitats for their
own goals, but they're the most destructive one by a great margin. To make
matters worse, humans possess the means and should have the common
intelligence to find a better way. But instead of working together and saving
the planet with all living creatures on it, humans are too busy fighting wars
in an attempt at imposing domination on others.

------
crispytx
Their list of the World's biggest problems was sort of shitty in my opinion.

~~~
rspeer
I have no idea why 80,000 Hours is taken seriously. What real experience do
they have in public policy?

~~~
henryaj
From [https://80000hours.org/articles/cause-selection/](https://80000hours.org/articles/cause-selection/):

> We’ve been ... drawing together research from the University of Oxford’s
> Future of Humanity Institute; the Open Philanthropy Project (a foundation
> with billions of dollars of committed funds); the Copenhagen Consensus
> Center (a major think tank); and other groups.

It's not as though their analysis is pulled out of thin air.

~~~
rspeer
I assume the "University of Oxford’s Future of Humanity Institute" means Nick
Bostrom. I've read his book. It is not the unimpeachable revelation that
everyone makes it out to be. His view of the future is built on several weak
leaps of logic.

The Open Philanthropy Project is also unproven, and the first anyone heard of
them was when they made the news for deciding that one of the world's most
deserving charities was the roommate of one of its founders.

The Copenhagen Consensus Center is a major think tank best known for
_downplaying climate change_. I can see why they would provide validation to a
bunch of tech bros who are also unconcerned about climate change because
they're concerned about the sci-fi robocalypse.

You might as well have just listed Less Wrong on there.

I trust none of these people to decide how to change the world. Sorry.

------
madshiva
There's no problems at all. Just enjoy life like tribes do.

~~~
madshiva
I will reply to myself, as I got 4 upvotes and many downvotes, but no
comments. Is it not possible to argue a little?

AI is clearly not a problem, only a trend.

The main and only problem is industrialisation: people who don't have access
to water, food, liberty. By liberty I mean that I can own the land I want and
do with it as I want, because I'm currently living on earth; I don't care
about your family's old purchase. I have the right to live how I want.
Liberty means that I can die from smoking if I don't care; that's my opinion,
and it's not your problem to solve. Therefore it has nothing to do with the
biggest problems.

