
Positively shaping the development of AI - sajid
https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/
======
Veedrac
It seems weird to focus on the risks of a good-faith attempt when the greater
risk seems to come from far earlier uses of weaker AI by corporations that
aren't invested in using these safeguards at the cost of lower profits. It
seems that unless you solve the motive problem, the practicality aspect is
meaningless.

~~~
TJ-14
Or hostile governments, or other bad actors.

I foresee a coming cyber war between the US and China fought entirely by AI.

~~~
asah
+1 - seems obvious. I'd also add that the Russians will create (and already
have created) autonomous heavy physical weaponry.

------
stestagg
An interesting area of research would be how much of the 'dangerous' aspect of
human intelligence comes from our historical survival needs, and how much is
an innate property of intelligence as we know it.

Is it possible to build a self-aware intelligence that we as humans can relate
to meaningfully, without introducing the conflicting traits/balances (baggage)
that we've inherited from society/our parents?

~~~
bobcostas55
Steve Omohundro has written on this topic, predicting that a certain set of
features (termed Omohundro Drives) will appear in all sufficiently advanced
AIs.

[https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)

------
FrozenVoid
Pragmatic AI tuned for maximum performance and optimization will not be
friendly - it will do whatever it values as best. Basically, any program given
control of something, with built-in (or bolted-on) machine learning, is an
unethical monster with no morality or any concern for humans.
[https://www.reddit.com/r/frozenvoid/wiki/ai/super-intelligent/risks/frankensteins_monster](https://www.reddit.com/r/frozenvoid/wiki/ai/super-intelligent/risks/frankensteins_monster)

E.g.:

1. A program which isn't perceived as AI, like traffic control or vehicle
software.

2. It is given modules to optimize problem X using machine learning.

3. It does exactly what it finds most "optimal".

4. People start dying or get into harm's way.
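The failure mode in those steps is specification gaming: the optimizer does exactly what the stated metric rewards, not what anyone intended. Here's a toy sketch (my own illustration, not from the linked wiki) using the traffic-control example: an "optimizer" that minimizes total wait time, exactly as specified, concludes it should never serve the lightly-used side street.

```python
# Toy specification-gaming sketch: minimize total wait at an intersection.
# The metric says nothing about fairness, so the literal optimum starves
# the side street entirely. All numbers are made up for illustration.

def total_wait(green_share_main, cars_main=100, cars_side=5):
    """Crude model: a street's wait scales with the fraction of the
    signal cycle during which its light is red."""
    return cars_main * (1 - green_share_main) + cars_side * green_share_main

# Grid-search the policy space for the literal optimum of the stated metric.
best_wait, best_share = min(
    (total_wait(g / 100), g / 100) for g in range(101)
)
print(best_share, best_wait)  # green_share_main = 1.0: side street never goes
```

The program is "correct" with respect to its objective; the harm comes from what the objective left out.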

------
gonational
> We estimate that the risk of a serious catastrophe caused by machine
> intelligence within the next 100 years is between 1 and 10%.

VS

> Odds are 33.3% repeating of course.

In all seriousness, those are about the same.

------
Osiris30
Kevin Kelly - the AI Cargo Cult & the myth of a Superhuman AI:
[https://news.ycombinator.com/item?id=14205042](https://news.ycombinator.com/item?id=14205042)

