
A Frightened Optimist On The Future of Humanity - signor_bosco
https://iai.tv/articles/nick-bostrom-a-frightened-optimist-on-the-future-of-humanity-auid-1257
======
mmiliauskas
I just saw him on the Joe Rogan podcast, he's selling another book, isn't he?

~~~
kingofpee
Sounds like that's the reason for this article as well

PR campaign at its best

------
garmaine
Nick Bostrom is a philosopher, not an engineer. If he spent his time actually
making things instead of fear mongering, he might have an appreciation for how
truly difficult it is to actually get things to work, and the challenges that
would still confront advanced AI. His instantaneous end-the-world outcomes are
not plausible and will never be realized. Yet he argues for draconian, fascist
controls on technology and society that have real negative ramifications.

Why do we continue to give him airtime?

~~~
saagarjha
I don’t think it’s necessarily a bad thing to let a philosopher muse a bit
about ethics and advise caution. Maybe we won’t have the AI singularity, but
some of the questions brought up (should we improve ourselves? Is technology
inherently good?) are quite important and contentious.

~~~
garmaine
> Should we improve ourselves?

I don't like dying of primitive diseases, or of aging.

> Is technology inherently good?

It doesn't have to be inherently (intrinsically?) good. It has good
ramifications, such as the literally billions of people who have had a chance
to live and be brought out of poverty who wouldn't even have existed
otherwise.

Those aren't insightful questions. They only _sound_ wise when you don't think
critically about them.

~~~
reggieband
I think it is fair to see both sides of technological advancement. For every
positive use of technology there is often some negative use. I think of
nuclear power vs. nuclear weapons. No one thinks we should stop investigating
chemicals and diseases just because someone could use the information to
create chemical weapons or biological weapons. However, we do have social and
political tools to attempt to deal with these negatives, whether they be
conventions or treaties. As a more recent data point, look at all of the hubbub
caused by social networks and their use by opposing countries to destabilize
political processes. We are going to make real changes to technology to attempt
to mitigate those issues.

We need to think about the negative consequences of gene editing, robotics,
super powerful AI, etc. not to overshadow the positive benefits but to help
remind us to create the tools necessary to mitigate those negatives. If the
only possible ramifications were good then we wouldn't need to be having this
conversation.

It reminds me of interviews with Steven Pinker and his opinion on potential
negative fallout from AI research. He claims that engineers are generally
good-willed and they try to do the best they can, e.g. they don't design
bridges that will fall. Yet bridges fall down sometimes, occasionally quite
soon after they are built. I'm glad there is some external oversight on issues
that are potentially world-changing, just to have a sober second opinion.

------
raxxorrax
I think most concepts of transhumanism are yesterday's craving for hoverboards.

Still, I don't think all his fears are unfounded. While I am skeptical about
an artificial general intelligence that suddenly surpasses our ability to
reason, I am more concerned about specialized agents that quantify nearly
everything in our lives and gain influence through human actors who
overestimate their abilities, thereby creating a data bureaucracy that will
indeed have problems with bias.

I'm not arguing against the transhumanist tech lead who may or may not have a
form of cocaine addiction that at least keeps people dreaming of more advanced
AI.

I'm more concerned about the people who buy into it and treat it as gospel. I
mean, it comes from a machine that doesn't have these awful restraints humans
are subjected to...

------
Gatsky
This doomsday futurism has an interesting characteristic - the author of such
prophecies can't lose. Either they are right and humanity (or what's left of
it) will look back on them as prescient and clear thinking, or they are wrong
and everyone will be too relieved and happy to care. The Y2K bug was another
example.

~~~
LocalPCGuy
There are two schools of thought on the Y2K bug, and the one you left out is
that quite a lot of money was spent fixing bugs prior to the year 2000, which
helped ensure that it was a minor event, with minor failures, instead of a
bigger problem. We are only now starting to hear about computer systems that
had major problems which were addressed. Yes, there were people hyping it
like crazy, and the average personal computer was not really at risk of
anything major happening. But there were serious concerns that were addressed
prior to 2000.

Also, it's likely the 9/11 attacks had less of an impact because of
redundancies and backups put in place for the Y2K bug.
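For anyone unfamiliar with what those bugs actually looked like: the classic
Y2K failure came from legacy systems storing years as two digits and assuming
a fixed "19" century. A minimal hypothetical sketch (the function names and
the pivot value are illustrative, not from any particular system):

```python
def age_buggy(birth_yy: int, current_yy: int) -> int:
    """Legacy-style age calculation on two-digit years.

    Implicitly assumes both years share the same century, so it breaks
    the moment the century rolls over.
    """
    return current_yy - birth_yy


def age_windowed(birth_yy: int, current_yy: int, pivot: int = 50) -> int:
    """A common remediation: 'windowing' maps two-digit years into a
    100-year range (here 1950-2049) instead of assuming 19xx.
    The pivot of 50 is an arbitrary illustrative choice."""
    def expand(yy: int) -> int:
        return 1900 + yy if yy >= pivot else 2000 + yy

    return expand(current_yy) - expand(birth_yy)


# Someone born in 1970, measured on 2000-01-01 (years stored as 70 and 00):
print(age_buggy(70, 0))     # -70, a negative age
print(age_windowed(70, 0))  # 30, correct after remediation
```

Much of the pre-2000 remediation money went into finding and patching exactly
this kind of arithmetic in billing, payroll, and scheduling code.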

~~~
Gatsky
I don’t think you read my comment. I’m not passing judgement on whether Y2K
was a legitimate concern. The point is those saying the world was going to end
could never really be criticised no matter what happened. This is the
doomsayer’s privilege.

