
Ask HN: Are you concerned about AI? - nvahalik
Elon Musk says that if you aren't concerned, you should be. If you are concerned, why? If not, then why not?
======
oldandtired
No. As someone who has watched AI development since the late '70s as a
student, an undergraduate, and then a real-world programmer, every hyped-up
breakthrough in AI has led to minor improvements but nothing that takes the
programs beyond relative simplicity.

Even the systems that beat grandmasters in their respective games have to
operate at gigahertz speeds, focused on one simple task, to beat a human who
has to operate in a highly sensory-loaded environment. The human has many
things to focus on, is running at relatively low speeds (nerve impulses are
glacially slow compared to the impulse sequences in silicon hardware), and is
still only just being beaten.

We have little or no real understanding of how the brain works, or of what
sentience, intelligence, consciousness, or free will are. Until we have that
kind of understanding of ourselves, the event certain people are worried
about (the singularity) will not happen. None of our computer systems (nor
all of them combined) approach the complexity of a single human baby's brain.

In addition, since I also believe that there is a Creator God who built and
designed the universe and that new intelligences are solely in His domain, I
have no concern over any such singularity ever being created by man.

~~~
fellellor
What we do have to worry about is the power of hype and marketing, and the
stupidity of the management class falling prey to them.

That stupidity and short-sightedness may cause more loss of livelihood than
the technology itself.

~~~
studentrob
Yes. It seems almost inevitable that companies will overhype their versions of
the tech in order to get more investment. The more success the tech has, the
more expectations the public will have.

In my opinion, Musk's rhetoric has come from advancements that are relatively
minor compared to what is coming, given that GPU/ASIC hardware is still
improving.

I think education can help, and maybe we should expect some dips. Perhaps in
the next dip, research into the tech will not fall as flat as it did in the
1980s during the last AI winter. Who knows what lies ahead.

------
nigrioid
No - we have only a very basic understanding of how the brain works, have no
idea what consciousness even is, and know little about human intelligence.

The only explanation I can think of is that there's a sinister agenda pushing
these constant fear-mongering stories about how we're all going to die from
AI. Show me a neural net that even comes close to doing anything resembling
original, organized thought and I might take these warnings more seriously.

~~~
jostmey
We still don't know everything about the biology of bird feathers but we can
make flying machines.

True AI is not going to happen tomorrow, but I expect it to come within my
lifetime. It's easy to get hung up on the complexity of biological systems
like the brain, but sometimes all that complexity is not necessary to the
function. Complexity is the result of an evolutionary process, and may not be
a necessary component of intelligence.

~~~
studentrob
Evolution has taken us through an uncountable number of neurons and lifespans.

In my opinion, even if we had an AI that matched the performance of the human
brain, it would still be missing the historical data about Earth (and the
universe) that we used to arrive at this point, the level of parallelization
that exists across all life forms on Earth, and the capacity to take in as
much data as all life on Earth can. Data and input capacity are as important
as the model in machine learning. I think we sometimes forget how
interdependent we are. We learn a lot through sharing our experiences, and
when there are more of us, we make more breakthroughs as a species.

And, maybe the answer is 42.

------
bikamonki
I am concerned. People think AI = Skynet, but in reality, for the next couple
of decades, AI = millions of humans out of jobs. I am concerned the change
will be too fast for us to adapt. Not only do we need to invent a lot of new
jobs, we may need to re-learn basic paradigms like job = goal of life =
success. I suppose we can imagine millions of humans freed from work to
devote their time to hobbies, art, sports, entertainment, etc. What if that
does not happen? Do we plug them into media and keep them calm? Bored,
jobless youngsters are dangerous, no?

~~~
gremlinsinc
The more I watch this unfold, the more I feel we need a new type of
party... something more technocratic (using AI/tech to streamline processes,
end bureaucracies, and make things run more efficiently) that socializes some
things, like medicine/basic income/education -- but STILL keeps production in
the hands of businesses.

One alternative to single-payer could be hospitals = insurer. Hospitals would
have a maximum they could charge, which would be a percentage of income for
individuals above the poverty line, and they would be required to treat
poverty-level families... maybe divvy them out by hospital as best they can
to spread the costs.

Hospitals would benefit because they could set the price for
themselves... since they only have x amount of recurring income, and they
know roughly x amount of patients come through the door monthly for x, y, z
ailments, etc.

They could form coalitions with other hospitals to negotiate drug rates that
are fairer, etc.

All while ending the insurance industry: insurance CEO pay, sales agents,
customer support jobs -- all wasted money in healthcare. I think an MRR
(monthly recurring revenue) system makes more sense for their bottom line.
Traveling? Your host hospital/insurer would pay the out-of-area hospital...
they'd all set comparable prices that benefit everyone, because they all have
the extra costs of making sure the system holds up.

------
macavity23
I actually am, but not about AI becoming smarter than us, or the
'singularity', for good reasons others have already stated.

I'm concerned about what happens when human-sized combat robots become
capable of defeating a trained and motivated human opponent. Such a thing is
certainly some way off, but it requires no fundamental advances in our
understanding (very much unlike Skynet/singularity-style general AI), so I
see it as a certainty sooner or later, and the way everything is going,
probably sooner.

Throughout human history, the inherent power the rich have over the poor has
had an equally inherent counterpower: there are many more poor people, and if
the rich make things too miserable for them, they can all rise up together
and win. To my knowledge this is a universal in human history (please prove
me wrong!), and in general tyrants and elites everywhere have to devote at
least SOME of their resources to maintaining the 'general welfare', which is
really a nice way of saying 'giving the poor enough to stop them rising up
and taking our stuff'.

If robots become capable of beating serious human opponents one-on-one, then I
think this age-old balance changes, because once you can build one such robot,
you can build a hundred thousand, and what are the poor people going to do
about that?

------
studentrob
In the sense that it will kill us? No.

In the sense that it may cause a lot of people to retrain? Maybe.

Regarding the first:

Musk says this in order to acquire more AI talent, or get it cheaper. Jürgen
Schmidhuber does the same thing. What they have in common is they both run
organizations that they founded in order to build "AGI".

It is odd that they're warning against AGI and building it at the same time.

My theory is, if they can convince people that they're doing something
"causeworthy" or world-changing, then they can attract more AI talent. In
Musk's case, OpenAI also serves as a pipeline, or at least networking
location, for getting more in-house talent at Tesla.

I think it hurts the field of AI that they say things like this because it
gets people's expectations up. However, it also looks like it helps them
recruit and network in an area of work that is very, very competitive among
companies.

The guy who went from Waymo to Uber's self-driving car program, for example,
had received a $100 million bonus from Google, and Google said this was not
out of the ordinary for engineers at that level.

Regarding the second, I'm a bit more optimistic than the average SV person. I don't
think anyone can tell the future. We just need to focus on making education
more accessible. There are no two ways about it, in my view.

------
rodgerd
_Actual_ AI? No. But if I _were_ super-rich I would be, because I'd be
concerned an actual AI would look at the problems that the world has and
conclude that resource hoarding at the top of the wealth pyramid would be the
first problem to solve.

The things being sold as AI? Absolutely. Weapons systems, policing systems,
and the like, being run by poorly-understood algorithms infected with the
unconsidered biases and assumptions of their creators seem like a great way to
make things worse, not better.

------
larkeith
True AI, not in the slightest. However, I believe the current implementations
of machine learning have the potential for disastrous results, especially as
ML is being used for ever more critical roles; we're due for autonomous
weapons platforms this year, and self-driving vehicles are soon to follow,
while there is limited research on the security of the algorithms, especially
in the case of induced edge cases.

------
Vanit
I'm not concerned about AI, but I am concerned about AI being improperly
used. In Australia there was a recent controversy with the government using
big data to improperly correlate tax returns with information submitted to
Centrelink (social security), to determine whether people had received
welfare while they were ineligible.

It turned out that correlating the data was prone to error due to limitations
in both datasets, and it was worsened by the high error tolerance, such that
they automatically issued debt notices to all false positives and cut their
payments with no human oversight. This was made worse by there being no way to
appeal the debt to a human.
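
To make the failure mode concrete, here's a sketch with purely hypothetical
numbers (the scheme's actual matching rules are an assumption here, not
something stated above): averaging an annual tax figure evenly across
fortnights can flag people who were eligible the whole time.

    # Hypothetical numbers only: a person works for half the year,
    # then earns nothing while receiving welfare payments.
    annual_income = 26000                  # single figure from the tax return
    actual = [2000] * 13 + [0] * 13        # per-fortnight reality
    averaged = [annual_income / 26] * 26   # what naive matching assumes

    cutoff = 500                           # hypothetical eligibility threshold
    welfare_fortnights = range(13, 26)     # fortnights payments were received

    # Averaging reports $1000 for every fortnight, so each welfare
    # fortnight looks ineligible even though actual income then was $0.
    false_positives = [i for i in welfare_fortnights
                       if averaged[i] > cutoff >= actual[i]]
    print(len(false_positives))            # 13 false debt notices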

I think you're only going to see more of this kind of blind-faith incompetence.

------
nvahalik
I think thus far the only thing I'm really concerned about is our
trust/dependence on it. Articles like this one
(https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/)
show just how fragile some of this ML stuff is.
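
The sticker trick is a physical cousin of adversarial examples. As a minimal
sketch of that general idea (the fast gradient sign method, not the actual
sign attack from the article; `model` and `epsilon` are assumptions, with
`model` being any differentiable PyTorch classifier):

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.03):
        # Nudge each pixel slightly in the direction that most
        # increases the classifier's loss on the true label.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        # The change is often imperceptible to a human,
        # yet can flip the model's prediction.
        return adversarial.clamp(0, 1).detach()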

Of course, I'm also assuming that ML = AI... that is a correct assumption,
right?

~~~
larkeith
IIRC AI will probably emerge from and incorporate ML, but it's unlikely that
AI will be entirely ML-based.

------
KnightLore
I'm not concerned about some kind of consciousness, but machines used to be
unable to recognize anything other than a very specific type of data. Now
machines can recognize faces, objects, even emotions. That, attached to a
program and combined with the existence of "big data", can be even more
dangerous, because humans are dangerous.

------
williamle8300
Not afraid of a hostile takeover by AI. Everything is data all the way down,
so I'm afraid of those who wield that data.

------
Animats
We're already at "machines should think, people should work" for low-level
jobs, like fast food and order fulfillment. Robots are still not good at, or
cost-effective at, unstructured manipulation, but computers are great at
organizing work, communicating, selling, and accounting. What management used
to do.

~~~
oldandtired
Computers are not great at organising anything; they cannot communicate,
cannot sell, and cannot do accounting.

It requires a person to "program" these facilities. Without human-generated
programs, no computer is capable of doing anything. They are no different to
a hammer, a crane, or a crowbar. They are simply tools in the hands of a
human.

They are completely useless if not picked up and used appropriately.

What is unfortunate is that there are many people who act like the computer
and get blocked by GIGO. They are unable to work around this.

------
darylteo
My worry with any new technological leap is that we simply forget how to do
the things we needed to do before. When it comes to the AI we have now:

* when self-driving cars are mainstream, will we forget how to drive?

* when AI can answer every question we need and want answered, will we forget how to think and solve problems?

* when robots can print anything and everything, will we forget how to manufacture things with our own hands? What happens to agriculture and food production as well?

We don't even need to get to the point where we build weaponised AIs that
kill us, Skynet/Horizon style; we might just forget how to do stuff.

My theory for the near future is that society will need to form guilds of
arts/sports/craftsmanship to preserve the skills that brought us here.

~~~
jacknews
Of course we won't stop thinking, creating, conversing, and there will always
be hobbyist and academic interest in even obsolete skills.

But we have already forgotten how to make, how to farm, etc. I don't think any
single person can make even something as simple as a pencil, from scratch, let
alone something like a cell-phone, advanced drugs etc. Even bringing together
a collection of all the necessary experts, the task would likely be almost
impossible without computers, and the rest of modern infrastructure.

If humanity were to be somehow detached from even today's infrastructure, we'd
be in trouble.

------
barbarian
"Super Intelligence" made a nice point that AI will likely be very stupid
until it's suddenly very, very smart. I thought the AI worries I read about in
the media were a bit hyperbolic till I read this book - it makes some good and
sensible arguments as to how a human dangerous AI might come about. It doesn't
claim general AI is necessarily very likely, but more that, in the event it
does come about, it will be very sudden, very swift - it won't be the gradual
curve of innovation and improvement we've been used to elsewhere - and so our
time to react to it's birth and implications will be very short.

------
thesmallestcat
I worry about "good enough" AI winning in the jobs race against humans mostly.
You can see it now, like the other day where some non-extremist content was
being purged from YouTube. If a human were doing that job it wouldn't have
happened, or at least somebody would be accountable instead of "lol a bug." I
can see a future where all kinds of interfaces are aggravating and obtuse,
like when you have to speak carefully at a phone prompt for the speech
recognizer to understand you.

------
pryelluw
Yes, because even basic AI can be hacked at the hardware level.

------
paulcole
Not at all. I'll be long dead before anything happens.

