
What if AI is a failed dream? - errkk
https://www.madebymany.com/stories/what-if-ai-is-a-failed-dream
======
Tepix
I don't understand why he finishes with this paragraph:

 _I’m as excited by the idea of colonising Mars as much as the next nerd. But
not at the opportunity cost of not solving the problems facing us on Earth.
That’s a bet too big._

Making an effort to colonize Mars does not carry such a cost. People who are
interested in solving this issue will not magically do whatever you'd like
them to do instead. We will never solve all the problems we have here on
Earth; that shouldn't stop us from venturing out into space.

~~~
lottin
My take is he means that trying to move to another planet isn't a good
survival strategy as a species, because of the high probability that such an
endeavour would end in failure.

~~~
QuantumGravy
We stay on this rock, we die on this rock. Confining ourselves to Earth isn't
a survival strategy, it's giving up. Rebuttals would be appreciated, though!

~~~
wu-ikkyu
Perhaps a counterpoint would be that the resources needed to colonize Mars
would be better spent first solving the existential issues here on Earth.

Once we have figured out how to achieve homeostasis on a planet so plentiful
in resources as our own, then we could start looking to do so elsewhere in
less favorable environments.

Mars is extremely barren and has negligible capabilities for life support.

~~~
freeflight
> the existential issues here on Earth.

I don't think we have any issues like that; as a species we are pretty much
thriving. That growth might not be built on the most sustainable principles,
but we are slowly getting there.

Short of something super apocalyptic ruining the whole planet for most life
(like a big asteroid hitting us), I don't see humanity eradicating itself
completely anytime soon; we are quite a sturdy bunch.

> Once we have figured out how to achieve homeostasis on a planet so plentiful
> in resources as our own

We've had plenty enough time trying to do that, maybe it's time to try a
different exercise with more constraints to motivate creativity? Mars could be
exactly that.

~~~
jellicle
> That growth might not be built on the most sustainable principles, but we
> are slowly getting there.

Despite what you hear about electric cars and so on, every year the human race
increases the amount of environmental damage it does compared to the year
previous. We're not just increasing the damage; we're increasing the rate of
increase of the damage. Heck, not just the first and second but also the third
derivatives are all positive.

So in fact we're not getting to sustainability at all; we're moving away from
it faster each year. We're accelerating into the apocalypse. Some day that
might change, but not this year. First we'd have to stop accelerating. Then
we'd have to slow down. Then we'd have to start reversing.

~~~
freeflight
The first part of solving a problem is recognizing that you actually have a
problem. I increasingly see that happening in regards to the finite nature of
the resources on Earth and its rather delicate balance of climate vs
pollution. These are problems we've accumulated over generations and only
recently recognized we actually have; to me, that's worth something.

------
delegate
I personally consider another possibility :

AI is already here and we're all slaves to it already; we're just in denial,
clinging to the illusion of being in control. It's just a matter of
interpretation.

Right now, machines dominate our life from the moment we wake up till the
moment we go to sleep with our phones beside us.

It runs the world - guides the ships, planes, cars, drilling machines,
financial system and so on.

The symbiosis makes us feel like we have a say - in practice, none of us can
do anything to stop _it_ or change its course.

Our power starts and ends with the branch we're working on - then our pull
request will be merged into master - and that's pretty much the value of the
company we're working for - source code for apps that run and interoperate in
the cloud.

The illusion is that we're doing it for other people, the reality is that we
do what the machine requires us to do so that it's even more pervasive in
everyone's lives.

Overexposed to machines, people are too numb and apathetic to notice or care
about whatever new app or service or product.

Well, it's just another way to look at it. Otherwise there are lots of great
things about the machine - maybe it will even save us from ourselves - the
question is: why?

~~~
wu-ikkyu
I forget where, but I've read of this theory before: that what constitutes a
true AI is constantly redefined as the level of development that we have not
yet achieved, so as to be invisible.

~~~
GuB-42
In other words, true AI is magic.

~~~
wu-ikkyu
Or at least the idea of what true AI _is_ is a constantly changing mythology

------
blackhole
I appreciate the point of view this post is providing. It is always important
to think about the potential limitations of AI, and the fact that once again,
we may hit the ceiling of our current attempts at AI much more quickly than we
realize, and no singularity will ever occur. We need to think about how to
solve our problems realistically, now, without waiting for a godly super AI to
come solve them for us.

However, what's bizarre is that he is painting a world that is already wrong.
In particular:

"We won’t have massive, perfectly coordinated networks with optimised flow and
distribution — think traffic networks (those self-driving Teslas that act as
taxis when you don’t need them)"

But... we will. We're already making it. It might not work very well at first,
but automated cars are a real thing, and they are going to happen. We have
preliminary automated cars _right now_. Perhaps he is instead claiming that
our autonomous cars won't scale, but this too makes no sense. Of course we can
make them scale. Once we've solved the hard problem, which is actually driving
the cars around and not hitting things, solving distribution and flow is
almost trivial. We have all sorts of systems for solving distribution problems
and finding maximal flow along a network. There's even an entire class of
algorithms to solve it with[1], which we currently use, right now, to solve
things like scheduling airplane flights.
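The max-flow algorithms mentioned above are compact enough to sketch. Below is a minimal Edmonds-Karp implementation (BFS for shortest augmenting paths) run on a hypothetical toy distribution network; the node indices and capacities are invented purely for illustration:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path with BFS."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a path with remaining residual capacity.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left
        # Find the bottleneck capacity along the path found.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Push flow along the path (negative reverse flow = residual edge).
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy network: node 0 = depot, node 3 = destination, two routes in between.
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 2],
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))  # 4
```

This is the "entire class of algorithms" the footnote points at; production schedulers use much more elaborate variants, but the core idea fits in a page.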

And then he brings up a point that seems completely perpendicular to the
entire rest of the post:

"We won’t have total surveillance (à la Reynold’s Mechanism or Brin’s
Transparent Society)"

This has nothing to do with AI and everything to do with cryptography. Whether
or not the singularity does or doesn't happen is completely irrelevant.
Indeed, it currently looks like we're headed towards having total surveillance
with or without the singularity, unless we do something about our privacy
laws.

While we shouldn't assume AI will fix all our problems, the examples provided
in this post are bizarre, to say the least.

[1]
[https://en.wikipedia.org/wiki/Maximum_flow_problem](https://en.wikipedia.org/wiki/Maximum_flow_problem)

~~~
mabub24
Your first paragraph is spot on.

The "singularity" is really just 17th-century metaphysics. It's René
Descartes all over again, imbued with religious undertones of salvation and
transcendence. Most of the "aspirations" of its occurrence are nonsense.
Kurzweil has fashioned himself into an AI prophet. For some reason it has
caught on among people, some very smart, and has led to a general passiveness
about solving real problems today. The "singularity" becomes the answer for
everything: "it's definitely coming!"

~~~
beachbum8029
The only problems I'm interested in solving today are the ones that will make
me rich enough that I don't have to work when the robots take over.

------
rgejman
>But if you took someone from 1870 and had them wake up 70 years later, in
1940, the world would look entirely different.

>Now skip forward from 1940 to 2010: apart from our obsession with little
glass rectangles, the world would be fundamentally familiar.

Disagree completely. The technology and information economy revolution means
that a socially well-adjusted blue-collar worker, white-collar worker, or
even field hand from 1940 cannot compete at all in 2010 at the same job.

~~~
le-mark
Lots of crop types haven't been automated, partly due to the presence of
cheap labor and the difficulty of automating. Some fruits are very difficult
for mechanical pickers to not bruise, and a human touch has been required up
to now. Of course, if the economics change or human-like mechanical picking
hands become common, that would change.

~~~
stevekemp
It'll be interesting to see if the UK invests in such automation, as
apparently migrant workers are less inclined to pick fruit for the UK, post-
Brexit:

[http://www.independent.co.uk/news/business/news/brexit-
lates...](http://www.independent.co.uk/news/business/news/brexit-latest-
british-strawberry-price-rise-fruit-farms-eu-workers-seasonal-labourers-
pickers-a7802616.html)

------
yorwba
> Now skip forward from 1940 to 2010: apart from our obsession with little
> glass rectangles, the world would be fundamentally familiar.

This is similar to summarizing the discovery of space-warping technology by
"Except for everyone's pockets now being bottomless Bags of Holding, the world
hasn't changed much."

It misses the fact that computers are involved in _everything_ today. When you
listen to music, the sound is undistorted thanks to computers. If advertising
shows a smiling woman, her face is prettier than life thanks to computers.
When you see a plane flying overhead, it is saving fuel thanks to computers.
Even mobile telephony requires computers. (Imagine having an operator in every
cell tower, manually routing calls.)

It is easy to overlook, but lots of little details of our modern lives would
seem utterly impossible to someone from 1940, and I don't think this kind of
change is going to stop anytime soon.

------
d--b
When we discovered electricity, some people thought we could revive the dead
with it. When we invented steam engines, some people thought that we could
make brains out of steam. When we invented computers, some people were quick
to say that computers would replicate human thoughts. Though these
technologies did not match the wildest dreams, they had massive impacts on
humanity.

While you can certainly criticize the hype around AI, you can't deny the
advances made by self driving vehicles. That in itself is a major
technological leap, and will revolutionize the transport industry.

I think that robots that perform some more complex tasks - such as filling
amazon boxes, picking fruits, cooking stuff, and so on - may not be very far
away.

General intelligence/consciousness is not there, clearly, but there are some
elements of technology that somehow resemble the way the brain works, and
that we can find some useful cases for. That's really all that matters.

~~~
philf
We do revive the (recently) dead using electricity.

------
jfoster
The premise is wrong.

"We’ve been told the artificial intelligence (AI) revolution is right around
the corner. But what if it isn’t?"

Google Photos already picks out objects in photos to make them searchable by
keyword, and removes obstructions (e.g. fences) from photos. Tesla cars
already automatically follow the speed limit, automatically brake, automate
lane changes, etc.

A few years ago this article might've had a valid point, but not anymore.

~~~
frgtpsswrdlame
And yet we don't have an AI which could tell us if a photo is a bird _and_
win at checkers (or tic-tac-toe). Our current AI is stuck in widget-land: it
might solve some small, interesting tasks, but we don't know how to even
approach the harder stuff.

It reminds me of that problem where anytime we do something new in AI, it is
quickly defined as not AI. I think that is totally correct because what we're
doing isn't actual AI! We're going to enter another AI winter once lay people
begin to realize the limitations of the current state-of-the-art. My
prediction is that this will happen once progress on the self-driving front
stalls.

~~~
phaemon
> And yet we don't have have an AI which could tell us if a photo is a bird
> and win at checkers (or tic-tac-toe).

Google can. I did a search by image and it identified my picture as a dog.
And if you search for "tic-tac-toe" it has a built-in game which has
difficulty settings up to Impossible (which presumably plays perfectly).
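For what it's worth, a perfect tic-tac-toe player really is a small amount of code: the game tree is tiny enough for plain minimax to search exhaustively. A minimal sketch (the board encoding and scoring convention here are my own, not Google's):

```python
def winner(b):
    """Return 'X' or 'O' if either has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Best achievable score for the position: X maximizes, O minimizes."""
    w = winner(b)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0  # draw
    scores = []
    for m in moves:
        b[m] = player
        scores.append(minimax(b, "O" if player == "X" else "X"))
        b[m] = " "  # undo the move
    return max(scores) if player == "X" else min(scores)

# With perfect play from both sides, tic-tac-toe is a draw.
print(minimax([" "] * 9, "X"))  # 0
```

The full game tree is only a few hundred thousand positions, which is why "Impossible" difficulty is trivial here, and why this says little about the harder, multi-task problem in the parent comment.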

~~~
frgtpsswrdlame
Presumably those are separate systems. I'm talking about multi-task learning.

~~~
jfoster
The point is that we didn't have this in products 5 - 10 years ago.
Advancement is being made. It might have a limit, but no one is claiming that
we have 100% solved AI; just that advancements have been made and products are
not yet taking advantage of all the new possibilities.

------
dalbasal
When it comes to reasoning about AI and automation, and their potential
effects on employment, I think journalism is doing a terrible job. There's a
borderline dishonest blurring of the lines between has-happened, is-happening
and may-happen.

 _..automation is eradicating our jobs. But — unlike in the past — new ones
aren’t being created to replace them._

This is very often written about as if it had already happened, or was at
least halfway there. As of right now, this is still speculation about a
future that may happen, not some known fact about the past or present.

The reason I bring it up here is that this article is specifically about
"what if we're wrong about the future."

Predicting is hard. Predicting technology, predicting economy, predicting
culture... It's all hard and we need to remember that we're speculating.

If we travel back to the futures predicted 70 years ago (as he suggests),
many of them were wrong. Keynes predicted drastically shortened workweeks.
The era was named the "Nuclear Age" or "Space Age." There were also
predictions about the continuation of the mechanisation trend (a close cousin
of automation), which did turn out to be true.

Keynes' workweek never happened, even though the workforce grew as women
joined it. The space and nuclear ages kind of happened, but so far nothing
earth-shattering has resulted. The continuation of the industrialisation-
mechanisation trend has resulted in much cheaper durable and consumable
goods. Cutlery, soap and such are very cheap.

~~~
calafrax
> automation is eradicating our jobs. But — unlike in the past — new ones
> aren’t being created to replace them

He presents no evidence to support this claim, and it is most likely false.

If you remove resources from some sector, and people become unemployed, but
total resources were increased due to improved efficiency then those displaced
people will more than likely figure out how to get some of those resources
rather than starving to death.

~~~
DiThi
Your argument is as speculative, if not more so. Resources are not all equal.
The ex-driver or ex-cashier is not going to starve to death. But they're not
getting the same salary, if any at all.

~~~
calafrax
My argument is speculative except that technological progress has always led
to increasing generalized prosperity in the past.

The adjustment periods have been on the scale of generations though so you can
definitely have localized decreases in well being for large segments of the
population due to technological change.

~~~
DiThi
> has always led to increasing generalized prosperity in the past

Because people always had something else to do. But what will happen when
machines can do pretty much everything? That never happened in history before.

~~~
calafrax
This is a kind of "end of history" argument that assumes no further advances
can happen. I just don't buy it.

We will invent new things to do. Or, god forbid, maybe just spend some time
relaxing and enjoying life instead of working ourselves to death.

~~~
DiThi
> that assumes no further advances can happen

...by humans.

> We will invent new things to do. Or, god forbid, maybe just spend some time
> relaxing and enjoying life instead of working ourselves to death.

With some luck we'll be good pets.

------
taneq
I don't see how they go from 'but it might not happen yet' to 'failed dream'.
The fraction of things that humans can do which machines cannot is shrinking
monotonically. One of two things will happen - either humans are unbelievably
close to some 'maximum possible intelligence', or computers will one day
handily beat us at everything. My money's on the second scenario.

~~~
rwallace
> The fraction of things that humans can do which machines cannot is shrinking
> monotonically.

That's only true until it's not. Our little enclave of civilization that
currently exists on this planet has a finite lifespan, just like every other
civilization that's ever been run by uplifted killer apes. Either we will
achieve a takeoff point within that lifespan, or we will not. If not - and I
think that's the more likely outcome - there is no reason to expect another
industrial revolution now that the easily accessible fossil fuels are all
gone. Our descendants will eke out a living as subsistence farmers until
evolution eliminates the overhead of general intelligence or the sun
autoclaves the biosphere.

------
skrebbel
@idlewords has a very nice talk about this: "Superintelligence, The Idea That
Eats Smart People".
[http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm)

Also don't forget the filter bubble we're in. This article opens with "We’ve
been told the artificial intelligence (AI) revolution is right around the
corner", but only a rather specific in-crowd actually believes this. I bet if
you'd interview the average world-citizen they'd not be so convinced.

~~~
dirkc
From the link you reference:

> Premise 2: No Quantum Shenanigans

> ...

> the mind arises out of ordinary physics. Some people like Roger Penrose
> would take issue with this argument, believing that there is extra stuff
> happening in the brain at a quantum level.

And some other people would take issue with the idea that you can talk about a
simple physics that excludes quantum physics.

> But for most of us, this is an easy premise to accept.

I'm out.

~~~
DiThi
I read it as "everything can be explained with classical physics alone", the
same way we can safely assume the Earth is flat when building a small house.
We don't complicate the math with curvature, just as we don't worry about
tunneling, spin, entanglement, etc.

~~~
dirkc
That is exactly why I'm skeptical. Photosynthesis involves quantum
entanglement. I'd be surprised if quantum entanglement doesn't play a role in
consciousness.

~~~
haltingthoughts
Saying that quantum effects are present and saying that the human brain is a
quantum computer (or more?) and will get exponential speedup are two very
different things.

------
ThomPete
AI is no more a failed dream than normal intelligence, and I challenge anyone
to come up with an argument for why we could evolve from basic elements of
the universe to what we are today, but that this couldn't happen with
"machines". AI is already a reality, not just wishful thinking. And people
would hardly notice the difference, as it has become much more granular.

~~~
pixl97
This is exactly my thinking. Things like faster-than-light travel or
whole-object teleportation are wishful thinking. Things like intelligence
explosions are not, because one has happened in the last 100,000 years. That
intelligence has come to dominate every animal on this planet, and the
biosphere, even to the point of escaping the planet's gravity well itself. To
imagine that we are the pinnacle of the optimization of intelligence
(especially when we cannot tweak our bandwidth or input devices much) sounds
like hubris.

------
logicallee
Your brain weighs about 3 lbs and uses about 20 watts. It's an analog device.
It does not have optical interconnects and the switching speeds of the
components within it are governed by chemical rather than electrical
processes: signals within it propagate at under a few thousandths of a percent
of the speed of light. It takes about 3 years to boot up and begin to be
sensible and over 12-15 years it achieves roughly human intelligence. Well
before those 15 years it surpasses all of our AI in a variety of tasks for
which AI is not intelligent enough.

The brain does lots of things, but some of them are quite small and well-
defined intelligent tasks, such as judging the meaning of a sentence it is
parsing in a language it has learned, or other "AI"-type work. We can easily
estimate whether it is doing so correctly. (For example through reading
comprehension tests, which we have standardized.) Human brains are able to
pass these tests and our best AI fails these tests.

The only way that there is no digital device that can ever model these aspects
of this analog device well enough to make the same meaningful calculations
(such as deciding what a sentence means in the context of human culture), i.e.
the only way the analog device has a monopoly on the calculation and judgment
it performs, for not only the next 10, 30, 50, or 100 years, but 1,000 or even
10,000 years, is if this analog device is a keyfob to a magical ethereal plane
where our souls and consciousness do all the real intelligent work, only
communicating back to our corporeal selves through an antenna which is our
brain.

Under that scenario it is certainly possible that AI will be a failed dream
forever. After all, rather than 3 lbs of analog device doing the work, our
ethereal selves could each be the size of billions of our universes.

Then it would be silly to imagine we could ever accurately model any part of
that. I don't think it's a false dichotomy here - I think it's one or the
other.

In my personal opinion the latter scenario is unlikely. In my view, anyone
who says that nothing digital will ever capture the calculating power of 3
lbs of meat is living on the same side of history as Lord Kelvin when he
announced "Heavier-than-air flying machines are impossible".

To the exact and same extent, true AI is impossible.

~~~
beachbum8029
This guy gets it.

------
thinkfurther
To be honest I didn't actually read the article because I gotta run, but
right from the outset there is one bullshit bit that always annoys me: there
is no "the" singularity!

For those who are in it, there is nothing else, they might not even have a
concept of the singularity. For those who aren't in it, there could be lots --
it's not called singularity because there can be just one of them. It just
means falling into a well you cannot get out of, and that an outside observer
can't distinguish you from the well either. Okay, so I made that up. But
whatever the best way to describe it may be, nothing about that inherently
prefers fulfilling all our dreams to torturing us forever or something else
entirely, that's quite orthogonal to something being a singularity.

~~~
Tepix
In this context the term singularity means a point in time in the future where
certain technologies become available and open up such vast possibilities that
it's impossible to predict what will happen beyond that point.

~~~
tormeh
Have you seen old sci-fi? If that's the definition of singularity then we've
been there for some time.

------
Pigo
I'll admit I fall into the camp of not believing the "Johnny Depp
Transcendence AI" is right around any corner (neither is a Martian apartment
complex, for that matter). But I also believe it's the journey to such AI
that's important, even if we never reach this ultimate goal.

We're already reaping the benefits of what we have learned from this struggle.
And I think we gain so much insight into ourselves as we try to reverse
engineer the most important part of our meat vehicle.

~~~
brador
> we must strive to be more than we are

To what end? Is a computer with our collective brains as AI orbiting a sun
more or less than what we are? What should we be striving for as an ultimate
utopian end to progress? Does/can such an end even conceptually exist?

~~~
Tepix
I think the answer can be venturing out into space. It's a logical next step
for mankind.

~~~
brador
Why? Truth is we're hoping to find a miracle out there when it appears that
it's all just rocks and nuclear fireballs. What do we do after space?

~~~
DiThi
Survive a cosmic event.

------
nthcolumn
We have been dreaming this dream a long time now and are no closer, really.
I'm sure you've all done this: one of the very first things I ever made was
what was known way back then as an 'expert system'. It determined which
particular disease you had by means of a series of interview questions from
'Robodoc'. I drew flowcharts and planned it all out with the limited set of
diseases, symptoms and remedies available to me, and it was, as you can
imagine, nothing more than a horrendous spaghetti of if-then-else (or maybe
even switch-case statements), the fall-through answer (when all diagnoses
failed) being 'take two aspirin and go to bed'. Even then I thought 'bah, not
enough data - the bane of the computer scientist!' I think I was about eight
at the time. Looking back now I think it was cute. Anything other than flu
and rabies and you were in trouble.

Is Robodoc closer to Watson than Watson is to HAL? People are conflating AI
and machine learning. They think AI is already here. Personally I don't think
any one team or project will ever solve AI, as real intelligence is an
emergent property.
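That kind of 'Robodoc' system is easy to sketch today: the if-then-else spaghetti collapses into a rule table scanned in order. A toy version, with the symptoms, rules and fall-through answer invented for illustration:

```python
# Each rule pairs a set of required symptoms with a diagnosis; the first
# rule whose symptoms are all present wins, mimicking the if-then-else chain.
RULES = [
    ({"fever", "cough", "aches"}, "flu"),
    ({"fear_of_water", "animal_bite"}, "rabies"),
    ({"sneezing", "runny_nose"}, "common cold"),
]

def diagnose(symptoms):
    for required, disease in RULES:
        if required <= symptoms:  # all required symptoms present?
            return disease
    # The fall-through answer, when every diagnosis fails.
    return "take two aspirin and go to bed"

print(diagnose({"fever", "cough", "aches"}))  # flu
print(diagnose({"hiccups"}))                  # take two aspirin and go to bed
```

The 1980s commercial expert systems were exactly this shape, just with thousands of hand-written rules instead of three, which is why "not enough data" was the bane then and arguably still is.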

------
throw2016
I think many continue to underestimate human beings. It reflects a curious
mix of a lack of self-awareness, a certain capacity for self-aggrandizement
and hubris, reductionism of basic human tasks, and confusing
under-achievement in others with general human potential.

A lot of the current hype, far from demonstrating a firm grasp of the
problem, rests on reductionism and betrays a shallow, childlike perspective
of humanity.

AI is going to take a far greater understanding of ourselves and our
environment than we currently possess, and like all advancement it will be
exciting when and if we achieve it. Self-driving cars will happen, but in far
more constrained and controlled environments than our roads today, which
puts current capabilities in perspective.

Car manufacturers have been extremely slow to adopt technology and have been
stuck in a time warp for nearly 20-30 years. Had they been faster a lot of the
tech and sensors that deliver better situational awareness in self driving
cars today would have made our roads far safer than they currently are.

------
lefnire
I wouldn't compare AI to Mars colonization. AI is coming in strong and we've
made tremendous progress; Mars colonization is still in its infancy /
theoretical stages. AI's definition is a constantly moving target; by all
accounts, we've "achieved" AI already if you'd ask someone from 50 years ago.
Art, music, conversation, research, ... If he wants to say "what if the
Singularity never happens," that's fine and good - but it just seems weird to
me to say "what if AI never happens." It's like saying "what if self-driving
cars never happen" just because he's not yet driving one.

False starts: in this regard, AI is like VR. VR had its own winter too, after
the Virtual Boy and the like. We're in VR's second stand, same as AI. And in
both cases, both are making a very strong case, and making lots of money. I'd
put my money on both horses now.

------
lngnmn
The classic AI (as in the AIMA book), it seems, got it right: heuristic-based
search, guided by feedback (to improve a heuristic), is what intelligence is
in general. Every living organism possesses intelligence of some kind
relative to its environment (by the process of trial-and-error, which is a
search, to "learn" an "optimal" heuristic, with selection by the process of
evolution). This is how bacteria fight viruses, for example.

The problem is to find a good-enough heuristic, or to "extract" one from the
actual (not imaginary) features of the environment and "train" it. This is,
roughly, how enzymes have been made.

This second goal is murderously hard, because selecting the right set of
features which adequately represent some aspects of reality (as it is, not as
we imagine or know it to be) is where humanity is still failing miserably.
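The trial-and-error search described above can be sketched as simple hill climbing: propose a random tweak, keep it only when the feedback improves. A toy version, where the scoring function and neighborhood are invented for illustration:

```python
import random

def hill_climb(score, start, neighbors, steps=1000, seed=0):
    """Trial-and-error search: keep a candidate, try a random tweak,
    and accept the tweak only if the feedback (score) improves."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = rng.choice(neighbors(best))
        if score(candidate) > score(best):
            best = candidate
    return best

# Toy problem: find the integer that maximizes -(x - 42)^2.
score = lambda x: -(x - 42) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(score, start=0, neighbors=neighbors))  # 42
```

Evolution layers another search on top of this one (selection over heuristics rather than over candidates), but the inner loop of "search guided by feedback" is already all there.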

------
SirHound
At this point, the velocity is too great to stop AI before it can solve most
of the problems listed in this post (which are, from today's vantage point,
relatively low-hanging fruit).

I can see an argument that we might not make it to super-intelligence but
we'll still solve a bunch of problems on the way. Weird post.

------
lr4444lr
Though there may be real limits to AI, the author's superficial treatment of
the matter includes few facts, and no understanding of how the techniques and
mathematics underlying modern machine learning are substantially different
from what researchers were focusing on in the 80s.

~~~
baybal2
>substantially different than what researchers were focusing on in the 80s.

??

Weren't neural networks and evolutionary algorithms all hip in the
mid-eighties?

~~~
lr4444lr
Simple perceptrons, yes. Feed forward, RNNs, CNNs, SVMs, gradient methods, and
the rest? Not so sure about that. I know that genetic algorithms do sometimes
get discussed today, but they are a small part of the community discussion,
IMO. Not to mention that research in non-linear optimization and attendant
numerical methods definitely made some breakthroughs in the 1990s.
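For contrast with the modern methods listed, the simple perceptron of that earlier era fits in a few lines: Rosenblatt's update rule nudges the weights on each misclassified sample, and converges only on linearly separable data. A sketch on the AND function (the learning rate and epoch count are arbitrary choices):

```python
def perceptron_train(samples, epochs=20, lr=1.0):
    """Rosenblatt's perceptron rule: whenever a sample is misclassified,
    move the weights toward (or away from) that sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so the 1950s-era rule handles it;
# XOR, famously, is not, which helped trigger the first AI winter.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

The gap between this and a modern CNN trained by backpropagated gradients is exactly the point being made: the 80s-and-later advances are not just scaled-up perceptrons.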

------
Spooky23
AI is amazing. Hype isn't.

Look at something like smartphones. We all pretend that the iPhone invented
the segment... but I had a shitty PDA that was a 2nd cousin to the iPhone a
decade before, and a functionally equivalent, if not polished, iPaq in the
2003/2004 timeframe. I used Google Maps on an AMPS data plan in 2005.

AI is a tool, and just like the "natural intelligence" that we walk around
with in our heads, it's foundational but only transformative when we apply
intelligence to solve problems. There's no magic.

------
virtualized
The Judgement Day scenario is a strawman. Of course humanity will not be
replaced by killer robots overnight.

When we learn how to connect our brains to computers - without our eyes, ears
and hands as bottlenecks in between - humanity's views about the world and
life itself might change significantly. Strong AI is not even required for
that to happen. There will be brain mines in rural Chinese sheds.

------
thinbeige
I like that some people dare to challenge AI _and_ get upvoted.

There were definitely some achievements in AI in the last decades. But look
at how the brain works - or, to be more precise: nobody has a clue how our
brain works. We only know that it seems to be very different from a
semiconductor, and until we know more about the brain, how should we achieve
real AI?

------
stevehiehn
Isn't A.I. already 'real'? I use Amazon's & Netflix's recommendation systems
every day.

~~~
virtualized
"You just ordered a lawn mower from us. Here are more lawn mowers for you"

"You might like this movie because you have watched other movies with _the_ in
the title before"

~~~
stevehiehn
I guess this comment means you consider making good recommendations a simple
problem? Or maybe not real A.I.?

~~~
mrkrabo
I think he means those problems are real AI, and current solutions stink.

------
baybal2
Machine learning is the American version of "5th generation computing":
[https://en.wikipedia.org/wiki/Fifth_generation_computer](https://en.wikipedia.org/wiki/Fifth_generation_computer)

------
cfrs
Interesting links, though claiming that AI does not "solve the problems of
today" seems naive. It does.

P.S. Funny that google suggests to search for "what can't we do without AI"
instead of "what can’t we do without AI" ;)

------
jmull
"If"?!?

Don't worry, uninterrupted exponentially improving AI certainly will not
happen.

Exponential growth requires a uniform medium to support it, which - of course
- it quickly exhausts.

The idea that smart AI will produce smarter AI which will produce even smarter
AI seems wildly simplistic and I'm surprised it has any traction. Why assume
it would be a linear progression... a given advance might require a leap that
even the smarter AI can't make directly. Or, the problem may become
exponentially more difficult at a rate outstripping the advances. And those
are just really simple objections. In the real world, progress on big things
is messy and complex.
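The "uniform medium" point is essentially the logistic curve: growth looks
exponential only until the medium is consumed. A minimal sketch (the rate and
capacity numbers here are arbitrary, chosen just to illustrate):

```python
# Logistic growth: exponential at first, flat once the medium is exhausted.
def logistic_step(x, rate=0.5, capacity=100.0):
    # growth is proportional to x, damped by how much medium remains
    return x + rate * x * (1 - x / capacity)

x = 1.0
for _ in range(40):
    x = logistic_step(x)
# x saturates near the carrying capacity instead of exploding
```

The first dozen steps look like a textbook exponential; after that, the
(1 - x / capacity) term takes over and progress flattens out.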

------
Eerie
A dream can't be failed. A dream is a dream.

~~~
xyrnoble
Some dreams are nightmares.

~~~
rounce
It's still a dream, just not a very pleasant one.

------
julianmarq
> A flatworm can dream. Can't ze?

What was that at the end? Was that tumblrspeak? Isn't "it" used for animals,
and did this person forget because "ze" spent too much time talking with
people better left alone? Likely.

In any case, I stopped reading at that point. Between the "second shift"
comment (as a male living alone) and the false claim that new jobs aren't
being created to replace the old ones, those two letters were just too much
baseless moralizing.

------
beachbum8029
I imagine the AI of the future will have a good laugh when it stumbles onto
this thread. If it can laugh...

------
mdevere
TLDR: Don't assume singularity will come along and solve growing "today"
problems.

------
beachbum8029
What are people going to do when AI learns to write alarmist click-baity blog
posts?

------
JohnJamesRambo
We are an organic-based AI. It is possible.

~~~
virtualized
We will make human brains more computer-like instead of making computers more
human-like. Some day, the last cyborg will replace their remaining flesh with
metal. Strong AI the hard way.

~~~
pixl97
Uh, ever heard the statement that premature optimization makes changes harder
further down the road?

The problem with the body/mind system is the insane level of interconnection
and feedback loops. "We made you 20% smarter! Uh, sorry about the 60%
increase in cancer incidence, though." The size of the problem space when
dealing with human minds is astronomical; we will likely need intelligent
systems to solve the problem, which means silicon Strong AI will come before
the Wetware AI.

~~~
DiThi
If we figure out a way to copy wetware AI, the interconnection problem is
pretty much solved. Cancer again? Get a new body.

------
xxxdarrenxxx
The problem with AI is that the word has become a layman's term.

AI at a glance seems so heavily focused on both technical aspects (i.e.
computing power) and modeling a human's cognitive train of thought.

There are so many neural networks around that outwit/out-strategize humans,
but what about the other faculties humans use to solve problems?

People doing something "crazy", based on a "gut instinct" for example.

I have yet to see a serious neural network (company/research) that models "gut
feelings", because in the "real world", many people have found success and
solved problems/made decisions in every branch and form based on a "gut
feeling".

To add a few more to the list, I'd love for someone to give me a link to a
neural network emulating the following traits which very much have proven to
drive humans to advancement and problem solving:

\- intuition

\- motivation

\- "taste"(opinion)

\- empathy; a judge reducing or increasing a sentence based on the
intricacies of an isolated situation, and not on pure objectivity (i.e. 1
murder 5 years, 2 murders 10 years, etc.). The law is seldom this absolute,
precisely because of human traits, and being able to acknowledge and take
this variable into account is a case of empathy more so than math / logical
deduction. Nuance by its very definition is not absolute.

\- inspiration

\- hope

etc.

A different perspective:

Imagine you're on a springboard at the pool for the first time.

You don't have any memory of yourself doing a jump on your brain's "hard
drive", so the uncertainty and the fear of jumping are valid.

A computer might end up in a crash, because it loops infinitely on false until
a memory is found containing the info that a jump can be successful, which
will never come.

However, you look behind you, and there is social pressure to jump. The fear
of being laughed at intervenes in this loop, and you make the jump.

Humans rarely "crash", because we are not as bound to logic as a computer.

This is why a computer excels in terms of reliability with math. Because a
computer is absolute, 1 + 1 will always result in 2 on every calculator ever,
but as soon as unknown parameters are introduced, it becomes that much more
difficult to keep it running.

Our emotional side is as double-edged as can be: without it we would crash
when we can't logically solve something, but it equally reduces reliability
("human error") when emotions overrule logic entirely.

Of course one might say, just solve it with an automatic breakout after 10
iterations, but now you've written a (conscious) edge case. Humans improvise.
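That breakout can be sketched in a few lines; `recall_memory` here is a
hypothetical stand-in for whatever memory lookup the machine performs:

```python
def decide_to_jump(recall_memory, max_iterations=10):
    # A purely logical agent keeps searching memory for proof that a
    # jump can succeed; with no such memory, this loop would never end.
    for _ in range(max_iterations):
        if recall_memory() == "successful jump":
            return "jump"  # logic found a justification
    # The hand-written breakout: the consciously coded edge case that
    # stands in for the emotional override (fear of being laughed at).
    return "jump anyway"

# First time on the springboard: no relevant memory exists.
decide_to_jump(lambda: "no relevant memory")
```

The point of the comment stands: the cap had to be written in by hand, whereas
a human improvises the override without anyone coding it.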

I think almost every person has been in or witnessed a situation where they
(logically/rationally) concluded "he should not do it / bad odds", then
proceeded to see this person make it anyway, resulting in success.

TLDR;

AI has seen many advancements in modeling human capabilities that can be
compared to the prefrontal cortex, but the hippocampus, amygdala, and
cerebellum, to name a few, are parts for which I have yet to see a promising
computer version, yet they were vital to our getting, as a race, to the point
where we are.

As a final anecdote, there was this guy who, through an accident, had his
brain split between the front and the rest (the emotional parts). Nurses came
in with 2 meals to choose from, but he could not decide.

The nurses found this peculiar: "just pick one", but he just froze, a BSOD on
them. He said there was no logical reason to pick one or the other.

He's absolutely right, there isn't, yet he'd die if he didn't pick one. Our
brain never ends up in an absolute false. When it does, emotions pick up and
solve it, perhaps imperfectly, but "life goes on".

