
An understanding of AI’s limitations is starting to sink in - martincmartin
https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in
======
mattlondon
The trouble is that people have been sold this idea that ML/AI can do amazing
things, without properly being told that really the things it can do are quite
narrowly-scoped. They've been sold the Star Trek computer idea.

For example, years ago I was working on a prototype/proof-of-concept thing for
instrumenting industrial machinery with stick-on small computers. Simple stuff
- attach accelerometer, temperature, humidity etc. sensors to existing machines
and collect the data and send it back.

The management thought we'd be able to apply machine learning on the data to
get "business insights" from the all-powerful machine. They didn't know what
these insights might be, just that ML/AI would generate them and therefore
make the business a fuck-ton of money because AI generated novel new "business
insights" that no one had thought of before and so transform the business.
They thought it was just a magic box that would generate unbounded magic
answers for their needs by passing in just some temperature and humidity
readings or whatever, and then it would tell them they need to make more brown
bread and fewer bagels in the North East region etc.

In reality, as I understand it, currently ML/AI requires us to know what the
possible answers can be before we even begin training the network. So the
classic example is it needs to know that the possible MNIST digits are 0-9, or
that you are looking for one of 100 image classes etc.

You can't train a network with the MNIST digits, and then have that network
tell you what shares to buy or sell.

Sure you can lop off the final layer and repurpose some of the middle layers,
but you still need to train it to classify the inputs into categories you
define up front. It won't give you a novel answer that you have not trained it
for.

... at least that is how I understand it. Things may have changed over the
past 5 years or so.
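
To make that concrete, a toy scikit-learn sketch (hypothetical, nothing to do
with the actual project): the set of possible answers is fixed before training
even starts, and the model can only ever pick from it.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # The possible answers (digits 0-9) are baked in before training begins.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.classes_)             # [0 1 ... 9] - the only answers it can ever give
    print(clf.predict(X_test[:5]))  # always a digit, never "buy these shares"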

That said, I do agree there have been some cool things lately like machine
vision etc. I don't think it will be that huge an industry though - it feels
like a lot of it is largely just commoditised now (which is good) and it will
be just like any other library you pick - like picking a UI framework for a
web app. Just pick up a pre-trained network from modelzoo and get on with your
real business requirements for 99% of people using ML, while the other 1% (at
FAANGs et al) and academia churn out new models.

~~~
TrackerFF
That's actually something ML is incredibly useful for when it comes to
machines with sensors: failure prediction, anomaly detection, etc.

In industry, (preventive) maintenance takes up a pretty huge chunk of
resources. It's something techs need to do often, and it's often a laborious
task, but it's obviously done to reduce downtime.

So the business insight, as they like to call it, is to reduce the costs tied
up in repairs and maintenance.

All critical applications have multiple levels of redundancy, so that a
complete breakdown is very unlikely, but it's still a very expensive process
if you're dealing with contractors. If you can get techs to swap out parts
before the whole unit goes to sh!t, then that's often going to be a much
cheaper alternative.

But, in the end, it comes down to the quality of data, and the models being
built. A lot of industrial businesses hire ML / AI engineers for this task
alone, but expect some magic black-box that will warn x days / hours / minutes
ahead that a machine/part is about to break down, and it's time to get it
fixed. And they unfortunately expect near-perfect accuracy, because someone
in sales assured them that this is the future, and the future is now.
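
A toy sketch of what that can look like (made-up sensor numbers, scikit-learn's
IsolationForest; real setups are obviously messier): learn what a healthy
machine looks like, then flag readings that don't fit so a tech can take a look
before the unit actually fails.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Pretend each row is [vibration, temperature] from one machine reading.
    rng = np.random.default_rng(0)
    normal_readings = rng.normal(loc=[0.5, 60.0], scale=[0.05, 2.0], size=(1000, 2))

    # Learn what "healthy" looks like, then flag readings that don't fit.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)

    new_readings = np.array([[0.52, 61.0],    # looks healthy
                             [1.40, 95.0]])   # running hot and shaking - check the bearing
    print(detector.predict(new_readings))     # 1 = normal, -1 = anomaly -> schedule maintenance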

~~~
nmyk
Yep, you and the user you're replying to are both right in different ways. One
thing's for sure - machines don't generate "insights" on their own.

Let's define an "insight" as "new meaningful knowledge", just for fun. We
could talk about what comprises "new" and "meaningful" but it would be beside
the point I'm making.

In a supervised learning problem, the range of possible outputs is already
known, meaning the model output will never be categorically different from
what was in the training data. The knowledge obtained is meaningful as long as
the training labels are meaningful, but it can never be new.

Unsupervised learning doesn't have a notion of training labels, but that means
an unsupervised model's output requires additional interpretation in order to
be meaningful. It is possible to uncover new structures and identify anomalies
in new ways, but this knowledge isn't meaningful until someone comes in and
interprets it.

Applied to the specific example where sensor data is used to try to generate
insights about machine functionality: Either you can only predict the types of
failures you've already seen, or you can identify states you've never seen but
you wouldn't know whether they mean the system is likely to fail soon or not.

It's the Roth/401(k) tradeoff. For model output to be useful, someone must pay
an interpretation tax. The only choice is whether it is paid upon insight
deposit or withdrawal.
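
As a toy sketch of that interpretation tax (made-up sensor numbers,
scikit-learn's KMeans): the clustering happily finds structure, but the cluster
labels mean nothing until someone decides what they correspond to.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    healthy = rng.normal([0.5, 60.0], [0.05, 2.0], size=(500, 2))   # [vibration, temp]
    worn = rng.normal([0.9, 75.0], [0.05, 2.0], size=(50, 2))
    readings = np.vstack([healthy, worn])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
    print(km.cluster_centers_)   # two groups pop out - but which one means "about to fail"?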

~~~
Veedrac
This is demonstrably false; AlphaGo made significant new discoveries, for
example.

~~~
nmyk
Yeah this is where it would have helped if I had discussed what I meant by
"new".

AlphaGo is a supervised learner that outputs optimal Go moves given opposing
play. It yields new discoveries in the same sense that a model designed to
predict mechanical failures from labeled sensor data would: I didn't know what
the model was going to predict until it predicted it, and now I know.

But what the factory owners want is a machine that can take raw, unlabeled
sensor data and predict mechanical failures from that. They want _insights_.
"Why not just feed all our data into the model and just see what comes out?"
they ask. "I don't see why we need to hire at all if we have this neural net."

The reason you need a human somewhere in the system if you want insights is
because someone needed to program AlphaGo specifically to try to win at Go. At
the factory, someone needs to tell the machine what a mechanical failure is,
in terms of the data, before it can successfully predict them.

Then, neither "winning at Go" nor "mechanical failure" are states that the
system hasn't already been programmed to recognize. That's what I mean when I
say a supervised learner cannot generate "new" output.

------
mtgp1000
I'm not sure how anyone who's watched the exponential growth of a brand new
domain can pick a point today and say that things aren't as good as we
expected. What may have happened is that some eager CEOs overpromised on
timelines and resources. But the revolution is coming; ML is already starting
to change society.

We're building the tech. Right now. The author does not even realise the
immense impact that general-purpose function approximators are having
_right now_ across all domains. The applications are not only limitless but
they're being realized right now, and it's insulting to have my work dismissed
so shallowly by someone who speaks with such authority.

There's far more to do with ML and AI than self driving cars and shitty ad
recommendations. It's just that the people who live at the intersection of
mathematics, programming, and a third domain are rare; give us some time. Even
$16T in value by 2030 is 2-4 US GDPs from something which basically did not
exist 5 years ago.

~~~
MiroF
Absolutely. To be perfectly honest, it surprises me the extent to which ML
naysaying seems to be popular on HN. The evidence of enormous progress seems
pretty obvious to me.

~~~
bsaul
Siri still isn’t able to understand « do NOT set the alarm to 3pm », and many
image classifiers produce aberrations that no human would ever commit.

Many people feel that ML has so far only produced « tricks », but still
doesn’t show any sign of « understanding » anything. As in, provide meaning.
It may be unfair, but i think that at this point people would be more
impressed by a program « smiling » at a good joke than by something able to
process millions of positions per second, or « learn » good moves by playing
billions of games against itself.

~~~
MiroF
Sure, ML is not at human-level intelligence - it is very much a tool-AI where
we give a task to the machine and let it get very good at that.

Nonetheless, it seems like progress on that front has been made incredibly
quickly. Sure, Siri might not always "understand" what you're asking but the
ML is able to very accurately transcribe your voice to text, something not
really possible a few decades ago. Image classifiers produce
misclassifications, but so do humans - it is a very alien technology to us, so
we view some of the machine's misclassifications as absurd, but at the same
time the machine might view some of our misclassifications as obviously off-
base as well.

None of what you said seems to explain the consistent negativity on HN to what
has really been a transformative technology in a lot of ways.

~~~
foldr
Progress in speech recognition has been slow and incremental, not fast and
impressive. Hidden Markov Models did a decent job in the 90s. Now we have more
data and more computing power.

~~~
MiroF
In the past three years we have dropped error rates by a factor of 3, so I
don’t really think that claim holds water.

We have seen huge progress in a number of other fields as well. Anecdotally,
voice recognition has definitely gotten way better as well.

~~~
bildung
Both voice recognition had first working research machines in the 1950s. Of
course present models are _way_ better than these, but fundamentally these are
the same as those in the '50s, "just" with tremendously better hardware and
algorithms. But there is zero intelligence in these, the models have no
internal concept of language or the world around them.

This will certainly produce many great specific solutions for specific
problems, but generic problems like _driving a car_ are most probably not
solvable without an internal world model.

I'm one of the naysayers and have a background in educational science, and I'm
constantly baffled why AI research seemingly never looks at theories of human
learning (though it may very well be that I haven't looked closely enough!).
What essentially all AI approaches do is model associative learning
([https://en.wikipedia.org/wiki/Learning#Associative_learning](https://en.wikipedia.org/wiki/Learning#Associative_learning)),
but employing more and more processing power for training. This is akin to
having a fruit fly brain and copy-pasting it over and over again, in the hope
that somehow a higher order will organize itself out of that. But most things
humans reason about are not learned this way, but rather by inferring meaning
from things and situations, i.e. one learns how a steering wheel works not
through hundreds and hundreds of trial and error cases, but through single
Aha! moments in which ad-hoc generated mental models (concepts) are validated
in the environment. And GPT-3 has no knowledge organized in a hierarchical
system of concepts, just as ELIZA didn't have one.

~~~
govg
There are tons of research projects that do exactly what you're talking about:
initializing an agent with zero knowledge in an environment which just provides
rules of discovery and reward. The agent then takes actions in that space and
learns from its own experiences. It's just that it is hard to compare with
humans, who have had the benefit of evolution over millions of years. A basic
example is AlphaGo Zero[1], which learns to play a top-level game by being
given enough time and just the rules of the game. This is similar to how a
child learns to walk; it is just that we only have the capability to model
toy situations (a board game in this instance) right now, and access to harder
instances (movement in the real world) will slowly come about. There are cases
of robots being programmed to poke / move / pick objects to try and learn
about their shapes[2], in case you are interested in another such example.

[1] -
[https://en.wikipedia.org/wiki/AlphaGo_Zero](https://en.wikipedia.org/wiki/AlphaGo_Zero)
[2] -
[https://bair.berkeley.edu/blog/2019/03/21/tactile/](https://bair.berkeley.edu/blog/2019/03/21/tactile/)
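
For the flavour of that "just the rules and a reward" setup, here is a toy
tabular Q-learning sketch (a five-cell corridor, nothing remotely like AlphaGo
Zero's scale): the agent starts with zero knowledge and learns purely from its
own experience.

    import random

    # Corridor of 5 cells; the only reward is for reaching the rightmost cell.
    N_STATES, GOAL = 5, 4
    Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; action 0 = left, 1 = right

    for episode in range(500):
        s = 0
        while s != GOAL:
            # epsilon-greedy: mostly exploit what it has learned, sometimes explore
            a = random.randrange(2) if random.random() < 0.2 else Q[s].index(max(Q[s]))
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r = 1.0 if s2 == GOAL else 0.0
            Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])   # Q-learning update
            s = s2

    print([row.index(max(row)) for row in Q[:GOAL]])   # learned policy for states 0-3: move right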

~~~
bildung
AlphaGo Zero definitely falls in the same category as GPT-3 - yes, this is
unsupervised learning, but it still is fundamentally the same approach,
exactly because of the way discovery and reward work on the model.

This isn't how a child learns to walk at all: A child a priori has the concept
of walking, the concept of self, the concept of movement in space, the concept
of willing to walk etc. - it just doesn't have the motoric control. The small
part of training motoric control through repeated trial and error is indeed
similar to what unsupervised learning captures, but the important part is
missing.

------
dreamcompiler
Here we are in 1989 again.

The cycle keeps repeating. A new advancement in computing power, networking,
or algorithms means there's a new batch of low-hanging fruit for AI to pick,
so we pick it. Investors say "What about the high-hanging fruit?" and we say
"No problem. We just need a slightly longer ladder."

Two years later everybody finally realizes the high-hanging fruit is on the
moon.

~~~
tgv
My first AI teacher (even before '89) compared solving AI with neural nets to
teaching pigs to fly by throwing them from a tower. Improvements come from
building higher towers.

There's a recent NLP model that was trained on a trillion words. It would take
us 10,000 years to read or listen (no breaks, no sleep) to that many words.
Problems like attention, and the relation between memory and sequential
thinking haven't been cleared up at all. Even semantics, i.e. basic
understanding of an utterance or a scene, is in its infancy.
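
Back-of-the-envelope on that 10,000-year figure, assuming a nonstop reading
speed of roughly 200 words per minute:

    words = 1e12                       # a trillion words
    words_per_minute = 200             # roughly a brisk reading speed
    minutes = words / words_per_minute
    years = minutes / (60 * 24 * 365)
    print(round(years))                # ~9500 - call it 10,000 years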

Large neural nets can help with interesting problems, but they're not going to
mimic our style of thinking any time soon.

~~~
mjburgess
It isn't ever going to model our style of thinking. A "neural network" is just
high-dimension linear regression; the idea it has anything to do with the
brain is metaphorical nonsense.

No algorithm running on digital hardware can emulate the biological process of
animal intelligence.

~~~
machiaweliczny
Why not?

~~~
mjburgess
What algorithm running on a digital computer would make the computer
transparent?

------
unreal6
"The result is an artificial idiot savant that can excel at well-bounded
tasks, but can get things very wrong if faced with unexpected input."

I think this gets to the core of what is still a limitation of current
technologies. Venturing into the unknown is still a deeply relevant task that
seems unlikely to be replaced by computers anytime soon.

~~~
londons_explore
> can get things very wrong if faced with unexpected input

Alice: Bob, Can you translate "Eat my shorts" into latin for me?

Bob: No. I don't speak latin.

Alice: Go on - try anyway.

Bob: "Eatus mine shortus"

Alice: Wrong! The answer is "Vescere bracis meis". You're totally wrong Bob! I
was expecting more of you!

~~~
Olreich
If models actually were able to tell you what they know and don’t know, then
sure. But instead they just give you an output for any input you give them,
whether they have a clue or not.

~~~
0xBABAD00C
This is such a simplistic view of a vast and evolving field of scientific
research, it's too cartoonish of an argument to even warrant a real response.
Which is why the conversation around ML gets negatively selected against
actual researchers and practitioners, who get headaches from opinions like the
one above.

~~~
memexy
What's wrong with their view? They're right that such models will always give
an answer whether it makes sense or not. Has anyone tried asking Bob to solve
sudoku? What happens? Does Bob fill in the blank spots according to sudoku
rules and is the final solution an actual solution for the given board or does
it end up being a random list of numbers?

------
code4tee
In probably 99% of AI/ML use cases the AI/ML is basically just a commodity
item and the real “expertise” comes from getting and preparing good datasets
for analysis, and having a clear problem to solve. The strategy behind
something like AWS SageMaker is based entirely around this idea.

The problem is that too many companies believed it was the opposite so they
built and hired all these AI/ML “experts” that just wanted to “build models”
but didn’t want to focus on the messy hard stuff like finding and cleaning
data. Nearly all of these AI/ML “experts” inside companies were also broadly
just applying off the shelf tools and algorithms, perhaps with a bit of
ensembling, rather than actually building new AI/ML approaches.

As a result, the big investments inside most companies produced a flash and
puff of smoke that got people briefly excited followed by a lot of money spent
with little business value returned.

I’m a big believer in ML approaches, but in most cases companies need to be
focusing on their data first against a clear business problem and just use off
the shelf tools for the rest. That’s good enough for nearly all needs.

There’s a big bubble at the moment with all these “AI/ML” teams that’s going
to crash hard as businesses realize the above and reset to focus on stuff that
works and generates tangible value for the business.

------
blackbear_
This is a good article. I don't understand why people find it questionable and
get so defensive about it. It does not deny the fast progress nor the
potential of current/future techniques. It is simply an explanation for the
layman of things that should be obvious to any practitioner in the field.

~~~
Barrin92
> I don't understand why people find it questionable and get so defensive
> about it

because people in the technology sector have a distorted view of their own
importance. Despite the extremely meagre impact of the "computer revolution"
on both economic growth and anything outside the world of bits, people in
'tech' fashion themselves to be sort of world-changing figures. AI on top of
it is a sort of ersatz religion, the rapture of the nerds as Charlie Stross
put it.

------
eanzenberg
This is so strange. If you use facebook, google, netflix, apple, microsoft,
amazon, tesla or a whole host of other products and services you are
interfacing with AI all the time, sometimes as the core product of the
service. To think there’s no value there is asinine. Comes up a lot on HN.
Seems like people who get excited for these types of articles are set in their
ways and don’t want to progress forward.

~~~
tomp
Aside from possibly Google, all of these products / services would have just
as much, if not more, value without any AI beyond basic statistics.

~~~
hamsterbooster
Many AI systems are used in the backend to increase revenue. Netflix has a
very complex recommendation algorithm based on deep learning/statistics.
Amazon uses a lot of machine learning to optimize transportation (an NP-hard
problem!), sales, etc.

~~~
Carpetsmoker
A while ago I watched a 4-part documentary about the everyday lives of
ancient Egyptians. Pretty interesting.[1]

For weeks after that my recommendations were filled with bullshit such as
"PROOF ALIENS BUILT THE PYRAMIDS!" and such.

So yeah, maybe those "very complex recommendation algorithms based on deep
learning/statistics" are perhaps not always such a great idea. In this
particular case, it's just a mere annoyance for me, but imagine a 13-year-old
watching a few genuine documentary videos on Egypt and then seeing this
bullshit; they don't have the capacity I have to see it's bullshit.

And imagine if it was on a more serious topic than who built the pyramids...

If I were to ask a YouTube engineer "why did I get this recommendation
specifically?" then the answer would probably be "dunno".

An additional issue is that the YouTube of yesteryear was much better in
browsing random videos. Now everything is based on what I've watched before,
instead of just "give me a list of science videos" or whatnot. This is also an
issue I have with Netflix (or rather, had, since I no longer have an account).

It seems to me that inscrutable mindless AI learning has a part in the spread
of misinformation and bullshit. I'm not sure how large that part is, but I
suspect it's significant. I'm hesitant about the total value in these cases,
regardless of what it may do for the bottom line in terms of revenue.

[1]: I'll just drop the link to it here in case anyone's interested:
[https://www.youtube.com/watch?v=hnsNwwwHm2I](https://www.youtube.com/watch?v=hnsNwwwHm2I)

~~~
arethuza
My biggest issue with YouTube is that it shows me adverts for the same product
for about 2 months at a time and I therefore end up hating those products
and would _never_ buy them.

------
cjhanks
We have also been watching these machine learning models for 6 months:

- increase the volatility in virtually every financial market they touched

- be exploited by adversarial learning networks to amplify funded propaganda
as news

- use poorly contrived sentiment analysis to generate incomprehensibly
meaningless news headlines

These non-linear "function approximators" have absolutely unpredictable and
insane non-linear behavior where learned information was non-existent or
sparse.

God help us all if one of these artificial intelligence devices is driving
down the road and sees a red stop sign that is a square, rather than an
octagon.

~~~
EE84M3i
Serious (and likely ignorant) question - what does linearity have to do with
anything here? Linear over what, and why does non-linearity make something
'unpredictable'?

~~~
YeGoblynQueenne
Linear models have more bias, so they represent current data less well and are
more predictive of future, unseen data (think of a straight line through a
point cloud).

Non-linear models have more variance so they represent current data better and
are less predictive of future, unseen data (think of a line snaking around a
point cloud).

An added complication is that deep neural net models are, in practice, vectors
(or, well, tensors) of numbers so they are difficult to interpret. This and
their extreme variance makes it hard to know how they will behave in the
future.
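
A toy illustration of that bias/variance point (synthetic data, scikit-learn):
a straight line underfits the training points but holds up roughly as well on
held-out ones, while a high-degree polynomial nails the training points and
tends to fall apart on data it hasn't seen.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 40)
    X_train, X_test, y_train, y_test = X[::2], X[1::2], y[::2], y[1::2]

    for degree in (1, 15):   # 1 = high bias, 15 = high variance
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        print(degree,
              mean_squared_error(y_train, model.predict(X_train)),   # error on training data
              mean_squared_error(y_test, model.predict(X_test)))     # error on unseen data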

~~~
perl4ever
I'm not good at math, but I'm confused by the association of AI with non-
linear stuff, setting aside the association of non-linear with "bad". I
thought ML involved linear algebra or something (says xkcd!) which would
presumably be...linear?

~~~
klipt
The underlying derivatives are linear (like all derivatives), but neural
networks' ability to approximate arbitrary non-linear functions is one of
their biggest strengths.
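
A toy example of that (scikit-learn): XOR isn't linearly separable, so no
single linear model can learn it, but a tiny MLP with a non-linear activation
typically can.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])   # XOR

    linear = LogisticRegression().fit(X, y)
    mlp = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh', solver='lbfgs',
                        max_iter=5000, random_state=0).fit(X, y)

    print(linear.score(X, y))   # can't reach 1.0 - no single line separates XOR
    print(mlp.score(X, y))      # typically 1.0 - the non-linearity does the work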

~~~
perl4ever
Yes, so I'm left wondering, when making the association of the math to the
badness, how do you decide if the linearity or the non-linearity is the
salient part?

~~~
coldtea
Mathematically, you can think of "linear" AI problems as "easy to solve", and
non-linear as "difficult". That's part of what the parent means.

Some function being linear means it's easier to guess. If a real world
phenomenon is tied to a linear function, then it's easy for AI to
guess/approximate.

------
aljgz
Economist editor to writer: This year, X is on the down slope of hype cycle
and no one is talking about it, gimme something ASAP.

Economist writer: opens their "pessimist template"

"X has over-promised and under-delivered, A, and B have not commercialized
yet, and may never be. X cannot do C yet. The challenges of X are D, E and F"

Replaces A...F with the most prominent examples they can find. Boom, we have
an article.

I read through the article, hoping for a bit of important info. I wish we had
a vote on HN: "Isn't worth your time, don't read it"

~~~
LetThereBeNick
It sounds like you are ready to program an NLP model to generate these articles

------
gwbas1c
My general guidelines about AI (Machine learning, programming):

1: Computers can't read minds! Your algorithm might know that I like the
Beatles because I listen to them a lot, but it can't predict that I woke up
today craving to listen to some music from my childhood.

2: You don't know what you don't know! Your algorithm might make 24 frames per
second film look smoother at 60 fps, but if something like wheel spokes move
backwards at 24fps, it'll have a tough time getting the wheel to move the
right way at 60fps.

3: Just because you have the information, doesn't mean you know how to write a
program that can extract the knowledge you're looking for.

Which really means we need a super advanced AI with a worldview and context in
order to automate certain kinds of information processing. I don't expect this
anytime soon.

------
g_airborne
Like others are saying, the progress towards AGI isn’t great, but each
individual subdomain is seeing great advances all the time. Object detection,
facial recognition and NLP with GPT are much better than they were a few years
ago. Each of these can provide business value to a certain degree, but I would
agree that something resembling AGI holds the most business value. For this to
happen all of the pieces have to be put together somehow - right now research
focuses on specific subdomains and improves the SOTA on them. Once someone
figures out how to make everything work together, it could mean a second, much
larger wave of AI. So the question is, when will that happen?

------
asutekku
Apart from a true AGI, the research has progressed really fast. We’ve seen
huge leaps in e.g. image generation and speech synthesis during the last few
years, and I’m really interested in what the future will bring.

------
randcraw
For a rather complete view of what deep learning still does not do, I
recommend the work of Gary Marcus and Ernest Davis. While their work can seem
solely critical, I think they make very good points about the limits inherent
in deep learning as we know it now, and how it needs to grow to overcome those
deficits.

"Rebooting AI" [https://www.penguinrandomhouse.com/books/603982/rebooting-
ai...](https://www.penguinrandomhouse.com/books/603982/rebooting-ai-by-gary-
marcus-and-ernest-davis/)

And a few articles, for audiences both popular and technical:

Deep Learning: A Critical Appraisal
[https://arxiv.org/abs/1801.00631](https://arxiv.org/abs/1801.00631)

The Next Decade in AI
[https://arxiv.org/pdf/2002.06177.pdf](https://arxiv.org/pdf/2002.06177.pdf)

How to Build AI We Can Trust [https://www.nytimes.com/2019/09/06/opinion/ai-
explainability...](https://www.nytimes.com/2019/09/06/opinion/ai-
explainability.html)

And a HBR podcast ("Beyond Deep Learning")
[https://hbr.org/podcast/2019/10/beyond-deep-learning-with-
ga...](https://hbr.org/podcast/2019/10/beyond-deep-learning-with-gary-marcus)

------
mD5pPxMcS6fVWKE
Maybe it's because everyone talks/chats every day with "virtual assistants" at
banks and every other organization, and never ever finds them useful. Their
main purpose is to frustrate you enough so you give up trying to connect to a
real person.

~~~
ksaj
The number of times and different places I've heard the sentence "Please
listen closely, as all our options have changed", even when the options
haven't changed in years, suggests that this is new whine in old bottles.

------
somewhereoutth
Previously, when "The computer says no" it was possible that an engineer
somewhere might know _why_ the computer said no.

With AI, _nobody_ knows why the computer said no.

------
neonate
[https://archive.vn/USQ6M](https://archive.vn/USQ6M)

------
ur-whale
[http://archive.is/RZD7w](http://archive.is/RZD7w)

------
cageface
I think there's an opportunity right now for new, human curated indexes in the
style of the old Yahoo index. AI content generation is getting too good at
fooling AI curation and I'm getting less and less value in broad searches on
Google. I would pay a monthly fee for hand-vetted lists of the best content on
topics I'm interested in.

------
renewiltord
It's limited but so effective! The other day a friend asked for photos of her
sister (also a friend) that I had because it's her birthday and she wanted to
make a collage. I just searched on Google Photos by her name and it found a
bunch because of the face classification. That's some good shit.

~~~
andbberger
works great until I add epsilon of adversarial noise and turn your sister into
an ostrich

~~~
webmaven
The hard part of that would probably be getting unauthorized access to the
photos in order to modify them in place, rather than the adversarial image
perturbation per-se.

------
msla
These days, you can translate text by pointing your phone at it and taking a
picture. Thirty years ago, this would have been unambiguously AI, because it
would have been not only impossible, but _stupid impossible_ like something
out of a soft SF novel where little self-flying robots deliver stuff to your
house, or you can ask a computer a question in a natural voice and reasonably
expect a civil, natural-language answer sourced from global databases.

Ah, but all that _works_. If AI is perpetually defined to be "that which does
not work" then it's perpetually potential, perpetually postulated, perpetually
possible perfection. It never has to be compared to a clunky translation, or a
drone that gets shot down. Unwritten novels never have story problems in the
third act, unwritten programs never give ludicrous output.

~~~
lambdatronics
Yes, but you're missing the point. In 1950s sci-fi, those marvels were
possible because there was imagined to be something like a _general_
artificial intelligence behind the technology. We've achieved _narrow_ AI, but
the perception is that in order to get it, we would already have _general_ AI,
which is why people are disappointed.

~~~
robbrown451
Whether 50s sci-fi imagined or implied other things is irrelevant to the
question of whether or not the current capability described (point camera at
sign, get translation) qualifies as AI.

The point is we have current things that are quite amazing, and would at one
time have been considered to be the sort of thing that only an AI would be
able to do, and yet we keep moving the goalposts. As if AI is defined as "that
which humans can do but machines can't, done by machines".

~~~
jpttsn
But was it the “capability” that qualified as AI in the 50s? Or was the
capability just one example of what the AI could do?

Suppose we said we’ve invented Jesus because we’ve invented ways to walk on
water and turn it into wine.

~~~
robbrown451
I don't get your Jesus analogy; I mean, Jesus is a proper noun for an
individual.

A technology is different. Its capabilities pretty much define it. Unless you
are going to try to get all philosophical about it and say it doesn't count if
it doesn't experience qualia or something (which is nonsense). Or unless you
are defining it in ways that specifically call out the implementation details:
a helium balloon isn't a hot air balloon, not because it lacks the
capabilities, but because you've specifically said in the name that it must
use hot air.

~~~
jpttsn
If a client/muggle asks me to build an “AI” I’ll be wary that their “spec” is
sometimes just examples of what the “AI” should be able to do: play chess,
write a poem.

In their mental model, the AI is far from defined by these capabilities. They
won’t be happy unless there’s an actual AGI whose capabilities happen to
overlap with the spec.

So my point is historically we have cheated our way out of defining
“intelligence” and instead given necessary but insufficient examples. I think
this is the mechanism behind the goalpost-shifting in “AI”.

That’s the metaphor: it would clearly be absurd for engineers to define Jesus
by some examples of capabilities. People who are waiting for the second coming
won’t be satisfied.

In the lab we maybe should define AI by some set of capabilities. But clients,
journalists etc. picture the Hollywood version, and narrowly fulfilling their
spec won’t actually satisfy them.

------
zby
This sounds like what was written in 2001 about the Internet. I don't know -
videoconferencing is still a problem, but there is progress.

------
seibelj
I’m a longtime AI skeptic who has been arguing passionately against the doom-
sayers, many in my own family and in casual conversations with laymen, for
several years. I stand by this article I wrote which summarizes my views
[https://medium.com/@seibelj/the-artificial-intelligence-
scam...](https://medium.com/@seibelj/the-artificial-intelligence-scam-is-
imploding-34b156c3537e)

The hype on AI was absolutely astonishing. I’m glad it’s finally coming back
to reality.

~~~
fxtentacle
That article is well argued and I am inclined to agree. But then I noticed
that we seem to strongly disagree on other topics.

I find it fascinating that you can both see AI as what it truly is (a
rebranding to solicit investment) yet still cheer for cryptocurrencies, which
I personally believe to be not much more than a pyramid scheme to dupe those
investors that buy in late. In my opinion, both AI and crypto are quite
similar in the ways that they mislead investors by promising a golden future.

~~~
seibelj
I am a longtime crypto guy, and totally 100% understand how you see the links.
I have a very nuanced opinion on this, but it's difficult to explain briefly
here. There are a lot of scams in crypto, but if you are a believer in
Austrian economics and are not a fan of Keynesian theories, fiat currency, and
the Federal Reserve, then Bitcoin / crypto is very attractive.

But that's a separate discussion, unrelated to my opinion of AI and is
primarily a philosophical issue. Cryptocurrency / blockchain is fascinating
but we are not promising robots cleaning your house.

~~~
fxtentacle
I agree that AI has been way worse with the overpromising.

But when I hear IPFS = interplanetary file system and then see how poorly it
performs in practice and that it's mostly used for illegal content, I cannot
help but think that the crypto side also likes to oversell their practical
utility.

I believe I have yet to see an application where the Blockchain is truly a
critical component. In most cases, it seems that people end up caching its
data in SQL to speed things up, meaning that they're working on their own
private copy now.

------
ilaksh
People are going to complain that AI is lame right up until the point that it
gets general enough to make them all irrelevant in terms of work productivity.
Then rather than modifying society to distribute the gains, they will leave
the outdated structures in place and try (too late) to suppress it.

At no point (until it's too late) will there be effective legislation
discouraging the creation of fully general and autonomous digital persons that
compete with humans.

~~~
LetThereBeNick
With that attitude I would like to pledge my undying allegiance to you

------
logicslave
Amazon retail is currently in the process of baking advanced deep learning
models into every area of their retail process. It's all becoming automated.
Just because it's not talked about openly doesn't mean it's not happening.

These people at The Economist are completely out of the loop.

------
rb808
I'm still not sure what successful AI implementations there have been. Stuff
like Amazon/Spotify recommendations seem sensible. Is there anything else out
there that is impressive?

~~~
amelius
Google Search?

~~~
sixQuarks
Google search has gotten worse over the years though. Perhaps that's because
the amount of information is growing exponentially. So many of the search
results seem spammy these days.

~~~
luisvictoria
My guess would be because everyone's doing SEO, and since everyone's competing
to be at the top of people's Google searches, the results are often inadequate

------
panabee
AI is overhyped, but there is much promise.

maybe the best way to conceptualize this, inspired by @random_walker, is to
compare AI in the 21st century to machines in the 20th century.

in the industrial age, machines automated rote physical tasks.

in the information age, AI could automate rote mental tasks.

the more objective and templatized the task, the more vulnerable it is to AI
displacement. conversely, the more subjective and creative the task, the safer
it is from AI displacement.

------
mlthoughts2018
The problem with articles like this is that they don’t properly represent the
spectrum of what an ML project can be for a business.

If all you’re talking about is self-driving cars or voice-operated assistants,
then sure, the article’s mostly right. Modern techniques that have
revolutionized ML in the past ~15 years have not translated to massive new
economic gains in many areas they were anticipated to affect.

But this is the vast vast minority of all ML projects.

Many of the most economically successful ML projects I’ve run in my career are
very simple, and ruthlessly focus on business value from the outset. A lot of
them involve automating inefficient manual processes, things like spam
filtering, phishing detection, fraud detection, automatic keyword tagging,
automatic metadata classification in images or text, simple time series
forecasting for logistics or consumer demand, simple models for customer
churn, and a wide variety of different customized search engines for big &
small content collections.

Just for one example, I worked on a project to automatically validate metadata
about human models appearing in images, to flag discrepancies between
documented ages / ethnicities within legally required model release
documentation and the real appearance in images, to find fraud (especially
when minors were used in stock photography).

This saved _millions_ of dollars annually in human review & legal costs for
when that platform incorrectly approved photography with invalid or fraudulent
accompanying release documents for the human models.

In just one project, a team of six engineers paid for itself about 5 times
over and the delivered software requires minimal maintenance and only became
more valuable as the platform grew larger. In fact that was one of the only
times in my career when a non-finance company chose, discretionally, to pay
larger bonuses than in employee job agreements as a reward.

That project did happen to use a deep neural network for image metadata
prediction, but it was fairly mundane and easily trained on 2 average GPU
machines from a dataset of only a few hundred thousand images.

Edit: added below

I’ve also observed across several companies that there’s a big variation in
outcomes and success of ML based on the level of investment in infrastructure.

It’s not about pumping money in for some crazy GPU cluster or huge framework
for massively parallel training, but you do need to separate ML operations
from the ML engineers who research solutions for products and internal
stakeholders.

It’s a situation where domain specialty has to be used efficiently or you’ll
waste a ton of money and time. If you hire an expensive senior engineer for ML
(salary easily north of $200K in large cities), but you task them with
managing a database or operating kubernetes or debugging partitions in HDFS,
you probably won’t get a good return on your investment.

------
mrfusion
What’s the next big thing after deep learning?

~~~
spaetzleesser
There should be something like an X Prize for a recycling device: take a trash
pile, sort through it, and get things ready for recycling. This would solve a
big environmental problem and would deliver progress in robotics, vision and
AI. I'd be more excited about this than self-driving cars.

~~~
discjockeydom
[https://zenrobotics.com](https://zenrobotics.com)

------
martincmartin
The next article in the series, see the menu on the left, describes Donald
Knuth as "a programming guru." :)

~~~
johnwheeler
You want “ _the_ programming guru”?

~~~
albntomat0
A "programming guru", to me at least, doesn't cover his accomplishments in
algorithms, among others. It conveys someone who produces a lot of good, high
quality code, but not novel research.

------
blackrock
I heard that some big company was selling an AI powered database.

Like you can just talk to your database, and the AI will magically return you
the results.

At that point, I think their marketing team had totally lost the narrative.

There is no spoon.

------
inimino
So now that it's not the latest hype train, we're back to calling it "AI"? For
the love of Christ, dear Economist, ML and AI are not the same thing and we
have words for both of them. Use your words, people!

An understanding of AI's limitations is as far from sinking in as the average
MBA is from comprehending _Finnegans Wake_. An understanding that machine
learning is not the entire 60-plus-year-old field of artificial intelligence
would be nice to see for once from an institution that supposedly prides
itself on precision of language and accuracy of reporting.

An understanding of Crichton's Gell-Mann amnesia effect is slowly starting to
sink in, though, to at least one former subscriber.

------
ipiz0618
[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect)

------
juliettebe
Article with out paywall:
[https://outline.com/TDeWbY](https://outline.com/TDeWbY)

------
pengaru
Is it just me, or is AI never capitalized in TFA?

Every time I encountered the word while reading it was like a cache miss for
my brain...

~~~
gwern
Do you have NoScript or some other blocker on? As far as I can tell,
Economist.com is using smallcaps for all acronyms but for some reason, the
smallcaps are implemented solely through JavaScript, so if you read without
that, you just see lower-case. The lowercase is because if you have uppercase,
smallcaps does nothing - it can't become 'small capitals' because it's already
large capitals, as it were. Letters need to be lowercase to be smallcaps. So,
that's why you see 'ai' instead of 'AI'. So the 'ai' can properly transform
into 'ᴀɪ'.

This is bad because JS is not necessary, and it is _also_ not necessary to
write in lower-case in the first place! I know this because I do a similar
thing on gwern.net: in addition to manually-specified smallcaps formatting, a
rewrite pass while compiling automatically annotates any >=3-letter acronym.
However, I do it purely via CSS, and I also don't need to lowercase anything.
How? Because a good font like Source Serif Pro will include a special feature
which will smallcaps just capital letters: 'c2sc' in `font-feature-settings`
CSS. So to do smallcaps correctly, of regular text & uppercase, you have 2 CSS
classes: "smallcaps" and "smallcaps-auto". "smallcaps" gets the normal 'smcp'
font feature and operates the usual way, lowercase gets smallcaps, uppercase
is left alone. "smallcaps-auto" is used by the acronym rewrite pass, and it
does ''smcp', 'c2sc'' instead, so "AI" does indeed get smallcaps.

This way, I need 0 JS, I don't need to write everything in lowercase, copy-
paste works perfectly, I don't need to manually annotate acronyms unless I
want to, and everything Just Works for every reader.

(I don't, however, smallcaps two-letter things like "AI". That's just silly
looking.)

~~~
pengaru
Ah, yes, I am using NoScript. That explains it; JavaScript is destroying the
web.

~~~
quicklime
I just disabled JavaScript by setting Firefox's "javascript.enabled" flag to
false, and it still renders fine for me. Here's what the HTML source looks
like when I grab the page using curl:

    
    
      predicts that artificial intelligence (<small>AI)</small> will add $16trn to the global economy by 2030
    

So it looks like it's coming through in caps from the server. I don't think a
lack of JavaScript by itself is causing the problem...

~~~
gwern
> I don't think a lack of JavaScript by itself is causing the problem...

<small>, however, does not do lowercasing. All it does is make the font
smaller. 'AI' should be caps regardless of whether it is wrapped in <small> or
not. That implies that something in the CSS or HTML is overloading <small> to
make it lowercase, so it can be correctly transformed to uppercase smallcaps
for the reasons I explain above, which it assumes will be done by the later JS
(which however doesn't run under NoScript). Using <small> just makes it an
even uglier unnecessary hack...

------
m0zg
Crucially, it's not "beginning to sink in" at the Economist, since they don't
have a faintest clue what they're talking about. If there's anything the past
3-4 months have taught us (or at least the smarter subset of "us"), one should
be careful about predicting future state of exponential processes _even if_
one is expert in the field.

------
dustingetz
In the near future, gradient descent optimization of simple targets like
PROFIT and REVENUE - with even less accountability to negative externalities
than human CEOs have today - is going to FUBAR literally everything

------
corporateslave5
NLP is getting better at a rapid rate. Articles like this are worthless click
bait. No one knows what ML will look like in ten years. And for the record,
ten years is a short period of time for a technology that could have such a
massive impact.

~~~
distant_hat
People don't understand what exponential improvement means. GPT-3 is a
175-billion-parameter model. Another few rounds of doubling and we could be seeing
models spit out short stories and novellas.

~~~
ur-whale
Yet, the darn thing still can't reason.

~~~
jbay808
Five years ago we would have said "the darn thing still can't write a cohesive
paragraph".

~~~
wnoise
That's still true. It can write a paragraph that's usually grammatical, and
can stay on topic, but it's missing things like facts, or even the ability to
remember which side of an argument it's taken previously.

~~~
jbay808
I think over the course of several paragraphs that's true, but within one it
tends to be pretty good.

