
The BS-Industrial Complex of Phony A.I. - scottlocklin
https://gen.medium.com/the-bs-industrial-complex-of-phony-a-i-44bf1c0c60f8
======
atoav
At the Biennale in Venice (one of the most important art shows there is) I saw
a work which looked like this:

There was a metal frame holding two glass plates with Venetian sediment in
between (sand, soil, mud). In the center there was another metal frame which
formed a hole. There were also two PCB boards with ATmega microcontrollers.

In the text the artist claimed she controlled the biome of the soil with an AI
using various sensors and pumps.

This was clearly a fake, as you could see nothing like that on the PCB.

Accidentally (?) she managed to create the best representation of AI I have
seen in art: all that counts is that you _call_ it AI even if it is a simple
algorithm. AI is the phrase behind which magic hides and people _love_ magic.
Everything that has the aura of “humans don’t fully understand how it works in
detail” _will_ be used by charlatans, snake oil salesmen and conmen.

If even artists slap “AI” onto their works to sell them, you know we are past
the peak now.

~~~
YeGoblynQueenne
>> Accidentally (?) she managed to create the best representation of AI I have
seen in art: all that counts is that you call it AI even if it is a simple
algorithm.

Backpropagation, which most researchers will agree is an AI algorithm, is a
"simple algorithm".

So are many other AI algorithms, some of which are simple enough to be
understood so well that most people don't recognise them as AI anymore: search
algorithms like depth-first, breadth-first or best-first search, game-playing
algorithms like alpha-beta minimax, and gradient descent / hill climbing are
the examples that readily come to mind.
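
To make the point concrete, here is a minimal hill-climbing sketch in Python
(the objective function and step size are made up for the example):

    import random

    def hill_climb(f, x, step=0.1, iterations=1000):
        """Greedy local search: keep any random neighbour that scores higher."""
        best = f(x)
        for _ in range(iterations):
            candidate = x + random.uniform(-step, step)
            score = f(candidate)
            if score > best:
                x, best = candidate, score
        return x, best

    # Toy objective with a single peak at x = 2.
    print(hill_climb(lambda v: -(v - 2) ** 2, x=0.0))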

I think the above article and your comment are assuming that, for an algorithm
to be "AI", it must be very complicated and difficult to understand. This is
common enough to have a name: "the AI effect". A few years down the line I bet
people will say that "this is not AI, it's just deep learning".

There's no reason for AI algorithms to be complicated. Very simple algorithms
can create enormous complexity, even infinite complexity. The state of
deterministic systems with even a couple of parameters can become impossible
to predict after a small number of steps if they have the chaos property.
Language seems to be the application of a finite set of rules on a finite
vocabulary to produce an infinite set of utterances. Complexity arises from
very simple sources, in nature.
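
A small sketch of that last point, using the logistic map, a standard textbook
example (the parameter r = 4 is chosen here only because it lies in the
chaotic regime):

    def logistic_map(x0, r=4.0, steps=20):
        """Iterate x -> r * x * (1 - x); chaotic for r = 4."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # Two starting points differing by one part in ten thousand
    # end up in completely different places within ~20 steps.
    print(logistic_map(0.2000)[-1])
    print(logistic_map(0.2001)[-1])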

~~~
atoav
The point was that her PCB wasn’t connected to anything at all. She claimed
there were pumps and sensors, but there was literally nothing. There were
cables etc and it certainly would fool someone who has no idea of circuit
design and electronics, but I happen to know a bit about it and the circuit
almost certainly didn’t do what she claimed it did.

~~~
YeGoblynQueenne
Ah, I see. I must have misread your comment. I thought you meant that the PCB
didn't have anything like (a hardware implementation of?) an AI algorithm on
it, not that it had nothing at all on it.

------
dreamcompiler
This happened right before the first AI Winter in the late 80s: AI (in the
form of expert systems) solved a number of hard problems and was hyped as
being able to solve _every_ problem. Reality set in when we figured out:

1\. It didn't scale and

2\. Getting 80% of the problem solved was easy, but getting that last 20% was
very, _very_ hard. Maybe several orders of magnitude harder than the first
80%.

Nowadays we don't seem to have problem 1 quite so much, but problem 2 is still
there in a big way. Witness self-driving cars, where driving on an interstate
highway in broad daylight is easy, but driving through a snow-covered
construction zone at night is impossible. Or just dealing with a bicyclist on
the road without killing them.

We're not going to have AGI any time soon.

~~~
tintor
AGI is not needed for self-driving.

None of what is mentioned above is a deal-breaker for self-driving car
service:

\- lidar at night works just fine

\- plenty of cities with no or very little snow

\- construction zones: blacklisting, remote monitoring & manual mapping,
detection of cones, barriers, re-painted lanes

\- self-driving cars with 360 degree view and plenty of patience and no
distraction are safer for bicyclists than manually driven cars

~~~
cm2187
> _plenty of cities with no or very little snow_

But what happens that rare day it snows? Hundreds of deaths? Cars get recalled
for much less than that.

~~~
laichzeit0
Drive it in manual mode like we do right now? I mean even an autonomous car
that could handle 80% of normal driving just fine would be great. And by 80%
I'm talking about driving on a freeway without crashing into a barrier in
broad daylight.

~~~
mcguire
Gonna suck for those people who don't own cars and are relying on the fleets
of privately-owned pseudo-taxis.

------
pron
In 1949, some years after the invention of neural networks, Norbert Wiener,
one of the leading minds of the time, was convinced that AI (AGI as you may
call it) or a full understanding of the brain is no more than five years away.
Alan Turing thought Wiener was delusional, and that it may take as much as
fifty years. Seventy years later, we are nowhere near insect-level
intelligence.

I don't see any fundamental barrier preventing us from achieving AI, but if
someone from the future came to me and said that AI will be achieved in 2130,
I would find that quite reasonable. If they said it will be achieved in 2030
or 2230, I would find those equally reasonable. Our current scientific
understanding is that we have no idea how far we are from AI, we don't know
what the challenges are, and we don't even know what intelligence is. We
certainly have no idea whether the approach we are now taking (statistical
clustering, AKA deep learning) is a path that leads to AI or not.

In the sixties, the leading minds of that time were also working hard on the
problem and did not find it any further away than we do today. That some
people are optimistic is irrelevant. The fact is that we just have no idea.

~~~
cr0sh
> Seventy years later, we are nowhere near insect-level intelligence.

That's arguable: For instance, we have the entire connectome of c. elegans
mapped out; we can easily simulate it, and it seems to act the same as the
actual nematode. So, in one sense, we are at that level.

However, we still have no clue how such a simple system actually works to
produce the level of "intelligence" it has. So in that sense, we're not at
that level at all.

> We certainly have no idea whether the approach we are now taking
> (statistical clustering, AKA deep learning) is a path that leads to AI or
> not.

One clue we do have:

We may not be on the right path with that method; it's something the
"grandfather" (or whatever) of AI (Hinton) has mentioned, and which I have
stated before about...

That is, the fact that we currently have no understanding of the mechanism by
which biological neural networks implement anything like "backpropagation".
From what we currently understand, as I currently understand it, we have yet
to find such a mechanism that would allow for it.

It's also one of the leading reasons why our current artificial neural
networks consume so much power, as compared to biological systems...

~~~
pron
> For instance, we have the entire connectome of c. elegans mapped out... So,
> in one sense, we are at that level.

Well, whatever "intelligence" C. elegans has, I think everyone would agree that
it's far from insect-level; it's microscopic-nematode-level. But I am not
sure a _simulation_ of C. elegans rises to the level of "artificial". As you
note, we don't understand it yet. But we may have already built systems that
are more "intelligent" (whatever that means) than C. elegans, and we may have
done that decades ago.

> From what we currently understand, as I currently understand it, we have yet
> to find such a mechanism that would allow for it.

True, but our path to artificial intelligence may not end up going through
neural networks at all. We did not achieve flight by mimicking biological
flight. I'm not saying it won't go through neural networks, either, but we
cannot say for sure that it will. We really don't know.

------
YeGoblynQueenne
>> Deep learning algorithms have proven to be better than humans at spotting
lung cancer, a development that if applied at scale could save more than
30,000 patients per year.

It's not easy to scale deep learning because deep neural nets have a very
strong tendency to overfit to their training dataset and are very bad at
generalising outside their training dataset.

In a medical context this means that, while a particular deep learning image
classifier might be very good at recognising cancer in images of patients'
scans collected from a specific hospital, the same classifier will be much
worse in the same task on images from a different hospital (or even from a
different department in the same hospital).

To overcome this limitation, the only thing anyone knows that works to some
extent is to train deep neural nets with a lot of data. If you can't avoid
overfitting, at least you can try to overfit to a big enough sample that most
common kinds of instances in your domain of interest will be included in it.

So basically to scale a diagnostic system based on deep neural net image
classification to the nation level one would have to train a deep learning
image classifier with the data from all hospitals in that nation.

This is not an easy task, to say the least. It's not undoable, but it's not as
simple as having someone at Hospital X download a pretrained model in
Tensorflow and train its last few layers on some CT scans.
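
For reference, this is roughly what the "download and retrain the last few
layers" recipe looks like; a hedged sketch only, where the backbone, layer
sizes and the `scan_dataset` name are illustrative, not taken from any of the
systems discussed:

    import tensorflow as tf

    # Generic ImageNet-pretrained backbone, frozen.
    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          pooling="avg")
    base.trainable = False

    # New classification head, trained on the hospital's own scans.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. cancer / no cancer
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(scan_dataset, epochs=5)  # scan_dataset is a hypothetical tf.data.Dataset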

~~~
bodono
This statement is false, as recently demonstrated by DeepMind on retinal
scans. Not only did they generalize outside of the training dataset _but they
were able to use the features learned by the model on an entirely different
type of scanning device_.

[https://www.nature.com/articles/s41591-018-0107-6.epdf?autho...](https://www.nature.com/articles/s41591-018-0107-6.epdf?author_access_token=PAbvHEuv_YYmrPVbG5HqKdRgN0jAjWel9jnR3ZoTv0P43NEH20hFuvBoJk6cvICihn8kmL6tmejFlnuPlbT_0KmJgK6N07SPh_ZLy0Nxb0-LAGIDBaH1fjJTkD9ahUEQpRlEudtlG9E1v3ca9xNQcQ%3D%3D)

"Moreover, we demonstrate that the tissue segmentations produced by our
architecture act as a device-independent representation; referral accuracy is
maintained when using tissue segmentations from a different type of device."

~~~
YeGoblynQueenne
In the paper you link to, the researchers trained an image classifier on data
collected from 32 sites of the Moorfields NHS trust. The trained model was
tested on, presumably held-out, data from the same dataset.

This is an example of scaling a model beyond a dataset collected from a single
site. It is not contrary to what I say in my comment.

The researchers further tested their model on data obtained from a different
device than it was originally trained on. This data was collected from the
same hospital sites. The original model performed poorly on this new data and
was re-trained to improve its performance.

This does not demonstrate an ability to generalise to unseen data, only an
ability to adjust a model to new data, by re-training.

~~~
bodono
It contradicts your statement: "it's not as simple as having someone at
Hospital X download a pretrained model in Tensorflow and train its last few
layers on some CT scans." Because in this case it _was_ as easy as taking a
model from a totally different modality and retraining the first (in this
case) few layers to accommodate the new device. Furthermore the original
training used 15k scans and the retraining only required 152 scans. This is
totally reasonable and clear evidence of transfer and generalization.
Moreover, even human operators require retraining on new devices!

~~~
YeGoblynQueenne
My Tensorflow comment was a bit unclear. I meant that you can't just download
a generic model like the kind that is readily available, e.g. one trained on
ImageNet or CIFAR etc, and expect that you can retrain it easily and get a
diagnostic tool that is competitive with an expert. The models in the paper
you link were specifically trained on medical imaging data.

My point is that you need a lot of work to make this work even for one
hospital, let alone scale to many, even more so scale at the level of a
national health service. I don't see that the paper you link contradicts this.

Edit: if I may summarise: I said "it's not simple" not "you can't do it".

Transfer learning is not generalisation to unseen data. If the pre-trained
model and the end model don't have any common instances it doesn't work [Edit:
"don't have any instances with a common feature space" is more clear].

Also, you're talking about generalisation to new devices. My understanding is
that this is only one aspect of the difficulties with scaling image
recognition for medical diagnoses to data from different sites.

------
Barrin92
In my opinion the term intelligence itself is misplaced for machine learning
tasks. Every problem that is solved with ML and "big data" appears to me to be
a perception problem (which wouldn't be surprising, because the mechanism is
inspired by human vision, not cognition, so it lends itself naturally to
perception).

As a specific example, a few months or so ago OpenAI released their text
generation tool and branded it as "too dangerous to release", claiming it
could, with the help of AI, generate believable texts.

But what it generated was simply natural-sounding gibberish. There were plenty
of sentences in the text along the lines of "before the first human walked the
earth, humans did..."

What, for me at least, lies at the core of intelligence is understanding
semantics. An intelligent system can recognise the sentence above as flawed
because it could extract _meaning_.

Everything coming out of the field of ML seems to me just like sophisticated
statistics. In many ways symbolic AI to me still seems more valuable, profit
aside.

~~~
Wiretrip
In the text generation tool outlined above (and indeed many of the convnet-
based visual networks), the hidden layers are there precisely to extract
'meaning'. The lower layers (closer to the source input) deal with syntax and
feed upwards to hidden layers that extract semantic features, which in turn
feed upwards to more layers, each with a bigger overview of the semantic
features and thus ultimately the context. That's the idea anyway.
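
A toy sketch of that stacking idea (layer sizes and types are arbitrary here,
not the actual architecture of the tool being discussed):

    import tensorflow as tf

    # Each layer sees a wider span of the input than the one below it.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=64),     # tokens
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),  # local "syntax"
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),  # wider patterns
        tf.keras.layers.GlobalAveragePooling1D(),                      # whole-text summary
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])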

~~~
foldr
>the hidden layers are there precisely to extract 'meaning

That is just wishful thinking, no? I mean, there is no particular reason to
think that the hidden layers will actually do this with any high degree of
success.

------
xiaolingxiao
I can attest. While doing research at a T1 university, all the professors were
mildly disgusted by the hype pushed out by startups, and even Google's own
internal marketing department.

Nonetheless they too are minting the same nonsense in the "introduction" part
of academic research; it's a clear case of everyone playing the game, so "I
have to play or be left behind."

~~~
ImaCake
I used to work on fundamental molecular microbiology. We looked at what
happened when DNA replication went wrong in E. coli.

What I used to do when writing or speaking about it was to start with cancer
or antibiotic resistance as if anyone in my field gave a crap about either of
those topics. Sure, we do care about those things in the broad sense, but we
didn't consider ourselves to be on the front line of solving either of those
problems.

------
solidasparagus
The author seems confused about what artificial general intelligence is.
People have not meaningfully moved towards AGI - it's still a distant pipe
dream.

The closest we've gotten is probably a Dota bot that's pretty good as long as
you give the bot a huge advantage. Which is an incredible piece of technology,
but about as close to AGI as an ant is to a human.

~~~
Causality1
Not even an ant. If AGI is a human then what we have is the equivalent of
synthetic RNA molecules.

~~~
klmr
What? Ants don’t have general intelligence, and even ant colonies’ decision
making (= simple swarm intelligence) is readily replicable in a programmed
system, and has been, for a while.

I don’t think a gradual scale is very helpful because I don’t think that the
progression from current-generation AI to AGI is going to be gradual (it will
require at least one paradigm shift). That said, if you want to compare AI
progress to actual animals, then our current-gen AI is _way_ beyond ants. Note
that, while we haven’t fully mapped the neurons/connectome of ants yet, this
is unnecessary to emulate their decision-making power. And we _have_ mapped
(and can simulate) the full connectome of simpler animals (e.g. _C. elegans_ ,
_P. dumerilii_ ) so we’re definitely a long way beyond single molecules.

~~~
computerex
If you are referring to the open worm project, then the conclusions you have
drawn are exactly the opposite of what I have drawn.

As I understand it, OpenWorm is a hodgepodge of statistical and numerical
methods to try and replicate the sensorimotor behavior of C. elegans. OpenWorm
is neither complete, accurate nor elegant, despite our knowing the C. elegans
connectome and having mapped the some 900 cells in the worm's body.

~~~
klmr
I wasn’t explicitly referring to that, it’s just one of many efforts. Anyway,
you’re certainly right that none of the existing efforts are “elegant” but
that’s hardly relevant. What matters is that the connectome is fully mapped,
and that we _can_ accurately simulate arbitrary behaviour. The issue with
projects such as OpenWorm is that they have so far not been successful in
generating new _insight_ (this may be connected to your issue with lack of
elegance) but this is distinct from being able to accurately simulate
behaviour. Another issue is that of simulating the physical environment
because — surprise, surprise — simulating the worm neurons without any
realistic external stimuli is a pretty pointless exercise for most purposes.

But pick any set of stimuli you like, feed it into the models and you get a
response that corresponds exactly with empirical observation. I’d therefore
definitely call the neuronal model itself accurate and complete.

~~~
computerex
No, actually we can't do arbitrary simulation of C. elegans. Can you link me
towards a publication which contains validated results supporting your
assertion?

------
derka0
The hype is BS but narrow AI in the context of automation is here. Jobs are so
specialised nowadays (driving, cashiers, fulfilment, paralegal, diagnostician
...) that a narrow AI (i.e. a glorified automation algorithm) that can do just
10% better at a cheaper cost will take down the job. The confusion is real
(AI, AGI, terminator...) but pattern recognition software powered by big data
has already proven its business value and is here to stay.

------
mindgam3
> The technologists know it’s bullshit. Fed up with the fog that marketers
> have created, they’ve simply ditched A.I. and moved on to a new term called
> “artificial general intelligence.”

Not to detract from an otherwise excellent BS takedown, but unfortunately the
author fails to mention that there’s a non-zero possibility that AGI itself is
merely taking the bullshit to the next level.

It continues to astound me how some technologists actually believe AGI is not
just inevitable but around the corner. When from my naive perspective (as a
machine learning rank amateur but with several decades experience as a
professional human being) all I see is machines that can do some form of
pattern recognition, but nothing resembling the common sense that the words
“general intelligence” seemed to indicate at one point.

Minor quibbles about truth and meaning of words aside, I have to support any
article that skewers the soft underbelly of the phony AI ecosystem as
effectively as this one does.

~~~
roenxi
The real issue we are facing is that everything that we thought was not going
to be pattern matching and tree search has turned out to be pattern matching
and tree search. I remember my father telling me computers were never going to
be able to play Chess, because it required creativity for example. Nowadays a
neural network with tree search plays chess that looks remarkably human. A lot
of problem domains have fallen to what is basically pattern match and tree
search.

Extrapolating the trend of the last 30 years, there _is_ evidence that
computers will be able to solve every task a human can using pattern matching.
If that isn't AGI, it might turn out to be better than intelligence.

The technological future is unknowable, so believing AGI is certain is too
much. But believing it certainly isn't around the corner is also too little.
If computers can do anything a human can intellectually, they have reached
AGI. The list of discrete tasks (games, decision making once the parameters
are defined) a computer can't do is a very short list.

If someone finds an objective function for deciding what decision parameters
are important, AGI could be upon us very quickly. As a postscript, I think
people radically overestimate human intelligence.

~~~
rhacker
I kinda see it this way:

AGI is Data in Star Trek TNG - trying to be human, making decisions to want to
be alive, eventually dreaming and finally using an emotion chip. Another
alternative here would be Moriarty or the various doctors in Voyager.

AI is the Ship in TNG - lots of heuristics to figure out what the user is
trying to do. Past usage of commands and relating major events outside the
ship with algorithms for battle, life support, etc.. Events categorized by
importance and automatic handling to save lives when necessary. Basically an
extremely advanced Siri that doesn't really misunderstand you - while at the
same time not really caring about you or knowing anything about being alive
other than the priorities built into its software.

I think for the next 100 years we're going to have AI progressing like the
ship in TNG. I don't think we'll have AGI until maybe 100-200 years.

Then again when I was born no one had a fucking clue eventually we would have
something like the iPhone and talk to someone in China with <1 sec lag. So my
estimates could easily drop to half.

~~~
TomVDB
Your comment reminds me of this xkcd cartoon:
[https://xkcd.com/1425/](https://xkcd.com/1425/)

It's 5 years old now (coincidentally the time span quoted to develop a
solution), but recognizing a bird was already considered a solved problem 3
years ago, less than 2 years after the publication of the cartoon.

Predicting the future is hard.

~~~
AstralStorm
Surprisingly, recognizing a bird is much harder when you count rare species
and running birds. Thus, sparse data. Does it recognize penguins too?

AIs today still fail at it. Some folks were trying to train one to match
endangered species and they had to pull mighty tricks to have some 70%
accuracy. I think it was here on HN some time ago, but can't recall a link.

~~~
visarga
And yet, average humans can classify even fewer species.

~~~
tomp
With the same amount of training/data as NNs? I doubt it...

~~~
TomVDB
Do you take into account billions of years of training due to evolution?

~~~
tomp
Technically that’s network architecture, not training data... admittedly
though humans are “pre-trained” from birth.

------
lkrubner
I've been collecting examples of where the ads that I see are based on
extremely simple algorithms of the type that could have easily been supported
30 years ago, and yet I keep reading articles that suggest that the
advertising industry is deploying sophisticated tools to target ads to me. I
wrote about this recently:

\-------------------------

Despite much talk about Machine Learning and AI improving advertising results,
what I’m seeing is getting worse and worse. Despite billions invested, the ads
shown to me are much less relevant than the ads that I saw on the Web 10
years ago.

I hired 3 developers from Fullstack Academy. They were all great, so I went
and checked out the website, curious about the curriculum. And now, every
website I go to, I see an advertisement for Fullstack Academy. (See
screenshot.)

I’ve been writing software for 20 years. I’ve written semi-famous essays about
software development. I am not going back to school. I do not need to go to a
dev bootcamp. So why show me ads, as if I’m thinking of going to school?

For the last several years I’ve been seeing articles about the surveillance
economy. In theory, advertisers know more about me than ever before. In
theory, they know about my entire life. And yet, the ads I see are less
targeted than what I used to see online 10 years ago.

[http://www.smashcompany.com/business/when-will-machine-
learn...](http://www.smashcompany.com/business/when-will-machine-learning-and-
ai-improve-advertisings-ability-to-target-people)

~~~
onion2k
_So why show me ads..._

To keep the brand in your head so you post about it on Hackernews.

~~~
lkrubner
That’s a “just so” story. You’re looking at something that is easily explained
by incompetence, stupidity and irrationality, yet you’re working to transform
it in your head into something rational. Take a moment to think of the money
they’ve wasted and what do they gain? How likely is a sale? Did Fullstack
imagine this scenario when they authorized their marketing firm to spend this
money? Or is the marketing firm simply trying to spend money so they can bill
for something?

~~~
onion2k
I was being a bit facetious about you posting on here, but the point I was
making was serious. In advertising there's a thing called the "effective
frequency"[1] which is the number of times you need to see an ad before it has
an impact on you. Obviously this series of adverts has worked on you - you
know the brand and you use it as an example of which ads you remember. If the
company is advertising in order to raise the level of engagement they're
getting that's a fail; if their ads are intended to get people talking about
the company that's actually a pretty good result.

There are more reasons to advertise your business than simply "getting more
sales". Indirect communication is _very_ useful.

[1]
[https://en.wikipedia.org/wiki/Effective_frequency](https://en.wikipedia.org/wiki/Effective_frequency)

------
benreesman
If everyone sophisticated enough to be on this site would just use the term
“applied computational statistics” (even just in their own thoughts) instead
of “deep learning” or AI, the world would be a better place. Gradient descent
finds some fun minima (my current venture is heavily based on that idea) but
to assign more agency to Adam or RMSProp than they merit is just an exercise
in feeding the trolls.

~~~
YeGoblynQueenne
Could you please explain in what sense deep learning is "applied computational
statistics"?

What about classical planning, SAT solvers, automated theorem proving, game-
playing agents and classical search? Could you please explain how one or more
of those are "applied computational statistics"?

Further- I don't understand the comment about "agency". Could you clarify? Why
is "agency" required for a technique or an algorithm to be considered an AI
technique?

~~~
plaidfuji
I don’t know anything about the underlying algorithms for the examples you
rattled off, but deep learning trains a graph of neuron weights such that they
are statistically optimized to minimize error in computed output labels for
some domain of input data. Very much “applied computational statistics”.

~~~
YeGoblynQueenne
The examples I gave are classic AI algorithms that are very easy to look up on
wikipedia. They do not compute any statistics.

I'm not sure what you mean about "neuron weights that are statistically
optimised". Modern-era deep neural nets train their weights with
backpropagation, which is basically an application of the chain rule from
calculus. They do not use statistics for that.
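
A minimal sketch of what that looks like for a single made-up weight (nothing
statistical in the update):

    # Loss L(w) = (w*x - y)**2 for one weight; chain rule gives dL/dw = 2*(w*x - y)*x.
    def grad(w, x, y):
        return 2 * (w * x - y) * x

    w, x, y, lr = 0.0, 3.0, 6.0, 0.01
    for _ in range(100):
        w -= lr * grad(w, x, y)   # plain gradient descent
    print(w)  # approaches 2.0, the minimiser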

For example, calculating the mean of a set of values or calculating the
Pearson correlation coefficient of two variables are computations typical in
statistics.

Could you please clarify what you mean by (applied) "computational
statistics", so that I don't have to double-guess you?

Edit: Do you really not know what a SAT solver is? Not to be rude but if that
is the case, from where do you draw your confidence about the correct
terminology to use for AI?

~~~
tnecniv
He means that neural networks are applied statistics in that they solve a
statistical regression problem. It's not conceptually different from classical
methods of regression like least squares. The phrase "statistically optimized"
is certainly a funky one, but regression is certainly as much a part of
statistics as the two problems you mentioned.

~~~
YeGoblynQueenne
That doesn't sound like what the OP was saying.

------
kranner
[https://outline.com/FP487e](https://outline.com/FP487e)

------
waynecochran
Getting ready for the next AI winter.... this is a cyclic phenomenon.

~~~
raverbashing
Hopefully it doesn't take decades again for a simple but important change like
switching from tanh to ReLU activations.

~~~
dijksterhuis
my bet is on capsule networks, Hinton is usually on point with his stuff

------
AstralStorm
Next: NoAI, like NoSQL. All natural real intelligence, fully organic and
explainable. Just add caffeine. ;)

~~~
j88439h84
Brilliant.

------
DonHopkins
In 1996 I made this AIML (Artificial Intelligence Marketing Language) parody
by taking an actual VRML article from some shameless trade rag, and globally
replacing "Virtual Reality" with "Artificial Intelligence".

(from "ArtificialPostModernIntelligenceInterActivity", V2 #4 April 1996, p.
20)

[https://www.donhopkins.com/home/catalog/text/SupportForAIML....](https://www.donhopkins.com/home/catalog/text/SupportForAIML.html)

Another closely related technology is BSML: Bull Shit Markup Language. (Note:
most of the features described in the BLINK tag extension were eventually
implemented by FLASH!)

[https://www.donhopkins.com/home/catalog/text/bsml.html](https://www.donhopkins.com/home/catalog/text/bsml.html)

At one point years later, somebody actually emailed me, asking me to take it
down, because they were developing a "real AIML [TM]" product, and found my
parody of their unique original idea to be beneath their dignity, distracting,
and confusing to their potential customers using google to search for their
prestigious "AIML" product.

------
throwaway287391
> In this way, Dynamic Yield is part of a generation of companies whose core
> technology, while extremely useful, is powered by artificial intelligence
> that is roughly as good as a 24-year-old analyst at Goldman Sachs with a big
> dataset and a few lines of Adderall. For the last few years, startups have
> shamelessly re-branded rudimentary machine-learning algorithms as the dawn
> of the singularity, aided by investors and analysts who have a vested
> interest in building up the hype. Welcome to the artificial intelligence
> bullshit-industrial complex.

As an AI researcher, I think a lot of people are a little too sensitive to the
term "AI" and make a lot of big assumptions upon hearing it. It's a very
general term that doesn't really imply any particular degree of complexity or
sophistication. Labeling simple machine learning algorithms and heuristics as
"AI" isn't at all unique to this era of hype that began in the last ~5 years
-- rather that's how the term has been used in academia for many decades. If
you took a college class called "AI" or looked up some of the most popular
textbooks on AI [1], you'd find that a lot of it is dedicated to search
algorithms (breadth-first, depth-first, A*), linear classifiers, and feature
engineering. If you think "artificial intelligence" is a bad name for these
things, fine -- but don't blame the recent wave of hype, this is what the term
AI means and has pretty much always meant. So go ahead and call your startup's
linear regression "AI", and if the VCs leap to fund you under the impression
that it means you'll be behind the singularity, that's on them. AI != deep
learning. AI != AGI.
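
(For a sense of what that textbook material looks like, here is a minimal
breadth-first search over a made-up toy graph, purely as an illustration:)

    from collections import deque

    def bfs(graph, start, goal):
        """Textbook breadth-first search: returns a shortest path, or None."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for neighbour in graph.get(path[-1], []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    toy_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs(toy_graph, "A", "D"))  # ['A', 'B', 'D']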

[1] e.g., "Artificial Intelligence: A Modern Approach" by Russell and Norvig

------
Iv
"Deep Learning projects are typically written in Python. AI projects are
typically PowerPoints."

------
ackbar03
Of all the hypes going around (blockchain mostly lol) I think AI is going to
have the most substance to it though. I would say the breadth of problems
being solved is much wider and there is still a lot of research which hasn't
really found its way to actual implementation yet.

------
ecmascript
I honestly think Westworld (yes, the TV series) has the best explanation of
why general intelligence is a hard problem to solve.

They mention consciousness but I think the same applies to intelligence in
general. Humans in my mind aren't different from, say, a program you write,
except that we have a lot more inputs and possible outputs depending on a much
larger variety of external variables.

If we could build machines that have eyesight just as we do, muscles just as
we do etc., I'm sure we could reverse-engineer the human being.

[https://www.youtube.com/watch?v=S94ETUiMZwQ](https://www.youtube.com/watch?v=S94ETUiMZwQ)

~~~
toxik
I find this analysis reductionist. You're basically saying "brains aren't hard
to reproduce once you have biological sensors and actuators." Why not? They're
_extremely_ delicate, intricate organs.

Claim 2 is also a difficult one: of course you can easily claim consciousness
doesn't exist, but it is impossible to argue by logic. You need a metaphysical
philosophical framework, and then it's already left the realm of empirically
observable truths.

~~~
dspillett
I'm not sure the claim is that consciousness doesn't exist. More that it is an
emergent property of complex systems rather than something that is (or can be)
deliberately programmed.

~~~
ecmascript
Precisely. It's the complex system that gives us an illusion of consciousness.
At least, that is what I naively believe in since there is a lack of evidence
for anything else.

~~~
goatlover
So you think experience is itself an illusion? When you kick a rock and feel
pain, you're not really experiencing pain? Is the rock also an illusion?

~~~
ecmascript
Well it depends on how you view it. You feel the pain from kicking the rock
and remember it, so you won't kick the same rock again a few minutes later.

That is an experience to me. An experience is simply a memory of an
event/feeling etc. Without any memories, you won't remember any events or
feelings and will gladly kick the rock again since you won't have any memory
of it hurting you.

Or how else would you define an experience? A memory isn't an illusion; there
is definitely something physical in your brain that says that that specific
event has happened. But you can also remember things that haven't happened,
which is probably why a lot of people believe in ghosts, religion etc.

I don't know why, but it probably serves a biological purpose and people are
probably more likely to survive if they are afraid of things and are careful.

------
mattigames
Everything is bullshit until it's not. Humans were talking about
transportation without animal power for decades before it became a reality,
and a lot of people were highly skeptical of such a thing being even possible
until it actually happened in 1804 (first steam train). The same thing happens
with Artificial Intelligence, and we are in such uncharted territory that
someone could say AGI is just 10 years away and someone else say 100 years
away and both get the same amount of credibility, meaning near none, because
we don't even know what it is that we don't know in order to achieve AGI.

~~~
dboreham
Your example isn't quite as it seems: "trains" (cars running on rails) were
used in mining for hundreds of years prior. The steam engine was first
documented in 1698. What happened in 1804 was someone figured out the
manufacturing processes to make a steam engine light enough and powerful
enough to usefully pull a train of cars over some reasonable distance.

------
m0zg
There is a lot of froth, as in any hot field. However, unlike before, there
are many cases where AI actually works now. Some perceptual tasks work better
than a human, in fact. We can quibble about the naming and whatnot, but that's
not something you can say about the last AI winter. It's sort of like the
dotcom bust of '00: sure, things imploded back then, but there's no sign
whatsoever that e-commerce will implode at any time in the future, because
unlike before it actually works this time.

~~~
ethbro
_> Some perceptual tasks work better than a human, in fact. [...] that's not
something you can say about the last AI winter_

Eh. I'd say that's somewhat apples to oranges.

A) There were some useful and successful expert systems.

B) Things seemed to be going swimmingly, until they hit a fundamental wall.

C) We're working with a few orders of magnitude greater compute than they had
access to.

~~~
m0zg
Sure, but we did figure out how to make things more robust and generalizable,
at least for perceptual tasks so far. Knowledge representation and
probabilistic reasoning are still non-existent, though. Moreover, nobody is
even working on any of that, for fear of being compared to Doug Lenat.

~~~
Quetelet
Representation learning and probabilistic methods are huge sub-areas of modern
machine learning, just take a look at the proceedings of ICLR2019.

~~~
m0zg
Representation learning != knowledge representation, probabilistic methods !=
probabilistic reasoning. I'm talking foundations of AGI, which as far as I'm
aware, nobody is seriously working on at the moment.

------
yonkshi
AGI is a gradient, not an arbitrary threshold.

We are not capable of recreating human level intelligence yet, but our modern
algorithms have become orders of magnitude better at generalization and sample
efficiency. And this trend is not showing any signs of slowing down.

Take PPO for example (it powers the OpenAI Five Dota agent): the same
algorithm can be used for robotic arms as for video games. Two completely
different domains of tasks are now generalizable under one algorithm. That to
me is a solid step towards more general AI.

~~~
taurath
It’s a gradient but according to the marketers it’s basically going to
overtake humanity any week now.

~~~
misterman0
"AI any week now"

What marketers proclaim that? Are they saying that or are they saying there is
_utility_ in AI, now? Because me thinks, there is real utility, now, but it's
going to take years until it overtakes us. Years!

~~~
tim333
I'm not sure anyone said any week now but Musk probably came closest
[https://www.entrepreneur.com/article/323278](https://www.entrepreneur.com/article/323278)

------
arbuge
Consider this article:

[http://fortune.com/longform/single-family-home-ai-
algorithms...](http://fortune.com/longform/single-family-home-ai-algorithms/)

If you read it, you'll find that their methods to value homes and renovations
are based on algorithms written to value mortgages in the 80s, 90s, and early
00s.

I'm going to bet that there's not much of what the average HNer would think
constitutes AI going on in there.

------
galaxyLogic
What's the most difficult thing AI should be able to solve but cannot as of
yet?

I would say it is writing a program which writes an AI program. Why? Because
it is so difficult for us to define what exactly an AI program should be able
to do.

This shows that we have an issue with not being able to ask the right
question. If we could answer exactly what the AI should be able to do then it
would be much easier to create such a program and also create a program that
writes such a program.

We could say that an AI program should pass the Turing Test, and many have
written programs that more or less pass it. But now, write a program that
writes several different programs that all pass the Turing Test, each one
better than the previous one.

I don't really have an idea how I would start writing such a program that
writes a program that passes the Turing Test better than previous AI programs.
That makes me guess we are still far off from General AI. But I of course may
be wrong, just because I don't know how to do something does not mean others
would not.

~~~
chrshawkes
We know what we want it to do: we want it to have some basic ability to think
for itself. That is something we just simply can't do. Back propagation is far
from a spanking for acting out of line. AI has no ability to understand it's
acting like a fool, or how to deal with uncertainty with emotions, which cause
us to act without regard to consequences and in many cases reality. It lacks
understanding of what future consequences it's trying to prevent, such as our
daily decisions to get up and go to work each morning. The AI has no
understanding of its future and the consequences of not going to work until
it's fired 40,000 times for not showing up or its children are taken from
him/her/it.

I'm glad people are finally waking up to the fact that AI is not ML and AI is
all hype at the moment. Google used algorithms quite effectively to adapt and
learn, but they have no greater understanding of what we want, just what we
and others have wanted in the past.

------
moneytide1
These types of AI promotion seem to be a sort of cop-out that suggests we all
look forward to a hands-off future where computers will be able to do
everything for us.

Then human minds will be allocated away from thoughtful interaction with their
environment and into an all-hands-on-deck scenario where neural net operations
are given top priority so they can churn out some answers.

~~~
tachyonbeam
My main short-term fear is that increased automation will lead to an
increasingly isolated society. I can already get almost everything delivered
through amazon, order takeout through an app without speaking to anyone. Watch
movies on Netflix without needing to go to a video store. What's the world
going to be like when drone deliveries become a thing, and I don't even have
to speak to a delivery driver? How will it affect kids if they do all their
schooling online?

I think that, even before AGI happens, AI assistants will become placeholder
friends for a lot of people. You'll be able to have a conversation with Siri
or Alexa. Eventually, people might have pseudo relationships with robot
boyfriend/girlfriends. Imagine having a friend who is anything you want them
to be, does everything you want, and most importantly, never challenges you or
tells you anything you don't want to hear. People will get used to that, and
it will become difficult for them to have real human relationships.

In other words, technology is enabling everyone to function without directly
interacting with others. People might choose not to interact with other humans
out of convenience, insecurity, fear. Japan already has a population of
"herbivores", people who choose not to get into relationships, and the rest of
the world could become like that too. I hope we find a way to reverse this
trend.

Short documentary on hikikomori in Japan:
[https://www.youtube.com/watch?v=wE1UIK85E3E](https://www.youtube.com/watch?v=wE1UIK85E3E)

~~~
jcranmer
I think your fear is misguided. People have been complaining about how
technology is causing humanity to become more socially isolated for literally
thousands of years, and the actual evidence has been that those complaints are
unfounded. If anything, we've probably become more socially interconnected,
but that's more due to the increased population density of our environs than
technology changes.

What a lot of people miss, I think, is that human beings are fundamentally
social animals, and we crave social interaction. And I say this as a strong
introvert--as someone who has to be alone to recharge myself emotionally.
Things like distance learning or working from home are not well-received by
most people, especially not on a long-term basis. Sure, some people will find
it comfortable, but those people are a tiny minority, and I should point out
that it's not a new phenomenon: Emily Dickinson, for the last 10 years or so
of her life, refused to meet visitors face-to-face and rarely left her house,
which is more severe than most hikikomori.

------
bernardv
I totally agree with the gist of this article. This hype is being propagated
by a lot of folks who are willingly clueless, as for example, in the data
science crowd. This band-wagon is crowded and isn’t stopping any time soon.

It irks me to no end to come across tutorial-style articles proclaiming to
teach an AI algorithm, also known as ‘linear regression’.

What bugs me the most, though, are the countless ‘influencers’ on LinkedIn
who spew rubbish about machine learning, AI and all the wonderful things
that are just around the corner.

Lastly, it doesn’t help when countless articles/books are written on the
subject of AI dangers, AI ethics and ‘are robots coming for us?’. These add
fuel to the fire of hype.

In the end, this behavior will only guarantee the eventual bursting of the
bubble, when promises are not delivered.

------
nottorp
Is Medium for pay now? They told me to sign up to get "one more free story".

~~~
Veedrac
Medium lets writers opt-in to a paywall. It is not the default, but does come
with some perks for authors.

------
mikorym
So can I call it "second year linear algebra" now instead of "AI"?

------
plaidfuji
Sure, “AI” as it is used today implies “software that codifies decision-making
using data”. No, it’s not the T3000. But as the author acknowledges:

> Dynamic Yield can pay for itself many times over by helping McDonald’s
> better understand its customers

Ok, so it’s not hype - it is delivering real value. “AI” is just a marketing
term to help C-suite suits and Silicon Valley sales reps get on the same page
about what’s being sold with as few words as possible. What’s being sold is
software that helps make optimal decisions using data.

AI isn’t a rigorously defined academic term, so people will use it how they
want. It’s only hype when real value isn’t delivered.

~~~
epr
> What’s being sold is software that helps make optimal decision using data

Doesn't this apply to virtually all software?

~~~
plaidfuji
In an extremely reductionist sense, maybe. Do I use Microsoft Word to automate
decision making? No. Does Facebook help me make important life choices? Heh.

How about this: Amazon.com is not AI, but their recommendation engine is.

------
cirgue
There is a massive positive, though, for the "geeks building the future": AI
is where everyone else is looking. If you know where you _should_ be looking,
you have a decisive advantage over the rest of the market.

~~~
bombingwinger
Doesn’t your last sentence go for literally everything?

~~~
cirgue
Of course it does, but we can say with confidence that attention and capital
are misallocated toward a specific, identifiable set of activities. _That's_
rare.

------
soobrosa
Been deleted, a cached version is at
[https://webcache.googleusercontent.com/search?q=cache:RV8OOz...](https://webcache.googleusercontent.com/search?q=cache:RV8OOzgmjJsJ:https://gen.medium.com/the-
bs-industrial-complex-of-phony-
a-i-44bf1c0c60f8+&cd=1&hl=en&ct=clnk&gl=de&client=safari)

------
chewz
> The Turk, also known as the Mechanical Turk or Automaton Chess Player
> (German: Schachtürke, "chess Turk"; Hungarian: A Török), was a fake chess-
> playing machine constructed in the late 18th century. From 1770 until its
> destruction by fire in 1854 it was exhibited by various owners as an
> automaton, though it was eventually revealed to be an elaborate hoax.[1]

[https://en.wikipedia.org/wiki/The_Turk](https://en.wikipedia.org/wiki/The_Turk)

------
colechristensen
Progress of civilization could be summarized as the slow march of BS
elimination in parallel with the creation of creative new forms of BS (people
don't actually learn anything, they just form the same crazy opinions about
something new).

Strike out "of Phony AI". The BS-Industrial Complex is huge, and the rise of
the Internet has made it worse by empowering the less-informed to share ideas.
That is somewhat the price you pay for progress.

The hopeful idealistic information superhighway myth of the 90s turned into
something else.

~~~
ethbro
I look at BS as an inevitable symptom of the Singularity.

As we approach the capacity of human reason, fewer people are able to keep up
with the world, and are therefore more susceptible to it.

~~~
colechristensen
I don't know, look back two thousand years and you see plenty of it. More like
it's a symptom of humanity. Animals are stupid machines, humans aren't nearly
as far away from them as we'd think ourselves.

------
nl
AGI will arrive as soon as someone can arrive at a reasonable definition of
intelligence.

Try it. Everything I've seen is already achievable by computers.

~~~
AstralStorm
Solving novel problems. Show me.

By novel I mean multiple categories. A system that can serve as archive,
mathematician, calculator, can move a robot, drive a car and additionally make
coffee from scratch. Oh and talks (speaks and understands and acts upon
orders) in 3 human languages at decent levels plus can roughly explain what
it's doing. Oh and can learn more unrelated skills.

Hey, people do it all the time.

~~~
nl
 _A system that can serve as archive, mathematician, calculator, can move a
robot, drive a car and additionally make coffee from scratch. Oh and talks
(speaks and understands and acts upon orders) in 3 human languages at decent
levels plus can roughly explain what it's doing. Oh and can learn more
unrelated skills._

I'm a bit unclear if this is supposed to be a definition of intelligence.

Stephen Hawking would fail this test, but no one would argue he isn't
intelligent.

------
bjoernbu
Imho it has gone further. In a way, all the things described as not actually
AI now "are" AI, because the term AI has been used in that way so many times.

I don't think we'll ever use a better (more accurate) term for the ML- and
data-driven value current systems create. Instead "true" AI will get a new
fancy name to build the next hype around in several years.

------
dr_dshiv
We should be focused on designing "smart systems" that optimize measurable
outcomes

Who cares how complex the algorithm is! What matters is that it _works
better_. Is there a measurable outcome that matters? Can the system optimize
that outcome over time, through a coordination of human processes and
technology design?

That is what organizations need. Not hyperparameters.

------
nsajko
It seems the author has deleted the post. Maybe Dynamic Yield asked him to
take it down? Anyway, currently it is accessible through
[https://outline.com/FP487e](https://outline.com/FP487e)

------
a_imho
It is pretty much spot on, but I'm not convinced anyone should really care.
When was software not hype driven?

------
JustSomeNobody
This is no different than anything else. You hype what you're working on so
people get interested and throw money at you. AR/VR glasses, AI, self driving
cars, it's all the same. You generate interest, make lots of money and who
cares if it ever gets to market.

------
orpep90nxkfo
This reminds me of the article the other day about the internet being an SEO
wasteland

Basically our business networks run the same way (not a shock at all):
sycophants spam aristocratic investors with half assed bullshit solutions to
juice the odds of hooking one

------
East-Link
Rudimentary machine learning algorithms are indeed AI, by common usage.

Try typing into Google Images something like "ai machine learning deep
learning venn diagram" and you'll see that by common usage, machine learning
is a strict subset of AI.

------
Wiretrip
For a real emperor's new clothes moment, look at SpinVox!

[https://en.wikipedia.org/wiki/SpinVox](https://en.wikipedia.org/wiki/SpinVox)

------
diehunde
The problem is when you work at a company that tells you, "we are different,
we are not BS like the other A.I. companies"

------
tabtab
Just AI? IT is _filled_ with BS and fads. Dilbert is a documentary, not just a
comic strip.

------
holografix
Repeat with me: Machine Learning != AI

------
dijksterhuis
I _despise_ the term Artificial Intelligence. This is all _PROBABILISTIC
MODELLING_. Nothing to do with AI/AGI/whatever.

The computers aren’t thinking or learning. It’s just modelling fancy
probability statistics.

E.g. classical neural networks are basically a load of linear regression
equations with an activation function stuck on the end of each of them. No
magic. Just lots of linear regression.
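
To illustrate the "linear regression with an activation stuck on the end"
point, a purely illustrative sketch with made-up weights:

    import numpy as np

    def layer(x, W, b):
        """One 'layer': a batch of weighted sums (linear regressions) plus an activation."""
        return np.maximum(0, W @ x + b)   # ReLU stuck on the end

    x = np.array([1.0, 2.0])
    W1, b1 = np.array([[0.5, -0.2], [0.1, 0.3]]), np.array([0.0, 0.1])
    W2, b2 = np.array([[1.0, -1.0]]), np.array([0.0])
    print(layer(layer(x, W1, b1), W2, b2))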

This stuff only works when:

1) you are trying to solve a specific problem that is suited to probabilistic
models

2) you have a data set that is sufficiently large, varied and specific

3) the model is developed, trained, tested, implemented and updated in a
rigorous and sensible manner

~~~
Quetelet
Actually most modern neural networks are not probabilistic, they are
deterministic function approximators.

Also your point 3) isn’t quite correct either, often a “standard” architecture
and training procedure (e.g. ResNet50 with Adam) will work on a new task with
sufficient training data and minimal modification of the model.

~~~
dijksterhuis
The only nnets I mentioned were “classical” as a purposefully over simplified
example. Yeah, they can model any function, but historically they were used
for probability density functions (if I remember correctly).

Most of what the article talked about can be done with much simpler models,
which is what I get peeved about.

Also, yes, you can transfer learn with resnet. But if I throw my bank
statements at it, it’ll do bugger all.

Similarly, if I throw new images at resnet in a silly way, it won’t transfer
properly.

~~~
Quetelet
You might be confusing the historical use of the sigmoid activation function
with probabilistic modeling, neural networks in the 80s were used similarly to
how they are today, albeit at a much smaller scale due to hardware limitations
at the time.

The development of neural networks is a major contribution of the machine
learning community, so even if you’d like to split hairs about whether the
“computer is learning” (“learning” has a precise technical definition by the
way), NNs are not “just statistics.”

~~~
dijksterhuis
Ok, it seems like there are some crossed wires or missing context here. Also,
widely off topic.

I never said anything about the term machine learning. Check my bio, see what
I’m working on. Fully aware of neural network contributions.

I’m all for machine learning. Just not “AI”. “AI” is hype bullshit.

“Learning” when used by the people who spout this BS is not the technical
definition version, and is what I was referring to.

Could probably have made that clearer, but I’m 1.5 days without sleep.

What does feeding test data into a network yield? Inference results. Inference
seems vaguely familiar from probabilistic modelling?

Bayes rule applies to neural nets too. Two different models may give vastly
different results. Whilst they can be very good approximators, they can also
be very unreliable if care is not taken during training.

G(x) ≈ f(w·f(w·x + b) + b) is literally a fancy weighted sum: a linear
regression. It is some easy stats combined with a few other things that aren't
explicitly necessary, e.g. the activation function can be the identity to
cancel out f().

EDIT Both the parameters of a network and the training data are variables in
the application of Bayes rule. Which inherently deals with likelihoods
(probability). /EDIT

So fundamentally, they are "just some stats". They may have a few more bells
and whistles to make them complex (and better) systems, but they still output
a classification/regression based on inference.

You can, of course, approximate many functions with them. I’ve built a network
with only weights of +1/-1, for example.

But those examples have extremely specific use cases that are not applicable
to anything the article discusses.

------
luc4sdreyer
Seems like the post has been taken down.

------
wolfi1
if there is no natural intelligence around, you need an artificial one

------
module0000
TLDR; _machine learning == "AI"_, just as much as _colocated servers ==
"cloud"_

------
macawfish
Ben Goertzel.

------
antonvs
Paywall.

~~~
3xblah
Not when you have Javascript turned off.

------
stareatgoats
> The BS-Industrial Complex of

Brilliant! This is really a thing, and the computer industry is (and has
always been?) rife with it.

~~~
harry8
IBM Global Services. Oracle. Accenture. Any company with 100+ employees who
does consulting involving the design, implementation and maintenance of
computer systems for _any_ government bureaucracy.

Is there anyone around here who thinks this industry sector is something other
than industrial-grade BS, and that if every single one of those companies
disappeared overnight we would not be in a better place as a civilization
very, very quickly as we were forced to pick up the pieces?

Industrial quantities of BS are the norm, right? Most of us do startups to do
_something_ more than to schmooze, threaten and ultimately bilk customers
paying with other people's money. We kind of want to do tech.

~~~
adev_
> Most of us do startups to do something more than to schmooze, threaten and
> ultimately bilk customers paying with other people's money.

Do you know Theranos? That's the very definition of bullshit and it was a
"startup".
[https://en.wikipedia.org/wiki/Theranos](https://en.wikipedia.org/wiki/Theranos)

Bullshit comes from companies with 5000+ employees down to companies with 5
dudes. Scale does not change anything.

_Business culture_, _profit as the only value_ and the culture of _fake it
until you make it_ are the source of the problem.

And against that there is no magic solution, except to trust _a lot less_ the
ones that speak and _a lot more_ the ones that do. In the good old Nerd world,
we call that _Show me the code_.

------
gok
"I was able to bullshit about A.I., so the whole field is bullshit."

