
Our field isn't quite “artificial intelligence” – it's “cognitive automation” - vo2maxer
https://twitter.com/fchollet/status/1214392496375025664
======
hprotagonist
As usual, the monks know what the laity doesn't, and aren't particularly
afraid to talk about it. Also as usual, there's still a yawning gap between
what domain experts are up to and what non-domain experts think they're up to.

That this is true in AI is not surprising; humility comes from knowing that my
domain expertise in some fields (and thus a clearer picture of 'what's really
going on') is guaranteed to be crippled in other fields. Knowing that being
in some knowledge in-groups requires me to also be in some knowledge out-
groups is the beginning of a sane approach to the world.

~~~
Invictus0
The author is just correcting a misnomer. It is not really accurate to say
that machine learning is intelligent at all, so why label it as such? It's
confusing for everyone and leads to great misunderstandings.

~~~
PeterisP
Machine learning is a particular narrow result of studying the wider field of
artificial intelligence. Just as expert systems, or RDF knowledge
representation, or first-order logic reasoners, or planning systems - none of
them are 'intelligent', but all of them are research results coming from (and
being studied in) the discipline of studying how intelligence works and how
something like it can be approached artificially.

There's lots in the field of AI that is _not_ 'cognitive automation' - many
currently popular things and use cases are, but that's not correcting a
misnomer, that's a separate term for a separate (and narrower) thing - even
if that narrower thing constitutes the most relevant and most useful part of
current AI research.

A classic definition of intelligence (Legg & Hutter) is "Intelligence measures
an agent's ability to achieve goals in a wide range of environments". That's a
worthwhile goal to study even if (obviously) our artificial systems are not
yet even close to human level by that criterion; and while it is roughly in
the same direction as 'cognitive automation', it's less limited and not
entirely the same.
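
For reference, Legg & Hutter make that definition formal as a weighted sum of
expected value over all computable environments. A sketch of their "universal
intelligence" measure, with notation as in their 2007 paper:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu (so simpler environments get more weight), and
V^{\pi}_{\mu} is the expected total reward of agent \pi in \mu.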

For example, 'cognitive automation' pretty much assumes a fixed task to
execute/automate, and excludes all the nuances of agentive behavior and
motivation, but these are important subtopics in the field of AI.

But I am willing to concede that very many people _are_ explicitly working
only on the subfield of 'cognitive automation' and that it would be clearer if
these people (but not all AI researchers) explicitly said so.

~~~
joe_the_user
> _Machine learning is a particular narrow result of studying the wider field
> of artificial intelligence._

I beg to differ, at least as far as terms go now. Neural networks lived in the
"field" of machine learning along with kernel machines and miscellaneous
prediction systems circa the early 2000s. Neural networks today are known as
AI because ... why? Basically, the histories I've read and remember say that
the only difference is that neural networks are now successful enough that
they don't have to hide behind a more "narrow" term - or, alternately, the
hype train now prefers a more ambitious term. I mean, the Machine Learning
reddit is one go-to place for actual researchers to discuss neural nets.
Everyone now talks about these as AI because the terms have essentially
merged.

> _A classic definition of intelligence (Legg & Hutter) is "Intelligence
> measures an agent’s ability to achieve goals in a wide range of
> environments"._

Machine learning mostly became AI through neural nets looking really good -
but none of that involved them becoming more goal-oriented; if anything, less
so. It was far more that high-dimensional curve fitting can actually get you
a whole lot, and when you do well, you can call it AI.

~~~
PeterisP
What do you mean by "today are known as AI" and "became AI"?

Neural networks have always been part of AI, machine learning has always been
a subfield of AI, and all of these have been terms within the field of AI
since the day they were invented; there never was a single day in history
when those things were not part of the AI field.

Neural networks were part of the AI field even back when neural nets were
_not_ looking really good - e.g. Minsky and Papert's 1969 book "Perceptrons",
which was a description of the neural networks of the time and a major
critique of their limitations, was an AI publication by AI researchers on AI
topics.

Your implication that an algorithm needs to do well so that "you can call it
AI" is ridiculous and false. First, _no_ algorithm should be called AI; AI is
a term that refers to a scientific field of study, not to particular
instances of software or particular classes of algorithms. Second, the field
of AI describes (and has invented) lots and lots of trivial algorithms that
approximate some particular aspect of intelligent-like behavior.

Lots of things that have now branched into separate fields were developed
during AI research in e.g. the 1950s - all decision-making studies (including
things that are now ubiquitous, such as the minimax algorithm in game
theory), planning and scheduling algorithms, etc. are subfields of AI. The
study of knowledge representation is a subfield of AI; probabilistic
reasoning such as Kalman filters is part of AI; automated logic reasoning
algorithms are one more narrow subfield of AI; and so on.

~~~
ozim
I think what the parent poster means is that for people who don't know better,
"neural networks === AI". For people who know a bit more, there is a bunch of
other stuff besides neural networks, and neural networks are not some
god-sent solution for AI.

------
godelski
I think it is also important to remember that intelligence isn't clearly
defined. It seems a lot of people interpret it in different ways and the
definition is closer to pornography (I know it when I see it).

I often see two camps: one defines intelligence to be more human-like,
limiting it really to cetaceans and hominids, maybe including ravens. The
other group gives too vague a definition.

Personally, I do not see a problem with having lots of bins. I don't think
many disagree that intelligence is a continuum. So why restrict it to very
high-level bins? Because that's the vernacular usage? I for one vote for the
many-bins-on-a-continuum approach. On that view you could say that ML has
some extremely low-level form of intelligence, though I would generally place
it lower than that of an ant. In that respect, a multi-agent system with
intelligence surpassing that of ants would, I believe, be extremely
impressive.

~~~
Barrin92
>So why restrict it to very high level bins?

I don't think it's a quantitative issue (the level of the bin); it's a
qualitative issue. When people say that ML lacks intelligence, what they're
saying is that it lacks robustness, common sense, agent-like behaviour, the
ability to reason, and so on.

Intelligence (in humans or animals) does not appear to be just data driven
pattern matching. I think we can say this with some confidence given that even
the fanciest ML algorithm still hopelessly sucks at performing tasks that are
trivial even for barely intelligent animals.

~~~
m00x
> Intelligence (in humans or animals) does not appear to be just data driven
> pattern matching. I think we can say this with some confidence given that
> even the fanciest ML algorithm still hopelessly sucks at performing tasks
> that are trivial even for barely intelligent animals.

Do you mind expanding on this? A lot of the tasks primitive animals perform
can be replicated by Deep RL algorithms given the same environment.

Humans adapt to sensor information, memory and a reward/penalty system just
like RL. We're just much more advanced and have sophisticated systems to
sense and act.
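
To make that concrete: the sense/act/reward loop maps directly onto the
standard RL setup. A minimal tabular Q-learning sketch - the `env` object,
its reset/step/actions interface, and the hyperparameters are illustrative
assumptions, not any particular library:

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Learn action values from sensing, acting, and a reward signal."""
        Q = defaultdict(float)  # (state, action) -> estimated return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Sense and act: mostly greedy, sometimes explore at random.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # Adapt to the reward/penalty signal.
                best_next = 0.0 if done else max(
                    Q[(next_state, a)] for a in env.actions)
                Q[(state, action)] += alpha * (
                    reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q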

~~~
Barrin92
>A lot of the tasks primitive animals perform can be replicated by Deep RL
_algorithms given the same environment_.

Highlighted the last part because this is the key difference. Humans and
animals can navigate unknown and unstructured and open-ended environments. You
can identify a dangerous predator without having to see ten thousand images of
mauled bodies.

Humans and animals can generalise in a way that is robust and independent of
'data' because they understand what they see. RL algorithms have no
understanding of the world. If you let an RL system play Breakout but you
resize the paddle by five pixels and tilt it by 2 degrees, it cannot play the
game.

Daniel Dennett helped popularise the notion of _Umwelt_, loosely translated
as an organism's self-centered world as it subjectively perceives it, filled
with meaning and a notion of how the agent relates to it. It's distinct from
an objective 'environment' that everyone shares.

Machines lack this notion; they have no real concept of anything, even the
fanciest algorithms. Which is also why conversational agents have only really
made advances on one front: understanding sound and turning text into nice
soundwaves. They have made virtually no progress in understanding irony, or
ambiguity, or anything that requires an understanding of the human _Umwelt_.
We don't even have any idea how the mind constructs this sort of interior
representation of the world at all, and my prediction is we're not going to
if we keep talking about layers of neurons instead of talking about what the
possible structure of a human or animal mind is.

~~~
wilg
Well, what if you trained on the unbelievable amount of data that enters the
human brain?

Like, say inputting years of high definition video, audio, proprioception,
introspection, debug and error logs, data from a bunch of other sensors, etc.
Then put that in a really tight loop with high precision motion and audio
output devices, and keep learning. Also do it on a really fast computer with
lots of memory. Also make the code itself an output.

If that's not enough, you could always try self-replication to create more
successful versions over million-year timescales.

~~~
Barrin92
That's trivially true in the sense that this is, as far as we can tell, how
humans and animals came to be, but I don't think there's any guarantee that
it can be replicated in a silicon-based software architecture, which is very
different from analog and chemical biological organisms. Already today,
energy and computational costs are high, with model computation costs running
into five or six figures even in just one domain.

But more importantly, I think the problem with this approach is that it's
essentially a hail mary of sorts, with potentially zero scientific insight
into how the brain and the mind work. It's a little bit like behaviourism
before the cognitive revolution, with AI models being the equivalent of a
Skinner box.

~~~
wilg
I don't know the history of it and am not an expert in the field, but it
seems to me that it's valid to call the things ML can already do
"intelligence" in a generalized sense, and that there is nothing
categorically different between that and human intelligence; it's just a
matter of how complicated humans are.

That seems like a problem you can sort of throw hardware at for a while until
it gets good enough to help you figure out how to make something smarter.

~~~
jsinai
I think there is little evidence to suggest that neural networks and human
minds behave in the same way modulo complexity.

In science there is a tendency to come up with models of the world -
simplifications which we can observe and quantify - and then fall into the
trap of thinking that these models explain the world.

While neural networks are inspired by the biological connections between
neurons and synapses, it does not follow that neural networks are therefore
intelligent.

~~~
wilg
I'm not suggesting NNs should be considered intelligent because of their
inspiration from biology. Just that they should be considered intelligent
because of what they are currently capable of (though that is way less
intelligent than humans or animals of course).

But it seems like there is a plausible path to increasing their "intelligence"
by dumping more data and hardware at them.

Like, GPT-2 shows quite a surprising amount of structural complexity even
though it's very dumb in other ways – it feels like if you could pump a
million times more data into it, you'd get something that seemed really quite
intelligent.

------
nickpinkston
I see a lot of AI engineers who seem concerned with this particular issue,
which I never really understand.

Is it because of a perception that most regular people are likely
overestimating the speed at which AI is going to overtake human intelligence?
Or is it more about corporate management wanting miracles that aren't
possible?

Why does this matter, and why does it always seem to be talked about?

~~~
UncleOxidant
Because there's a history of overhyping ML/AI (whatever you want to call it)
leading to AI winters. Winter in this case being kind of like a recession in
economic terms - most research funding dries up, etc. We essentially had one
of those winters from the late 80s until about a dozen years ago. A lot of
laymen now think of AI as being "magic" that can do anything and that's not a
good thing when the reality turns out to be different.

At this point I don't think we'll see an AI winter as deep as some of the
previous ones. But we could certainly see an AI Fall.

~~~
YeGoblynQueenne
>> Because there's a history of overhyping ML/AI (whatever you want to call
it) leading to AI winters.

Note that past AI winters did not occur because of overhyping of _machine
learning_. They occurred because of overhyping of _symbolic AI_ that had
nothing to do with machine learning. For example, the last AI winter at the
end of the '80s happened because of the overhyping of expert systems - which
of course are not machine learning systems.

Machine learning is not all, not even most, of AI, historically. It's the
dominant trend right now, but it was not the dominant trend in the past. The
dominant trend until the 1980s was symbolic reasoning.

~~~
radarsat1
But symbolic reasoning mostly worked, did it not? Its Achilles heel was that,
for it to be useful, a lot of domain knowledge has to be distilled into a
format that an expert system can process. That means writing tens of
thousands of "if this then that" rules.

Machine learning is different in that it is more amenable to distilling those
rules from the data automatically. It succeeds where symbolic reasoning
failed because it can work from the raw data. A good portion of machine
learning research is in new ways to preprocess and format data into a
structure that can be consumed by linear algebra, which turns out to be a lot
easier and more practical than figuring out a huge database of sensible
first-order predicate logic statements.
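
To make the contrast concrete, a toy sketch (the spam example, the rules and
the tiny dataset are all made up for illustration): the expert-system
approach hand-writes the rules, while the ML approach recovers weights from
data with plain linear algebra.

    import numpy as np

    # Expert-system style: domain knowledge hand-distilled into rules.
    def is_spam_by_rules(message):
        if "free money" in message:
            return True
        if "act now" in message and "winner" in message:
            return True
        return False  # ...plus tens of thousands more rules in practice.

    # ML style: the "rules" are distilled from data automatically.
    # Rows are messages as bag-of-words features, y holds the labels;
    # a least-squares fit learns one weight per word instead of one rule.
    X = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [0, 0, 0],
                  [1, 1, 0]], dtype=float)
    y = np.array([1, 1, 0, 1], dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    is_spam_by_data = lambda x: bool(x @ w > 0.5)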

If ML techniques can be used to feed symbolic systems, the latter would show
promise again - which is already happening in recent trends in causal
inference and graph networks. The marriage of these two fields is inevitable,
and has already started.

~~~
blackrock
> That means, writing 10s upon thousands of rows of "if then then that".

LOL.. It's full of ifs.. [1]

[1] [https://qph.fs.quoracdn.net/main-qimg-89cfa17ca63ddac683e04f0852336e47.webp](https://qph.fs.quoracdn.net/main-qimg-89cfa17ca63ddac683e04f0852336e47.webp)

------
sebastianconcpt
Brilliant observation.

And it's even harder than this.

The problem is not that we have a problem; the problem is that we have
problems. So the solution is not finding a solution to a problem - the
solution is finding a metasolution that is valid across time and tribes. Bam!
That's the challenge of being an intelligent being in this universe. No way
we can automate that. Merely mimicking a portion of it and calling it
intelligence doesn't make it really intelligent.

~~~
carapace
FWIW, check out the Gödel machine:

[https://en.wikipedia.org/wiki/G%C3%B6del_machine](https://en.wikipedia.org/wiki/G%C3%B6del_machine)

[http://people.idsia.ch/~juergen/goedelmachine.html](http://people.idsia.ch/~juergen/goedelmachine.html)

> No way we can automate that.

It's a very very interesting question. Personally I believe that "automaton"
is _almost_ the opposite of "being". But that's just me, not science or other
authority. Certainly, somewhere between virus and human _something_ comes into
being (no pun intended.) I don't know of any non-metaphysical argument that we
couldn't find some _other_ way to create non-biological general AI.

I think we could genetically engineer human DNA to create wetware G"A"I but I
put the "artificial" in quotes to indicate that I'm not saying whether that
would count as AI or not. I know of a few efforts to create "Daleks" out of
human brain organoids, but I don't think anyone has gone beyond the
speculative/hype stage with it so far.

~~~
jacinabox
I was waiting for somebody to Gödelize that.

------
GuB-42
From the Merriam-Webster dictionary.

Definition of cognitive:

1 : of, relating to, being, or involving conscious intellectual activity
(such as thinking, reasoning, or remembering)

2 : based on or capable of being reduced to empirical factual knowledge

Using "cognitive" instead of "intelligence" puts the emphasis on data
processing rather that adaptability, which may be a bit more in line with how
things are done today. However, it doesn't addresses the core of the debate.
The usual "[technology] isn't [AI/cognitive automation] because it can't do
[thing humans do], it is just [thing computers do]". Both terms relate to
consciousness, and are generally considered fundamentally human qualities.

I think there is simply no way out of that debate. Maybe use a term that it
sounds completely unrelated to human activity, maybe something like "Big Data
Statistical Matching".

------
jokoon
Intelligence doesn't have a lot of scientific grounding either. It's pretty
hard to define what intelligence is, or at least to give a scientific
definition that is precise enough. The Turing Test is only a measure; it
doesn't help us reach a definition.

Practical research will always hit a ceiling if scientists cannot try to
define what they're looking for.

Even "machine learning" is not a good name. There are other attempts, like
"sophisticated statistics" or "statistical prediction".

Kudos for this tweet.

------
cmarschner
There has been a lot of debate about the state of AI on Twitter in recent
weeks (see #AIDebate). A lot of it was about naming. It occurred to me that
very opinionated people had no idea what people in the community are actually
doing, so the whole exercise seemed like a learning experience for them,
which seems to be a good thing.

The goals of AI (which Google Trends classifies as a “field of study” - I
think that captures it quite well) haven’t changed in decades: to reverse
engineer the miracle of human cognition. A certain number of people (like the
teams of Yoshua Bengio or Demis Hassabis) have the clear mission to work on
just that. The progress in this area is much slower than the perception of
the last 5-10 years would suggest. It was just that work from the 90s and
2000s was put to the test and quickly outperformed other approaches -
symbolic, or what we now call “classic machine learning” (e.g. in speech
recognition, image classification/detection, machine translation,
information retrieval). All these areas had important and valuable
applications in industry and have sucked up a lot of money.

But this is only a tiny part of what human cognition entails. Areas around
memory, reasoning, consciousness etc. are completely unsolved. Where are we on
a scale of 0 to 1000 of solving the problem? Perhaps somewhere between 20 and
50; nobody knows. AI is a north star, and it is a weird development that
people have started to call it “AI” again (it felt totally weird about 3-4
years ago when this happened).

So, I think the field is still rightly called “AI”. Call the current state of
it “system 1”, “differentiable programming”, “deep learning” or whatever.

------
proc0
Well said. The definition of intelligence is bastardized for virtually all
current AI applications. They are glorified statistical heuristics /
stochastic gradient descent, as has been mentioned before. The key to
approaching actual intelligence as we know it will be a system that can
dynamically model its environment and the actors in it, since even insects
are able to do this to some extent.

------
blackrock
When did Machine Learning become Artificial General Intelligence?

When did SVMs become AI?

I'm going to take the dissenting viewpoint here. I think AI as it is being
sold today (e.g. the Deep Learning).. is bullshit. It's the new snake oil.

Everyone is pouring in all this money because of FOMO (Fear of Missing Out).

Yes, it's producing some fancy new toys. Beating the best human player at Go.
Or doing some facial recognition. Or winning at StarCraft.

But I don't even see the point of mastering chess past a certain level, and
I'm certainly not going to bother with mastering an RTS game like StarCraft.

The scary thing is if some military planner thinks the StarCraft AI is
sufficiently smart to put on military weapons systems and use to hunt down
other humans.

Now, if we keep AI to these constrained things, then yes, it can produce more
toys and products that can be sold. It's the next evolution in smart
products. Corporate America can keep cranking out new and evolved products to
sell, and slap an AI sticker on them.

And have you noticed? Everyone is slapping an AI sticker on everything. It's
like Microsoft and the .Net branding, or Sun and the Java branding, all over
again. But this time, everyone is calling their little algorithm AI.

But beyond that, it's just another gimmick.

Deep Learning is an advanced form of OCR. Facial recognition is an advanced
form of OCR of the face. Do we consider OCR to be AI these days? No, we
don't. We just think of it as a wonky pattern recognition engine that half
the time doesn't work, and the other half is frustrating enough that we just
type the text out ourselves. In fact, OCR uses neural network algorithms.

It's not an evolution that we need in order to achieve AI; it's a revolution.
And nothing I've seen so far has convinced me that it can be achieved. In the
meanwhile, if you're a newly certified 'AI Expert', then cash in as much as
you can. But.. beware. Winter is coming.

~~~
lm28469
> When did xyz become AI

When it started to become a buzzword for PR and for attracting more investor
money.

------
randomsearch
Recently I’ve been reading a lot about AI and ML.

It’s quite clear to me what ML is: solving classification and regression
problems. There are some fuzzy edges, but that’s true of any discipline.
Maybe you invoke Mitchell’s definition and say “well, it’s improving on a
task” (as many introductions to ML do), but that’s completely out of step
with what people actually treat as ML.
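
Concretely, by “classification and regression” I mean the standard supervised
setup; a minimal sketch using scikit-learn, with made-up toy data:

    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = [[0.0], [1.0], [2.0], [3.0]]  # one feature per sample

    # Classification: predict a discrete label.
    clf = LogisticRegression().fit(X, [0, 0, 1, 1])
    print(clf.predict([[1.5]]))       # a label, 0 or 1

    # Regression: predict a continuous value.
    reg = LinearRegression().fit(X, [0.1, 0.9, 2.1, 2.9])
    print(reg.predict([[1.5]]))       # roughly 1.5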

There are lots of interesting “learning automation” areas that we’re
neglecting - symbolic reasoning being the glaring one for me.

AI just seems like a nonsense term to me. Maybe it’d be better if we stopped
using it.

------
cjauvin
Following the recent "AI Debate" between Yoshua Bengio and Gary Marcus [0],
there was a lot of discussion about the exact definition (or even
redefinition, as some argued) of labels like "deep learning" and "symbol" -
what do we mean exactly by these? I find it quite relevant to this
discussion.

[0]
[https://www.youtube.com/watch?v=EeqwFjqFvJA](https://www.youtube.com/watch?v=EeqwFjqFvJA)

------
jariel
I don't really agree; I think the misnomer should be applied in the opposite
direction: AI should be called 'adaptive algorithms' and it should be just
another tool in the toolbox of CS people.

We're not doing anything that we were not doing before.

There is no new paradigm shift. There is no AI. There's just a slightly new
approach to solving problems. That's it. There are some really nice
improvements in computer vision ... and a few other things ...

... but all this talk of 'intelligence' etc. should be brushed aside, it's
misleading to everyone.

There will be no 'general AI' with our current approaches for a whole variety
of reasons.

I'm embarrassed at how so many intelligent colleagues drink the kool-aid on
this.

Take classical ML: it was hyped for a while; now it's not as exciting as
'Deep Learning'. Well, in a few years I think that DL will be there as well:
just a tool in the toolbox.

------
YeGoblynQueenne
>> Our field isn't quite "artificial intelligence"

True, but so what? We call it AI and that's that, really. We've been calling
it that for 70 years now and it's never been a problem.

And let's be absolutely clear that it's not the _name_ that's confusing the
public but the way that industry luminaries promise autonomous cars and
robotic maids in the next few years, or the way that the technology press
-the _technology_ press- can't get its shit together to figure out the
difference between "machine learning", "deep learning" and "AI" as fields of
research and as category labels. Of _course_ the lay public is going to be
confused if people who are paid to elucidate complex concepts make a mess of
it.

~~~
joe_the_user
> " _True, but so what? We call it AI and that 's that, really. We've been
> calling it that for 70 years now and it's never been a problem._"

That isn't even ... _true_. AI became "machine learning" in the late 90s/early
2000s and that change happened because the chorus of criticism of "artificial
intelligence" had become extremely loud and a less ambitious term served as a
refuge.

~~~
YeGoblynQueenne
AI was renamed into many things in the '80s and '90s, for example
"Intelligent Systems" or "Adaptive Systems", and that indeed was done to
dissociate research from the bad rep that had accrued to AI. But "machine
learning" has been the name of a subfield of AI since the 1950s, and it has
never stood for the whole, at least not in conferences, papers or any kind of
activity of the field.

For example, two of the (still) major conferences in the field are AAAI and
IJCAI: the conference of the "Association for the Advancement of Artificial
Intelligence" and the "International Joint Conference on Artificial
Intelligence". Neither of those is in any way, shape or form a conference for
machine learning only, and neither uses machine learning as a byname for AI.
By contrast, machine learning has its own journal (journals, actually), and
there are specific conferences dedicated to machine learning and deep
learning (NeurIPS and ICLR).

Additionally, there are many sub-fields of AI that are not machine learning,
in name or function: intelligent agents, classical planning, reasoning,
knowledge engineering etc etc.

The only confusion between "AI" and "machine learning" exists in the minds of
tech journalists and the people who get their AI news exclusively from the
tech press.

P.S. As a side note, the name for what the tech press is doing, referring to
the field of AI as "machine learning", is "synecdoche": naming the whole by
the name of the part.

------
amrrs
François Chollet's discussion with Lex Fridman (first half) is an interesting
one on AGI - Video -
[https://youtu.be/Bo8MY4JpiXE](https://youtu.be/Bo8MY4JpiXE)

------
dr_dshiv
One thing I find strange is how much we emphasize the artificial nature of
the intelligence. AI and automation always occur in the context of human
processes. Nothing is truly autonomous, so why design it as if human
involvement were a failure? We can easily design artifacts to enhance human
intelligence or team intelligence. Why the focus on the machine part and not
on the overall system that functionally accomplishes the desired work?

~~~
joe_the_user
> _One thing I find strange is how much we emphasize the artificial nature of
> the intelligence._

We really don't know what intelligence (sans qualifications) is. AI has been
a term for the effort to emulate what we roughly think of as "intelligent"
behavior. It's far from successful so far, and the lack of a "theory of
intelligence" is probably part of that. But it's pretty clear that what "AI"
researchers and systems are doing now is far from intelligence.

> _AI and automation always occurs in the context of human processes. Nothing
> is truly autonomous, so why design it as if human involvement is a failure?_

This argument makes as much sense as "we'll never exceed the speed of light,
so why act like faster transportation matters". An automated factory still
requires some maintenance, but its creation certainly is significant.

> _We can easily design artifacts to enhance human intelligence or team
> intelligence. Why the focus on the machine part and not the overall system
> that functionally accomplishes the desired work?_

Both approaches matter and since there's really nothing keeping people from
doing both of these, people pursue each separately. Moreover, I'd say AI
research could do well to cross-pollinate with human-computer interaction
theory.

But overall, you seem to just not understand why automation matters:
automation has brought vast productivity gains in a variety of fields. It may
or may not be possible in further fields, but if it is, it will transform the
world equivalently.

------
qwerty456127
I'd rather call it cognition imitation, if you insist on associating it with
cognition or intelligence. In fact it's just brute-force statistics.

~~~
savanaly
Are we so sure human cognition isn't this too?

------
mcculley
I have been wondering why we don't use the term "synthetic intelligence":
[https://enki.org/2019/08/18/artificial-intelligence-is-a-dumb-term/](https://enki.org/2019/08/18/artificial-intelligence-is-a-dumb-term/)

~~~
UncleOxidant
That just replaces the first word with what is essentially a synonym. It's the
second word "intelligence" that's the issue.

~~~
mcculley
"artificial" and "synthetic" aren't exactly synonyms in my mind. If I
synthesize glucose, there is nothing artificial about it. It just didn't come
from a process developed by evolution. Conversely, artificial leather is
nothing like real leather.

I'll have to think about that some more.

~~~
TheRealPomax
But again, it's the "intelligence" part that's the misnomer. Except for John
Carmack, no one's trying to invent general intelligence. Every single bit of
work is merely automating tasks that _when performed by humans_ require
intelligence... except that too is a misnomer, because as humans we literally
can't do anything, no matter how mundane, without it "requiring
intelligence".

------
radarthreat
Cognitive automation is a bit much; progressive pattern matching is more
accurate.

------
liamcardenas
In my opinion, even calling it “cognitive” is too generous.

What makes it “cognitive” instead of just “normal” automation? Because it’s
dealing with information rather than the physical world?

I think a better term is statistical or digital automation.

------
dsr_
All programming is, is the reification of decision making.

------
ratsmack
I like this comment:

>At the end of the day, "AI" is just glorified statistics (running on
increasingly powerful computers).

~~~
raven105x
Depends on what you mean by "AI". SageMaker / GCP / Azure / other ez-bake
business "AI"? Sure.

Alas, at the forefront it's much harder to attribute GAN and Transformer
performance to statistics.

------
choonway
Nope. It's just pattern recognition.

~~~
sgt101
What is? I assume you mean machine learning? OK... What about one-shot
learning, like Lake and Tenenbaum's BPL? What about optimal resource
allocation in auctions? Is this recognising patterns in event spaces larger
than the number of atoms in the universe?

------
shmerl
I think a better term you are contrasting it with is artificial mind, not
artificial intelligence.

------
eanzenberg
Call it whatever you want, I don't care. It's working and improving year over
year.

------
jeromebaek
nobody likes "artificial intelligence". laypeople are scared of it. the media
blames it for all evils. it brings connotations of frankenstein, arrogant
atheism, etc. it's reasonable to want to hide behind a different term.

------
deesep
When machines transcend their programmed limitations to shape their
environment in their own image, then they become truly intelligent.

~~~
ForrestN
Why would a non-human intelligence necessarily have a drive to “shape their
environment?” Maybe a non-human intelligence would discover the inevitable end
of the habitable universe and opt to just do nothing?

~~~
whatshisface
Nobody's going to pay for AWS hours for a lazy robot. They'll keep changing
it until it does something. The human drive, which is not essentially
rational, will give birth to the machine drive, which won't be rational
either.

------
0xff00ffee
I hear this a lot: "Wellll, Machine Learning isn't TRUE artificial
intelligence..."

Seriously people: are you THAT insecure? I realize it is a cutthroat hiring
market and companies are stuffing extra zeroes onto signing bonuses to get
anyone with an AI background, so maybe there is lots of jealousy and FOMO. I
dunno, but it seems awfully gatekeep-y all of a sudden.

------
knolan
It’s curve fitting.

~~~
gfodor
It seems unclear if your brain is also curve fitting. Time will tell
hopefully.

------
LoSboccacc
not even, it's multivariate regression analysis optimization
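
A minimal sketch of exactly that, for concreteness (numpy assumed, data
synthetic): fit weights w minimising the squared error ||Xw - y||^2.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                # 100 samples, 3 variables
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)  # noisy linear targets

    # The "optimization": minimise ||Xw - y||^2, here in closed form.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w)  # recovers approximately [2.0, -1.0, 0.5]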

~~~
anticensor
It is automated (not _automatic_) cognition using multivariate regression
techniques; after all, there is something to automate.

~~~
airstrike
I think many take issue with the use of the word "cognition" in that
definition.

------
0xdeadbeefbabe
it's automation

Edit: artificial automation

Edit: computer science

Edit: pseudo science?

