
AI is mostly about curve fitting (2018) - Anon84
https://diginomica.com/ai-curve-fitting-not-intelligence
======
ipsa
"Machine learning" used to be a safe haven. You could flee there to escape the
Terminators and brain-on-a-chip graphics. Business PR deliberately killed
that. They wanted their ML algorithms to be referred to as AI, so they could
fully ride the hype train.

AI used to be a tight, quirky community. Having the brain as inspiration led to
all sorts of anthropomorphizing. This was fine. Researchers understood what was
meant by "learning", "intelligence", "to perceive" in the context of AI.
Nowadays, it is almost irresponsible to do this, not because you'll confuse
your co-researchers, but because popular tech articles will write about
chatbots inventing their own language and having to be shut down.

Still, as a business research lab, it is good to get your name out there, so
all the wrong incentives are in place: careful researchers avoid
anthropomorphizing and lose their source of inspiration -- you cannot be
careful with difficult unsolved problems, you need to be a little crazy and
"out there". Meanwhile, profit-seeking business engineers and their PR
departments obfuscate their progress and basic techniques, all to get that
juicy article with "an AI taught itself to X and you won't believe what
happened next".

The researchers actually busy solving the hard problems of vision, natural
language understanding, and common sense, do not have time to write books
about how AI is not yet general. Nobody from the research community ever
claimed that it was; nobody came forward to claim they've solved these
decades-old problems. It is people selling books railing against the popular
reporting of AI. Boring, self-serving, and predictable, and you do not need to
fit a curve to see that.

All this quarreling about definitions and Venn diagrams and well-known
limitations is dust in the wind. Go figure out what to call it on your
PowerPoint presentation by yourself, and quit bothering the community.

~~~
erikpukinskis
What’s wrong with anthropomorphizing?

I’ve noticed at least as many people under-anthropomorphize as over. People
who seem obsessed with human exceptionalism and are personally offended at the
idea that plants and animals (and computers!) might have subjective
experiences like our own.

But to me it seems obvious we are far more like “lower” species than we are
unlike them. I would say the cases of human exceptionalism are actually
extremely rare. The main source of our uniqueness is that we amalgamate other
species, not that we have transcended them.

My theory is that we are terrified that we might be simpler than we think,
because socially we behave as if we are so singular. If we are simple, and
animals and machines are like us, then maybe we should be treating them with
more reverence.

But being afraid of that is OK for a random person. For a machine learning
researcher I would hope they are more careful about what we have evidence for
(the similarities between us) and what we don’t (that there is some ineffable
magic about humans).

~~~
projektfu
Anthropomorphizing is dangerous because it leads to metaphor that can both
ascribe too much to the subject and create blind spots in the minds of
researchers. Saying, for example, "Dogs want love," is fine for the owner but
problematic for a researcher because love, as we understand it, is a human
state. We'll never really understand what it means for a dog to feel loved. To
the ethologist that is not to say that there are not similar emotional
processes for dogs, it's to say that they cannot be understood by analogy to
the human ones.

It's sort of like the color perception problem [1]. Dogs and machines do see
colors, but what do they see?

1. https://newrepublic.com/article/121843/philosophy-color-perception

~~~
dghughes
Or people who say "The computer thinks...". No it's a machine that only does
what people make it do.

~~~
roenxi
We've seen that threshold crossed with neural agents like AlphaGo, which can
be reasonably described as thinking. It decides if moves are good or bad after
a little pause for processing, its decisions improve with time, and it has an
opinion on the state of play. The opinion is formed using basically the same
data as a human's, and different iterations of the neural network can hold
different opinions while a link remains between each and the previous one.

I don't see a test that majorly distinguishes it from a human. It appears to
be following the same process with a few tweaks around the edges. There are
some exceptions in the 2-5 situations in Go where a human can actually use
optimised logic to determine what will happen; but they aren't the meat of the
game.

~~~
drongoking
> We've seen that threshold crossed with neural agents like AlphaGo which can
> be reasonably described as thinking.

I don't recall ever reading, in a technical paper or an interview, a leader
in the field of ANNs claiming they were thinking. If you have, I'd like to see
a reference. Most are fairly honest about the differences between artificial
neurons and real ones, and between human cognition and what ANNs are doing
with data.

~~~
erikpukinskis
Is “thought” even a well defined scientific term? I doubt neuroscientists
write about it either.

------
Veedrac
The article is less awful than the title. In short, the thesis is that ML
seems only able to learn associations, rather than stronger, causal models.

It should be fairly obvious that ‘curve fitting’ is a misleading
category—these models are clearly learning highly meaningful latent spaces
that no prior approaches ever did. But I would agree that the actual high-
level ability to make causal inferences seems to be lacking.

Where I disagree with Pearl is simply with the idea that these stronger models
won't emerge through future research. It's too early to say this, after barely
a decade of large-scale AI research that has been undergoing continual rapid
progress. Greater generality and more powerful models are some of the most
well-established goals of the field.

~~~
darksaints
> But I would agree that the actual high-level ability to make causal
> inferences seems to be lacking.

That should be expected. Humans also lack the ability to make causal
inferences. The vast majority of us have extremely primitive causal reasoning
abilities and get even simple causality wrong. Reasoning about causality in
complex systems still isn't a solved problem for humans, and we have entire
fields within philosophy trying to make sense of the hundreds of paradoxes
within causal reasoning. It's not clear that it ever will be solved.

~~~
lowracle
I am glad to read this, because it sounds weird to me that so many people
blame AI for not being able to do causal reasoning, when we humans don't even
do "strict" causal reasoning in my sense of the term. I feel like we just
learned a small subset of causal rules, just like AI algorithms do, by having
experienced a lot of events that followed those rules.

------
cs702
This article conflates two separate, _very different_ issues into one:

* Issue #1: There is a tremendous amount of hype, noise, and snake oil surrounding the moniker "AI." Pretty much _everyone_ agrees with this statement. (And anyone who doesn't agree with it... is probably selling snake oil.)

* Issue #2: Is intelligence just a form of "curve fitting," i.e., is it just finding solutions to very complicated, high-dimensional problems with more and more computation via search and self-play? (Note that DL via SGD is a form of learning via search, RL via state-action-reward mechanisms is a form of learning via search, and multi-model/multi-agent DL/RL are forms of learning via search with self-play.) There is sharp disagreement on the answer to this question. Is that really all there is to intelligence?
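
As a toy illustration of "learning via search" (my own minimal sketch, not any specific framework or the article's example): gradient descent literally searches a two-parameter space for a line that fits the data.

```python
# Toy "learning as search": plain gradient descent searching the
# parameter space (w, b) to fit y = 2x + 1. Data is synthetic.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]

w, b = 0.0, 0.0  # start somewhere in parameter space
lr = 0.05        # step size of the search
for _ in range(2000):
    # gradient of mean squared error with respect to (w, b)
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb

print(w, b)  # converges close to (2.0, 1.0)
```

Whether one calls this "search", "curve fitting", or "learning" is exactly the terminological dispute in issue #2; the mechanics are the same either way.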

The OP believes the answer to #2 is _no_ : intelligence ought to be more than
"curve fitting" via search and self-play.

Others believe the answer to #2 is _yes_. For a typical example of their
thinking, here's Rich Sutton, Distinguished Research Scientist at DeepMind:
[http://www.incompleteideas.net/IncIdeas/BitterLesson.html](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)

~~~
TheRealPomax
If we're adding weight to Rich Sutton with that last sentence, can we also
mention that Pearl is a highly decorated and distinguished researcher, author,
and professor in the field? Because it seems odd to call him "OP" when his
work has shaped the field of machine learning as much as it has to this very
day.

~~~
cs702
Judea Pearl is NOT the author of the piece.

The author is someone called Kurt Mako.

~~~
jtloong
True but most of the content of the article is based around Pearl's thoughts
on the field.

~~~
cs702
I would disagree. Most of the article consists of the author's own musings on
quotes from Pearl and others.

------
galangalalgol
Depending on how complex a curve, and how many dimensions it is in, couldn't
you argue that is essentially what our brains do as well? Not that I am
defending the massive hype field that is ML today, but curve fitting is a form
of intelligence.

~~~
crazygringo
I would say fundamentally no -- and Douglas Hofstadter is probably the
foremost spokesperson against intelligence being curve fitting.

General intelligence is primarily about _developing_ useful conceptual
categories (not mapping to existing ones) and drawing cause-and-effect
_inferences_ that assist us in achieving _goals_.

Curve fitting is just another name for pattern recognition, mapping to
_previously defined_ categories. I would personally argue there's no
intelligence there whatsoever. Intelligence can't exist _without a foundation_
of pattern recognition, but it isn't the same thing.

Intelligence is fundamentally goal-directed and able to reason, while curve-
fitting is fundamentally not.

(There is also unsupervised learning in deep learning, which doesn't use
previously defined categories, but since it is similarly non-goal-directed, I
would still argue that this is merely dimension reduction as opposed to
intelligence -- useful for sure, but not the same.)

~~~
codingslave
We don't know anything about AGI, and we aren't sure how it would work. It's
impossible to make assertions like this.

~~~
chongli
We know a great deal more about biological general intelligence than we do
about AGI. Many animals create their own goals and work to achieve them. We
learn what works and what does not, and adapt our strategies to compensate.
Humans do this (obviously) but a lot of other intelligent animals can do it as
well.

One of my favourite examples is the New Caledonian crows who have learned to
use traffic to crack nuts [1]. Here, a crow had no pre-defined objective
function apart from "eat food to stay alive" and has accomplished something
remarkable. It found a food source that it had never had access to before, it
developed a complex model of its urban environment, it combined its knowledge
of the problem (the hard nut shell) with its knowledge of its environment
(cars crush small objects), and it constructed a sophisticated strategy for
using cars to crack open the nuts and for fetching the contents when the
traffic lights indicated it was safe to do so.

This is general intelligence!

[1]
[https://www.youtube.com/watch?v=BGPGknpq3e0](https://www.youtube.com/watch?v=BGPGknpq3e0)

------
lordnacho
I actually started reading The Book of Why and I recommend it to the HN crowd.
Pearl does a really good job of going through the history of causality,
including the quite interesting story behind the claim that "correlation is
not causality", now known to be wrong (or at least incomplete).

He then places causality on a 3-rung scale. The bottom rung is association in
data, which is where he says AI is stuck. Then there's intervention, "what if
I do this thing?", and then there's counterfactuals, "what if I had done some
other thing?"

He then makes a case for what intelligence actually is, and unsurprisingly
it's getting up those three rungs. The method revolves around directed graphs,
which have certain unintuitive properties. For instance, if A and B can both
cause C, then given that C happened, learning that A is unlikely makes B more
likely. There are a few other stock situations in various graphs that he walks
through as well.
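
That "explaining away" behaviour of two causes with a common effect can be checked with a few lines of arithmetic. The sketch below uses made-up priors and a deterministic OR for the effect; these are my illustrative assumptions, not an example from the book.

```python
# Two independent causes A, B of a common effect C (a "collider").
# Priors and the deterministic OR are illustrative assumptions.
P_A, P_B = 0.1, 0.1

def joint(a, b):
    """P(A=a, B=b, C=1), assuming C fires iff A or B fires."""
    if not (a or b):
        return 0.0  # C cannot occur without a cause
    pa = P_A if a else 1 - P_A
    pb = P_B if b else 1 - P_B
    return pa * pb

# P(B=1 | C=1): condition on the effect alone.
p_c = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
p_b_given_c = sum(joint(a, 1) for a in (0, 1)) / p_c

# P(B=1 | C=1, A=1): learning A occurred "explains away" C,
# dropping B back toward its prior.
p_b_given_c_a = joint(1, 1) / (joint(1, 0) + joint(1, 1))

print(p_b_given_c, p_b_given_c_a)
```

Observing C raises B from its 0.1 prior to about 0.53; additionally learning that A occurred pushes B back down to 0.1. A and B are independent until their common effect is observed, which is the unintuitive property described above.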

In the end the point seems to be that we could have a causal machine if we'd
spend some more time on it. It would take data and try out some potential
graphs, and some of the graphs would be ruled out by the data. And then some
algo would tell you things like whether a randomized trial is necessary or
whether you could do without one (yes, this is another revelation).

I think there's also an argument that this is how people actually think, which
makes sense because the graphs are not terribly large and they need to fit in
your meat hardware. I haven't finished it but I would guess that you could
take it in another direction and say this is why some animals sorta have
intelligence in that they learn patterns, but they don't know the higher
rungs.

Really interesting ideas, and at least clarifies what we mean by causality.

~~~
disqard
This is a great summary. Thanks for sharing it!

------
buboard
Thanks for the link to the J. Pearl interview, it's very interesting. There
are many counterpoints that are not examined though:

- There's nothing wrong with curve fitting per se. NNs fit hundreds of curves
in parallel and many of them may contain cues about the causal structure of
the data.

- Deep learning has become part of reinforcement learning, which is trying to
learn a causal structure. The primary determinant of causality is the temporal
order of cause and effect. The question is: do humans use other hints apart
from time for causal inference?

- There is also not much evidence from neuroscience that wet brains are
causality-inference machines; most of the evidence is that they are decision-
making machines. Humans are also pretty bad at inferring causality when it's
not obvious, but we're pretty good at associations/patterns.

- Reasoning (conscious) is often considered to act on a meta-level, which
observes the internal action of the human brain itself and vocalizes what it
sees. What the brain sees at this level is not the external world, but the
representation, and we don't have evidence there is a model of the world in
there (except perhaps the temporary maps of space that exist in the
hippocampus). Assuming this is true, it's not impossible that current methods
can be extended so that self-explaining an NN ends up being causal reasoning.

The more important question is whether any of these methods can lead to the
remarkable ability of brains to generate extremely intricate and improbable
causal chains. Can we get a CNN to start from a photo of Maxwell's equations
and output the theory of relativity? Who knows.

------
umarniz
I think intelligence has for some time already been boiled down to curve
fitting, even for humans. Our current accepted definition of intelligence in
schools is to get a score that is higher than the average to be considered
sufficiently intelligent to proceed to the next grade.

I feel anything that we develop for AI would fundamentally always be inspired
by our own experiences and hence curve fitting is something we understand to
be the best metric to optimize for.

~~~
lordnacho
> I think intelligence has for some time already been boiled down to curve
> fitting, even for humans.

No, the book actually makes an important point about this.

For instance, let's take a counterfactual. How do you know what would have
happened if Barcelona had played Lionel Messi in goal over the latest season?

They've never done this; there are no data points for you to fit. The
situation almost never arises that an outfield player plays in goal, and when
it does, it is always because someone has been sent off or injured, which is
also rare.

And yet you and I and everyone else who can think knows this would result in
Barcelona having a much worse season.

Just to be clear, it's not only because you'd be taking a top player out of
offense where he is worth a lot, which you can surely fit some curves to show.

We can all guess that he'll be a worse keeper than Ter Stegen, but what curve
would you fit that shows this? There's no data about Messi in goal.

Pearl does give a way to work it out via counterfactual analysis though.

~~~
Invictus0
If we say that goalkeeping requires certain skills, and being an outfielder
also requires certain skills, these skills being graphed on a chart, it's
simple to see that Messi would not perform well as a goalkeeper, and this is
still curve fitting. If it turns out that the goalkeeper and outfielder have a
lot of overlapping skills, we might fall back and refer to a probability curve
of things that normally happen; "players require practice to excel in their
position" would seem to fall on that curve.

The fact is that everything in the world can be reduced down to curves. It's
just a matter of your perspective.

~~~
lordnacho
> If we say that goalkeeping requires certain skills... If it turns out that
> the goalkeeper and outfielder have a lot of overlapping skills...

That's kinda the problem here. Once you have a model, fitting curves is fine.
But you need a structure from somewhere.

> players require practice to excel in their position

The problem with this is there's no data. Nobody gets to play a position they
haven't practiced for. Yet you still somehow came to the right conclusion.

Anyway Pearl is much better at explaining this than I am.

------
wwarner
Some really insightful discussion of the possibilities and limits of ML can be
found on Lex Fridman's AI podcast. Especially good were the interviews with
Yann LeCun, Jeff Hawkins, Elon Musk and Francois Chollet. One memorable quote
from LeCun: "There is no intelligence without learning." Here's a link to the
LeCun interview, though many others are excellent:
https://lexfridman.com/yann-lecun/

~~~
tomjakubowski
The interviews with researchers and engineers in the field seem like they'd be
interesting, but I don't see it for Musk. What special insights does he offer
into AI?

~~~
wwarner
Tesla is deeply invested in self driving technology. He discussed the
importance of data in the project, and he believes that Tesla is collecting
20x more data than any other player. He also believes that driverless will be
more than 100x safer than humans fairly soon.

------
abeppu
I suppose there are lots of papers and results that fall into the "only curve
fitting" bucket, but there are many exciting results in recent years that have
been curve fitting + X, where ANNs formed part of a system with other
components. While none of these approach the level of something you could have
a meaningful conversation with (which was an ambitious bar introduced in the
article) some do kind of consider multiple actions, work through consequences
of each, and pick a single action. This looks a lot like a crude form of
reasoning about causality and hypotheticals, though not quite along the lines
Pearl would like to see. But they typically do that in the context of a
specific task, with an enumerable set of available actions.

------
dr_dshiv
So, one view of AI is that it is machine learning. Another view is that it
consists of automated processes. So, if expert systems count as AI, or if
production rule systems count as AI, then pretty much any handcrafted if-then
statement counts as AI. And so too might any automated process.

This has the advantage of recognizing how human intelligence can be
automated and aggregated into system processes. And the disadvantage that the
boundaries of the concept start exploding.

I like cybernetics for providing a clear model of what constitutes
intelligence -- a feedback loop between perception and action that achieves
goals or lowers local entropy.

And, cybernetic systems can be artificial, natural or a mix

~~~
wayoutthere
I too hold the view that artificial intelligence is not reliant on computers.
With sufficiently complex business logic, it is impossible for a single person
-- even the CEO -- to have an end-to-end view (much less a significant
influence on it). The emergent behavior of large companies fits a definition
of an AI today; human "decision makers" are increasingly just reviewing and
approving the output of algorithms.

I think the end state for large corporations will be to automate away so much
of the human input that they end up looking like what we think of as "AI" in a
broader cultural context. But we already live in a world controlled by AI, and
we have for 3+ decades.

~~~
dr_dshiv
Yeah, I mean, it's a slippery slope, but one worth sliding down. Is autopilot
AI? If so, it was invented in 1914. Are speed governors AI? That was James
Watt. Are if-then statements AI? That's the basis of human laws, dating back
thousands of years.

Are corporations AI, or rather superintelligences, because they are groups of
people bound by bylaws? If so, the whole damn world is AI through and through
and machine learning is really just the tip of the iceberg.

~~~
wayoutthere
Yeah, I guess that's my point. Humanity has been guided by largely autonomous
systems for much of our existence, but only recently have those systems become
sufficiently complex as to remove the _possibility_ of human intervention in
some processes due to the complexity involved -- nobody can intervene if
nobody can understand the whole story.

I think this is why late-stage capitalism feels so bad to live under: these
artificial systems (corporations), by design, stand in direct opposition to
our humanity. They exist to prevent our humanity from getting in the way of
their perpetual growth by abstracting any ethical problems away so that no
human is faced with an ethical dilemma. Which explains why the world so often
feels like a dystopian nightmare.

------
Lambdanaut
I'm pretty sure that I'm just a curve trying to fit what it means to survive
as a human.

~~~
virmundi
The difference is that you have metacognition. You can think about your
fitting. It’s possible to say that such a process is just another set of
parameters, but they act retroactively. At that point the metaphor of
curve fitting might break down.

------
scottlegrand2
When I was working on recommenders at Amazon, I didn't really perceive this as
modeling the mind of some sort of oracle.

Instead I saw it as a change of basis from the space of previous purchases to
the space of recent purchases. That approach led to an entire AI framework and
it still makes Amazon quite a bit of money annually apparently if Jeff Wilke's
2019 Re:MARS speech is to be believed.

I'd love to work on something more ambitious like AlphaGo or AlphaFold but
those require tremendous resources and I'm really focused on bang for buck.
But even then I'd see it as the marriage of classical search with modeling the
probability of victory.

If someone says that AGI is almost upon us, I pretty much flip the bozo bit on
them no matter how prestigious or fancy they may be otherwise.

------
madacol
"Humans today and tomorrow are mostly about curve fitting, not intelligence"

That would still make sense.

------
cjauvin
To me, this debate is akin to wondering whether the essence of computing
resides in the lower-level, Turing-like operations of a microprocessor or
rather in the higher-level constructs and abstractions that we're able to
build way "above" them. Whatever it is, intelligence, whether implemented
artificially or embodied in the substrate of a living being, is likely built
on a ladder of subsystems interoperating at different levels of abstraction.

~~~
visarga
It's like a form of incredulity that a higher level can exist on top of a
lower level. It's all 0's and 1's, so it can't be anything more.

------
psv1
I tend to avoid using the term AI at all and I mostly agree with the article,
but I still don't see this as any kind of drawback. Computers are great at
curve-fitting and there are many good use-cases for these tasks. We just need
to be clear that we are very far from the sci-fi vision of a single computer
which can understand, learn and know everything. Which is absolutely fine.

------
mark_l_watson
I use the term “AI” but couch it in the context that we are probably ten to a
hundred years away from general artificial intelligence. I set a lower bar in
my definition of general artificial intelligence: something that can be
narrow, but that has a world-view model for its activities and environment
which changes with experience, that can explain what it is doing in its narrow
field of expertise, and that can creatively develop new techniques for solving
problems in that narrow domain.

I have been paid to work in the field of AI since 1982 so I have experienced
AI winters. I almost hope we have another to act sort-of like simulated
annealing to get us out of the highly effective local minimum of deep
learning. I have been paid to work with deep learning for about the last five
years (except I retired six months ago) and I love it, a big fan, but it won’t
get us to where I want to be. Perhaps hybrid symbolic AI and deep learning? I
don’t know.

------
sjg007
Some DNNs compute posterior probabilities for classification. You could feed
those outputs into a Bayesian network for decision making, but you could also
train a NN to make those decisions. Still, there may be times when a
human-derived controller with AI as sensory input is better than an AI
controller itself.
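
A minimal sketch of that arrangement (the class names, costs, and logits below are hypothetical, not from any real system): softmax turns a network's logits into approximate posteriors, and a hand-written controller then picks the action with the lowest expected cost.

```python
import math

def softmax(logits):
    """Convert raw scores into (approximate) posterior probabilities."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# cost[action][true_class]: cost of taking `action` given the true class.
# Classes here are [obstacle, clear]; numbers are made up.
COST = {
    "brake":    [0.0, 1.0],   # cheap if an obstacle is present, wasteful if not
    "continue": [10.0, 0.0],  # very costly if an obstacle is present
}

def decide(logits):
    """Human-written decision rule on top of the network's posteriors."""
    post = softmax(logits)
    expected = {a: sum(p * c for p, c in zip(post, costs))
                for a, costs in COST.items()}
    return min(expected, key=expected.get)

print(decide([2.0, 0.0]))   # obstacle likely -> "brake"
print(decide([-3.0, 3.0]))  # clear -> "continue"
```

The asymmetric cost matrix is where the human judgment lives; the network only supplies the probabilities.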

------
ryeguy_24
It was so refreshing to read the title of this article. This couldn’t be more
true and timely. Everyone and their brother is talking about artificial
intelligence and it’s frankly annoying at this point. These models are, simply
put, just fancy interpolation/extrapolation approaches.

~~~
lostmsu
And we are not?

~~~
ryeguy_24
Point taken. I don’t have a good response to that.

------
avmich
> Our machines are still incapable of independently coming up with a thought
> or hypothesis, testing it against others and accepting or rejecting its
> validity based on reasoning and experimentation, i.e. following the core
> principles of the scientific method.

https://www.scientificamerican.com/article/robots-adam-and-eve-ai/

A prime example of this is "Adam," an autonomous mini laboratory that uses
computers, robotics and lab equipment to conduct scientific experiments,
automatically generate hypotheses to explain the resulting data, test these
hypotheses, and then interpret the results.

------
peter_d_sherman
This is a potentially interesting identity for AI, that is, as a curve-fitting
function...

I postulate (but cannot at the present moment prove!) that if there is proof
such that:

a) AI fits curves

and

b) that all polynomials (NX^M + ... + AX^2 + BX^1 + CX^0) are in fact
curves...

then (perhaps)

c) perhaps there is a link between polynomials (as curves) and AI...
that is, perhaps all AI can be thought of as a function with a polynomial
solution, such that F(X) = NX^M + ... + AX^2 + BX^1 + CX^0, with F(X) being
the AI in question...

I leave it to mathematicians, logicians, and people who do dimensional
reductions/transformations -- to prove or disprove this...
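
One standard fact bears on this: any finite set of input/output pairs can be matched exactly by some polynomial, e.g. via Lagrange interpolation, so such an F(X) always exists for finite behaviour. The sketch below (with made-up data points) shows the construction; the interesting question is whether the polynomial generalizes, not whether it exists.

```python
# Lagrange interpolation: build the unique lowest-degree polynomial
# passing exactly through the given (x, y) points. Data is made up.
def lagrange(points):
    """Return a function interpolating the given (x, y) points."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Interpolate an arbitrary "behaviour" observed at three inputs.
f = lagrange([(0.0, 1.0), (1.0, 3.0), (2.0, 9.0)])
print(f(0.0), f(1.0), f(2.0))  # reproduces the data exactly
print(f(1.5))                  # interpolates in between
```

Here the three points happen to lie on y = 2x^2 + 1, so f(1.5) comes out as 5.5.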

------
steenreem
There's a lot of talk in this thread about what intelligence is. Personally I
found the AIXI model from algorithmic information theory a very convincing
definition:
[https://en.wikipedia.org/wiki/AIXI](https://en.wikipedia.org/wiki/AIXI)

I'm surprised it's not more generally accepted as _the_ definition. Note that
Markus Hutter, the main author, is currently working at DeepMind.
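
For reference, the AIXI action rule is usually stated roughly as follows (paraphrased from the literature, so the exact notation may differ: U is a universal Turing machine, q ranges over environment programs of length ℓ(q), the o and r are observations and rewards, and m is the horizon):

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_k + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Informally: pick the action that maximizes expected future reward under a Solomonoff-style prior that weights every computable environment by the length of its shortest program.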

------
anonu
In the financial markets, multiple linear regression is a very popular tool
for predicting the markets. That tool has been around for over a century and
has been used extensively to make sense of financial data for decades now.
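
For concreteness, a bare-bones version of such a regression on made-up "factor" data (purely illustrative; no real market data, and a pure-Python normal-equations solver rather than any statistics library):

```python
# Ordinary least squares via the normal equations (X^T X) beta = X^T y,
# solved by Gaussian elimination with partial pivoting.
def ols(X, y):
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, n))) / A[i][i]
    return beta

# Rows: [intercept, factor1, factor2]; "returns" built as 0.5 + 2*f1 - f2.
X = [[1.0, 0.1, 0.2], [1.0, 0.4, 0.1], [1.0, 0.3, 0.5], [1.0, 0.8, 0.3]]
y = [0.5 + 2 * f1 - f2 for _, f1, f2 in X]
print(ols(X, y))  # recovers the coefficients [0.5, 2.0, -1.0]
```

Rebranding the fitted betas as "predictive AI" changes nothing about the model, which is the commenter's point.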

I've always eyed companies and startups that boast predictive AI for the
financial markets with suspicion. I just always assumed whatever they were
doing was a glorified regression model... But you have a lot of shameless
people who will gladly slap some AI jargon in their business plan just to get
eyeballs.

------
mycall
> Pearl contends that until algorithms and the machines controlled by them can
> reason about cause and effect, or at least conceptualize the difference,
> their utility and versatility will never approach that of humans.

Isn't cause and effect a temporal, directed acyclic graph with mutating state
(in CompSci terms)? Anyway, I think ML needs to look closer at simple cases,
like sea squirts, which have 8617 neurons. Try to answer how and why they work
better than some of our algorithms.

------
unityByFreedom
Great article. Why _does_ AI hype persist? Simply because we're seeing
moderate gains?

Newsflash: we're likely to continue seeing gains from ML tech for decades.
Basic techniques may become part of core CS. With all the private siloed data
out there, ML will need to be applied uniquely over and over.

In decades to come, if marketing continues as-is, are salesmen going to keep
laying claim to the moon (AGI) the whole time? It's hard to believe we can
remain on this tilt for so long.

------
huffmsa
I'd say this is outdated. Since its publication, transfer learning has really
started to pick up speed.

Taking a model trained on one subject and reapplying it to build out a new
vertical is pretty much how children learn new things.

We'll have models using models to train new models sooner rather than later.

If a program can create its own curve functions and decide which one to use
to evaluate a problem, is it that much different than a brain?

~~~
jgalt212
Isn't transfer learning just a faster way to fit a curve, by starting with a
curve that's partially fit instead of a random vector? So under this
interpretation, transfer learning is also "curve fitting."
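
That reading can be illustrated with a toy warm start (an analogy only; hand-rolled gradient descent on synthetic data, not a real transfer-learning pipeline): fitting the same line from a "pretrained" nearby parameter vector takes fewer gradient steps than fitting from scratch.

```python
# Count gradient-descent steps needed to fit y = 3x - 2, starting
# either from scratch or from nearby "pretrained" parameters.
xs = [float(i) for i in range(10)]
ys = [3 * x - 2 for x in xs]

def steps_to_fit(w, b, tol=1e-6, lr=0.005):
    """Run gradient descent until the gradient is tiny; return step count."""
    for step in range(1, 100_001):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
        if abs(gw) < tol and abs(gb) < tol:
            return step
    return None  # did not converge within the step budget

cold = steps_to_fit(0.0, 0.0)    # "random" initialization
warm = steps_to_fit(2.9, -1.9)   # "pretrained" on a nearby task
print(cold, warm)                # warm start needs fewer steps
```

Same curve, same fitting procedure; only the starting point differs, which is exactly the "partially fit curve" framing above.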

~~~
huffmsa
It is. But progress has been made taking models fit on pretty different topics
and using them for something tangentially related. And that's exactly what
humans do.

You don't start from zero when you learn to paint if you already know how to
draw. You take another model of behavior and use it as a baseline to develop
your painting behavior.

The fundamental thing that you can do that the computer can't right now is
decide when you need a new model, when to use an existing one and when to
transfer off of something.

------
bitL
ML is about function approximation, where the function can be anything --
mostly something practically intractable by the usual analytical or numerical
methods, such as observing real-time processes with millions of variables.

In math, via a reduction, human life is just a function of time with many
inputs and many outputs. I.e. you can do curve fitting there as well...

------
antichronology
Where is the proof that human "rational" thinking is anything but curve
fitting? I'm not convinced

------
euske
I tend to think of ML as a kind of automated program generation that
maximizes a certain metric given by a human. Some of the metrics are based on
real data but some aren't (as in RL). So it's certainly more than a curve that
needs to be fit. It's more like Turing machine fitting.

------
streetcat1
So RL is true AI. AlphaGo did make moves inconceivable to the best human minds
on the subject.

There was no curve to fit to.

~~~
bitL
The learning from the game itself was still curve fitting. The "Deep" in Deep
Reinforcement Learning usually means that some difficult function is replaced
by a deep neural network approximating optimal values (for moves), trained on
gameplay samples, usually in the sense of rewards/punishments for reaching
certain states; in games these could rank e.g. good/bad moves, winning states,
losing states, etc.
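The value-fitting loop described above can be shown at its smallest scale: tabular Q-learning on a made-up 5-state chain "game" where reaching the last state wins. This is obviously nothing like AlphaGo's deep network; it is only a sketch of the mechanism, where values for moves are fit from gameplay samples and rewards, and end up ranking good moves above bad ones.

```python
import random

random.seed(0)
N = 5                      # states 0..4; reaching state 4 wins (reward 1)
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy choice between moving left (-1) and right (+1)
        if random.random() < eps:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda m: Q[(s, m)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        best_next = 0.0 if s2 == N - 1 else max(Q[(s2, -1)], Q[(s2, +1)])
        # Fit Q toward the sampled reward-plus-bootstrap target.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The fitted values rank "right" (toward the win) above "left" in every state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N - 1)))
```

In deep RL the table `Q` is swapped for a neural network, but the update is the same regression step: fit the function toward targets sampled from play.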

~~~
streetcat1
Right. But the curve itself was invented by the machine.

~~~
mkl
I think the curve is defined by the rules of the game, and the machine learned
some details of it that humans hadn't figured out yet.

------
jorblumesea
Curve fitting is a useful tool, even in non-ML contexts like simple linear
regression. There's the hype, which will eventually die, and then there's the
business/engineering aspect, which will likely stick around.

------
yzh
Related:
[https://news.ycombinator.com/item?id=21459220](https://news.ycombinator.com/item?id=21459220)

------
jonplackett
Is it feasible to create AI on top of a bunch of other AI?

Current AIs seem more like specific, fairly simple brain regions. Maybe we
need a level up.

~~~
Lambdanaut
This is kinda just what libraries like pytorch and tensorflow make easy.
Architecting neural networks at a higher level and gluing them together.

~~~
jonplackett
I guess what I mean is at a much larger scale. Like thousands of networks or
maybe more, like I presume you get in the brain. Is this already happening?

~~~
sgt101
Well - it's what Minsky advocated...
[https://en.wikipedia.org/wiki/Society_of_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)

But I think it's a bit "hit and hope" - as an idea it doesn't really address
any questions unless/until it is realised and works.

------
verytrivial
And what if intelligence is actually just curve fitting? Maybe we (or AI) just
need to find the right number of the right curves.

------
le3dh4x0r
"AI is about X, not Y." Maybe we all should consider calling it what it is and
not push marketing buzz words around.

~~~
m_mueller
I call it pattern matching.

------
suyash
Let's repeat this and tell it to everyone:

Deep neural nets are not AI; they're just one powerful idea amongst many other
ideas within AI.

------
yters
What if human intelligence is not computable? Why does no researcher address
the fundamental assumption of the field?

~~~
robotresearcher
This has been addressed by many authors since modern computation was
formulated, for example by Turing himself. We learn about and discuss Newell
and Simon's Physical Symbol System Hypothesis in undergrad classes, i.e. by
explicitly stating the underlying assumption. Once in a while someone will
assert that computation is not sufficient, e.g. Penrose, and generate
discussion.

In practice you can work on AI-the-engineering-discipline without taking a
position on this. Perhaps this is why you feel that researchers don't talk
about it. Disciplined scientists tend not to take public positions on things
they don't know for sure.

~~~
yters
That seems like a pretty uninformed hypothesis. For example, halting oracles
can process symbols, but are not computable. So symbol processing may be
necessary, but it's easy to show it may not be sufficient (as I just did).

This has significant implications for the engineering discipline. For example,
if the mind is a halting oracle, then you can get much more performant
algorithms by incorporating human interaction.

------
The_rationalist
[https://why19.causalai.net](https://why19.causalai.net)

------
H8crilA
It's still not clear to me that natural intelligence is anything more than
curve fitting.

~~~
trevyn
Indeed; I think things will get interesting once a good number of people start
to realize and internalize this.

------
d--b
Well, the point is to know if intelligence is not about curve fitting after
all!

------
queensnake
Out of date. Reinforcement learning produces truly 'smart' actions and courses
of action. Calling something 'curve fitting' pooh-poohs it (especially 'just'
curve fitting); calling it 'distilling' gives it the respect that it deserves.

------
fionic
Absolutely love this article. Couldn’t agree more.

------
tomthe
Well, most of science is also just curve fitting.

~~~
scottlocklin
Pretty sure Maxwell's equations and the laws of thermodynamics didn't come
from curve fitting!

Experimentalists may have verified these ideas using some kind of curve
fitting, but thinking in abstractions, a.k.a. "a ball rolled out into the
street, maybe a child will follow," is one of the things curve fitting can't do.

~~~
gus_massa
Most of the terms in Maxwell's equations were added one at a time to fit some
restrictive set of experiments.
[https://en.wikipedia.org/wiki/Maxwell%27s_equations#Conceptu...](https://en.wikipedia.org/wiki/Maxwell%27s_equations#Conceptual_descriptions)
The last term was a leap of faith.

Planck's law for the black body
[https://en.wikipedia.org/wiki/Planck%27s_law](https://en.wikipedia.org/wiki/Planck%27s_law)
was the most successful curve fit in history. It took about 30 years to
discover quantum mechanics and understand the details.

------
fionic
I absolutely love this article

------
largespoon
If AI is mostly curve fitting, I'd like to know how curve fitting applies to
AI and chess.

------
atemerev
Well, 95% of human activity is about curve fitting, so what?

------
finnmagic
shhh, let's keep it that way

~~~
quikoa
Why? Afraid of Skynet taking over?

~~~
le3dh4x0r
If by Skynet you mean an army of limited chatbots, then yes. We are all
scared.

~~~
quikoa
You joke, but it could ruin plenty of forums!

------
tomerbd
also humans

