"Machine learning" used to be a safe haven. You could flee there to escape the Terminators and brain-on-a-chip graphics. Business PR deliberately killed that. They wanted their ML algorithms to be refered to as AI, so they could fully ride the hype train.
AI used to be a tight, quirky community. Having the brain as inspiration led to all sorts of anthropomorphizing. This was OK. Researchers understood what was meant by "learning", "intelligence", "to perceive" in the context of AI. Nowadays it is almost irresponsible to do this, not because you'll confuse your co-researchers, but because popular tech articles will write about chatbots inventing their own language and having to be shut down.
Still, as a business research lab, it is good to get your name out there, so all the wrong incentives are in place: careful researchers avoid anthropomorphizing and lose their source of inspiration -- you cannot be careful with difficult unsolved problems, you need to be a little crazy and "out there". Meanwhile, profit-seeking business engineers and their PR departments obfuscate their progress and basic techniques, all to get that juicy article about "an AI taught itself to X and you won't believe what happened next".
The researchers actually busy solving the hard problems of vision, natural language understanding, and common sense, do not have time to write books about how AI is not yet general. Nobody from the research community ever claimed that; nobody came forward to claim they've solved these decades-old problems. It is people selling books railing against the popular reporting of AI. Boring, self-serving, and predictable, and you do not need to fit a curve to see that.
All this quarreling about definitions and Venn diagrams and well-known limitations is dust in the wind. Go figure out what to call it on your Powerpoint presentation by yourself, and quit bothering the community.
I've noticed at least as many people under-anthropomorphize as over-anthropomorphize: people who seem obsessed with human exceptionalism and are personally offended at the idea that plants and animals (and computers!) might have subjective experiences like our own.
But to me it seems obvious we are far more like "lower" species than we are unlike them. I would say the cases of human exceptionalism are actually extremely rare. The main source of our uniqueness is that we amalgamate other species, not that we have transcended them.
My theory is that we are terrified that we might be simpler than we think, because socially we behave as if we are so singular. If we are simple, and animals and machines are like us, then maybe we should be treating them with more reverence.
But being afraid of that is OK for a random person. From a machine learning researcher, I would hope for more care about what we have evidence for (the similarities between us) and what we don't (that there is some ineffable magic about humans).
Anthropomorphizing is dangerous because it leads to metaphor that can both ascribe too much to the subject and create blind spots in the minds of researchers. Saying, for example, "Dogs want love," is fine for the owner but problematic for a researcher because love, as we understand it, is a human state. We'll never really understand what it means for a dog to feel loved. To the ethologist that is not to say that there are not similar emotional processes for dogs, it's to say that they cannot be understood by analogy to the human ones.
It's sort of like the color perception problem [1]. Dogs and machines do see colors, but what do they see?
You should go and read some of the material written by ethologists. Basically everything you said would be vehemently disagreed with by a large group of prominent ethologists. The term "anthropodenial" has even been coined to criticize your exact thinking and to describe the dangers of not anthropomorphizing enough. Not saying you can't overdo it, but the GP's comment is much more in line with thinking by modern ethologists. Frans de Waal is a good place to start.
Right, to be fair to you, this was a hotly debated topic in ethology (and still is, to an extent). However, I would say most modern ethologists have come out on the side of embracing evolutionary parsimony and viewing our human experience as a valuable asset for understanding animals (especially mammals).
Probably the most cited paper regarding this debate is by Marc Bekoff, "Cognitive Ethology: Slayers, Skeptics, and Proponents" (http://cogprints.org/160/1/199709005.html). Your original comment would be categorized as a "slayer", a position that is widely criticized. In fact, Bekoff's focus is on canines, and he used your exact example with dogs, but to opposite effect.
Phew, I'm surprised to see such an emotionally-charged article on the subject. Everyone who is uncomfortable with anthropomorphism is biased and misguided in some way, but extremist proponents are merely overly enthusiastic.
I do wonder about the theoretical bird scientist trying to figure out the "fixed action patterns" of other animals. If anthropomorphism is the way to go, surely it goes in the other direction in some way.
A review I just read (https://www.frontiersin.org/articles/10.3389/fpsyg.2018.0220...) suggests both of our viewpoints and seems to allow for a continuum of approaches without resorting to name-calling. I think that there's definitely stupidity in the history of "anti-anthropomorphism" if it's really true that people dismissed an article that started by saying bees appear to dance. After all, the fact that they have a behavior like that suggests something interesting is going on. It's also really easy to go overboard in simplifying animal behaviors to our own poorly-understood human behaviors.
We've seen that threshold crossed with neural agents like AlphaGo, which can reasonably be described as thinking. It decides whether moves are good or bad after a little pause for processing; its decisions improve with time; it has an opinion on the state of play; that opinion is formed from basically the same data a human would use; and different iterations of the neural network can hold different opinions while remaining linked to the previous ones.
I don't see a test that majorly distinguishes it from a human. It appears to be following the same process with a few tweaks around the edges. There are some exceptions in the 2-5 situations in Go where a human can actually use optimised logic to determine what will happen; but they aren't the meat of the game.
> We've seen that threshold crossed with neural agents like AlphaGo which can be reasonably described as thinking.
I don't recall ever reading, in a technical paper or an interview, a leader in the field of ANNs claiming they were thinking. If you have, I'd like to see a reference. Most are fairly honest about the differences between artificial neurons and real ones, and between human cognition and what ANNs are doing with data.
Chess is one of those areas where humans have developed computer-like abilities, such as exhaustive search. What's interesting is the appearance of intuition-like movement in modern chess computers, but is it ... intuition?
They are both a problem; people do think humans are somehow exceptional. We all agree that we are apes, but none of us want to admit when we get horny in public.
But ML, AFAIK, is so simple; it's literally a glorified polynomial function. The only thing it has going for it is the large data sets we can train it on. It cannot "learn" anything from a small data set and extract any information out of it without a human imposing his/her knowledge on it.
For instance, take the concept of an even number. This simple piece of knowledge is so powerful in solving algorithmic problems, but it's very hard to make a machine learn this concept in general.
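To make that concrete, here's a toy sketch of my own (numpy plus scikit-learn's MLPClassifier, my choice, not anything from the parent comment) contrasting the one-line human rule `n % 2 == 0` with a generic function approximator trained on examples:

```python
# Toy illustration (mine): how hard "evenness" is to learn from raw numbers.
# A human writes `n % 2 == 0`; a generic approximator trained on examples
# tends not to recover that rule at all outside the range it saw.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
train_x = rng.integers(0, 1_000, size=5_000).reshape(-1, 1)
train_y = train_x.ravel() % 2                                     # 0 = even, 1 = odd

test_x = rng.integers(10_000, 11_000, size=1_000).reshape(-1, 1)  # unseen range
test_y = test_x.ravel() % 2

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(train_x, train_y)
print("MLP accuracy on the unseen range:", clf.score(test_x, test_y))  # hovers around chance
```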
I think the problem is really overestimating how "intelligent" humans are. We are only as intelligent as our own imagination allows. It's possible that there is an entire class of intelligence outside of our imagination whose nature we cannot fully grasp. Similarly, I am only conscious with respect to my own consciousness, but there may be another class of consciousness that is unimaginable to this monkey brain.
Very well said. Also, curve fitting is not a corner case. Most relevant and intelligent things we care about can be solved with "just" curve fitting + extrapolation.
I think curve fitting is an important component of future AGI. But it definitely needs causal reasoning baked in, which leads to better models with less data [1,2].
My intuition is that there's a lot of important work to be done using logical representations of models and transforming them back and forth using well-understood semantic operators. Deep functions will be part of said models, but the whole model does not necessarily need to be deep. We can already see hints of the field going in this direction in deep generative models [3].
Causal reasoning is one thing that is lacking. But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
I personally do not believe in AGI since I also do not believe in psychology, sociology or neurobiology being anywhere near understanding the holistic nature of our own intelligence. We are getting better at emulating human traits for specific tasks with ML. We lack the specific knowledge of what the algorithm should mimic to become equal to us in terms of our intellect though.
>> But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
All this resulted from evolutionary processes. Any approximation of AI which will deal with other agents will develop something like that and more in order to be competitive, collaborate and survive.
> All this resulted from evolutionary processes. Any approximation of AI which will deal with other agents will develop something like that and more in order to be competitive, collaborate and survive.
How can we assume that a simulated evolutionary process of a simple mathematical model or some arbitrarily sized multi-dimensional matrices yields similar evolutionary results?
Just think of the ongoing debate about quantum entanglement effects inside the neural signaling process. On a rather ontological level, we are still unable to formulate even a definition of our consciousness, or of things like creativity, that lasts longer than a few academic decades.
> Causal reasoning is one thing that is lacking. But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
Hi, I work at one of the intersections of machine learning with certain schools of thought in neuroscience. The following is based entirely on my own understanding, but is at least based on an understanding.
Your list here really only has three problems in it: causal reasoning, theory of mind, and "emotional intelligence". Emotional intelligence works in the service of "drive and desire", considered broadly. Creativity likewise works for the emotions. To be creative, you need aesthetic criteria.
Most of that, we're still really working on putting into mathematical and computational terms.
Admittedly, that list is an arbitrary poke into areas of debate in your fields of profession.
As a take on your interpretation of creativity: I would argue that the act of forming new and valuable propositions is not related to emotion or aesthetics per se.
Aesthetic theory observes only a very narrow subset of creative processes. And even there, our transition from modernism into the uncertainty of the post-modernist world defies any sound definition of "aesthetic criteria". Yet we perceive aesthetic human creativity all the time.
In a similar vein is the application of generative machine learning that spurs debate about computational aesthetics today. Nothing better demonstrates the incapability of modern ML to form real creativity than the imitative nature of adversarial networks spitting out (quite beautiful) permutations of the simplified data structures underlying the body of Bach's compositions.
Now we could start on the assumed role of complex neurotransmitters in the creative process of the brain and the trivial way reinforcement learning rewards artificial agents, but that would push the scope of this comment.
>Now we could start on the assumed role of complex neurotransmitters in the creative process of the brain and the trivial way reinforcement learning rewards artificial agents, but that would push the scope of this comment.
You can't really separate emotion and aesthetics from the neurotransmitters helping to implement them! They're considerably more complex than anyone usually gives credit for.
Likewise, to form a valuable proposition, you need a sense of value, which is rooted in the same neurological functionality that creates emotion and aesthetics.
Wow. I want to thank you for engaging on that point! The "Hume's guillotine" dichotomization between "cognitive" processing and "affective" processing tends to be the thing our lab receives the most pushback on.
> The researchers actually busy solving the hard problems of vision, natural language understanding, and common sense, do not have time to write books about how AI is not yet general.
I've come to terms with the hype. There are still researchers doing the hard theoretical work, and they will still be toiling away after the next economic downturn. We can all choose every day whether to find fulfillment through seeking attention from other people, money, or satisfying our curiosity to solve problems.
> Nobody from the research community ever claimed that [AGI], nobody came forward to claim they've solved these decades-old problems. It is people selling books railing against the popular reporting of AI. Boring, self-serving, and predictable, and you do not need to fit a curve to see that.
Hear hear! That said, this is a good article by a respected researcher. Here's what LeCun had to say about it,
> ...In general, I think a lot of people who see the field from the outside criticize the current state of affairs without knowing that people in the field actively work on fixing the very aspects they criticize.
> That includes causality, learning from unlabeled data, reasoning, memory, etc. [1]
This is currently true for almost all human endeavors. We're beset with PR people deliberately promoting misconceptions and outright lies.
A recent article about "beewashing" is another good example of diverting attention from real issues by oversimplifying for the purpose of corporate PR.
We are constantly bombarded by noise and lies so we won't be able to make sound and rational decisions about anything.
In recent years this has transformed from a side effect of bottom-line mentality to outright weaponization by powerful entities, political and corporate.
Everything is a lie, until you're tautological. "Machine learning" itself seems a bit of a misnomer. High-dimensional curve fitting is a good description, imho.
"The researchers actually busy solving the hard problems of vision, natural language understanding, and common sense, do not have time to write books about how AI is not yet general."
Stuart Russell recently published a non technical book on AI. I really hope tech journalists take note
Honest question: aren't the consequences of "real" researchers keeping their heads down quite severe? Won't important policy decisions, both public and private, and billions in funding be misdirected for years when they could be better put elsewhere? Sure, the "real" researchers will have easier access to funding, which is perhaps a key motivating factor not to push back on the hype, but isn't there a large opportunity cost to allowing hype and/or bullshit to go unchecked because "they don't have the time to write a book"?
The consequences of technical subject matter experts dabbling in policy are often pretty bad.
You can get involved in this, but it takes real work (i.e. time taken away from your research area) and an honest understanding that the policy issues are their own deep specialty, and that you are likely to be quite naive about them going in.
On the plus side, it makes it fairly easy to ask cocktail-party-caliber questions and quickly suss out whether your conversation partner knows what the hell they're talking about.
You haven't proven this statement. It's possible that within your own brain there is nothing more than a rudimentary curve-fitting algorithm that allowed you to see this pattern.
The article is less awful than the title. In short, the thesis is that ML seems only able to learn associations, rather than stronger, causal models.
It should be fairly obvious that ‘curve fitting’ is a misleading category—these models are clearly learning highly meaningful latent spaces that no prior approaches ever did. But I would agree that the actual high-level ability to make causal inferences seems to be lacking.
Where I disagree with Pearl is simply with the idea that these stronger models won't emerge through future research. It's too early to say this, after barely a decade of large-scale AI research that has been undergoing continual rapid progress. Greater generality and more powerful models are some of the most well-established goals of the field.
> But I would agree that the actual high-level ability to make causal inferences seems to be lacking.
That should be expected. Humans also lack the ability to make causal inference. The vast majority of us have extremely primitive causal reasoning abilities and get even simple causality wrong. Reasoning about causality in complex systems still isn't a solved problem for humans, and we have entire fields within philosophy trying to make sense of the hundreds of paradoxes within causal reasoning. It's not a solved problem, and it's not clear that it ever will be.
I am glad to read this, because it really sounds weird to me that so many people blame AI for not being able to do causal reasoning, while we humans don't do what I would call "strict" causal reasoning either. I feel like we just learned a small subset of causal rules, just like AI algorithms do, by having experienced a lot of events that followed those rules.
I don't quite understand why our ability to teach machines causal reasoning should hinge on the lowest common denominator of human ability. If I interpreted your point correctly, then calculators are an obvious counterexample, and a table of common human arithmetic mistakes doesn't have any bearing on our ability to program calculators.
In terms of causal reasoning for computers, it's more of a "common sense" problem than a reasoning one. In nice, closed systems we can do symbolic computation and automated theorem proving without mistakes. The only reason this doesn't work in the real world is the lack of axioms and consistency.
This is interesting, as this appears to be similar to the difference between Daniel Kahneman's System 1 and System 2 modes of human thought. ML is perhaps beginning to approach our subconscious, associative intelligence. This shows up in its excellence at things like image processing, which we do instantly and automatically. Perhaps growing the tech equivalent of a prefrontal cortex is what's going to be hard, or will require a different approach.
We're getting there. I have the feeling more and more people subscribe to the hunch that combining symbolic (GOFAI) with subsymbolic (PDP/NN) techniques is the way forward. Research is going on; an example is the Neuro-Symbolic Concept Learner described in this [1] paper.
Yeah, that seems to be part of the reason that AlphaZero, with its System 2-like Monte Carlo tree search, seems so much smarter than AlphaStar, which plays like a brilliant somnambulist.
Maybe if we had some way of abstracting out the things a machine learning system implicitly learns so we could deal with them in a more classical AI-like way?
I agree with your comment about System 2 like reasoning not being common right now. I am not an expert in the field but the closest thing I have seen to learned planning is: https://arxiv.org/pdf/1911.08265.pdf
This article conflates two separate, very different issues into one:
* Issue #1: There is a tremendous amount of hype, noise, and snake oil surrounding the moniker "AI." Pretty much everyone agrees with this statement. (And anyone who doesn't agree with it... is probably selling snake oil.)
* Issue #2: Is intelligence just a form of "curve fitting," i.e., is it just finding solutions to very complicated, high-dimensional problems with more and more computation via search and self-play? (Note that DL via SGD is a form of learning via search, RL via state-action-reward mechanisms is a form of learning via search, and multi-model/multi-agent DL/RL are forms of learning via search with self-play.) There is sharp disagreement on the answer to this question #2. Is that really all there is to intelligence?
The OP believes the answer to #2 is no: intelligence ought to be more than "curve fitting" via search and self-play.
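For anyone who hasn't seen "learning via search" spelled out, here is a minimal sketch of my own devising (plain numpy, nothing from the article): stochastic gradient descent wandering through the parameter space of a cubic until it fits some noisy data. A caricature of DL, but the mechanics are the same.

```python
# Toy "learning as search": SGD walks through parameter space, following the
# gradient of squared error, until the fitted curve matches the observations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=200)    # the "world" we observe

w = np.zeros(4)                                   # coefficients of a cubic
lr = 0.1
for step in range(5_000):
    i = rng.integers(0, len(x), 32)               # random minibatch
    X = np.stack([x[i] ** k for k in range(4)], axis=1)
    err = X @ w - y[i]
    w -= lr * X.T @ err / len(i)                  # one small step of search

print("fitted cubic coefficients:", w)
```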
If we're adding weight to Rich Sutton with that last sentence, can we also mention that Pearl is a highly decorated and distinguished researcher, author, and professor in the field? Because it seems odd to call him "OP" when his work has shaped the field of machine learning as much as it has to this very day.
Depending on how complex a curve, and how many dimensions it lives in, couldn't you argue that this is essentially what our brains do as well? Not that I am defending the massive hype field that is ML today, but curve fitting is a form of intelligence.
If someone wants to go as far as to claim that any computation is just curve fitting, then your statement is equivalent to the Church–Turing thesis. There are no formal arguments against the Church–Turing thesis.
From that perspective, intelligence is indeed just curve fitting.
"We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power."
He argues that we should move towards evaluating "Intelligence as skill-acquisition efficiency".
I agree with him. We should move away from benchmarks that involve training and evaluating algorithms on the same datasets. This is indeed more or less "curve fitting". Instead, we should focus on benchmarking how efficient algorithms are at solving tasks involving completely new datasets, preferably ones unknown even to the developers. For example, the language model GPT-2 was trained to predict the next word given some previous words. After that training, GPT-2 was able to do unrelated things like question answering, translation, etc. GPT-2 does these things very badly, of course, and requires GBs of training data, but it is a step towards skill-acquisition efficiency and away from what everyone sees as curve fitting.
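If you want to poke at this yourself, here's a rough sketch of the kind of zero-shot probing I mean (my own setup, assuming the Hugging Face transformers library is installed; it downloads the public gpt2 weights):

```python
# Using a pure next-word predictor for a task it was never explicitly trained on.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Q: What is the capital of France?\nA:"
print(generator(prompt, max_new_tokens=10)[0]["generated_text"])
# Sometimes it answers sensibly, often it rambles -- which is exactly the
# "doing it very badly" part, but the capability was never an explicit target.
```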
We should benchmark models so that we select for those that are able to solve tasks they were not built to solve.
Quantum computers efficiently solving problems outside BPP would contradict the extended Church-Turing thesis. Quantum computers can definitely be simulated by classical ones (with an exponential slowdown), so the plain CTT, which says nothing about performance, is not threatened by them.
The lifetime of the universe and the physical extent of possible classical computers limit "all the computing that can be done" quite strongly. Especially with regard to intelligence, where the physical and temporal extent of brains is clear.
What our brains do is curve fitting plus experiments. Causality is learned through experiments, and as children we do a lot of experiments (moving our limbs, etc.). With observations alone you can only have correlations. For example, you can correlate smoke with fire, but only through experiment can you learn that it is the fire that causes the smoke and not the other way around.
This can tease out transfer entropy but still isn't identifying causal factors.
A common issue you find would be confounding. Then, because you haven't identified the latent connection, you may try to increase variable A to have an effect on output B, and be disappointed.
This is basically the main point of Judea Pearl's Book of Why: that E(Y|X) != E(Y|do(X)), where do(X) means we intervene and set X ourselves.
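Here's a quick simulation of my own (just numpy, with made-up numbers, not Pearl's) showing the gap between conditioning and intervening when a hidden confounder is involved:

```python
# Z confounds X and Y, and Y does not depend on X at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                      # hidden common cause
x = z + 0.1 * rng.normal(size=n)            # X listens to Z
y = z + 0.1 * rng.normal(size=n)            # Y also listens to Z, never to X

# Observational: seeing a high X suggests a high Y (via the confounder).
print("E[Y | X > 1]    =", y[x > 1].mean())     # clearly positive

# Interventional: do(X = 2) cuts the arrow from Z into X; Y is unaffected.
y_do = z + 0.1 * rng.normal(size=n)             # Y's mechanism never used X
print("E[Y | do(X=2)]  =", y_do.mean())         # roughly 0
```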
I would say fundamentally no -- and Douglas Hofstadter is probably the foremost spokesperson against intelligence being curve fitting.
General intelligence is primarily about developing useful conceptual categories (not mapping to existing ones) and drawing cause-and-effect inferences that assist us in achieving goals.
Curve fitting is just another name for pattern recognition, mapping to previously defined categories. I would personally argue there's no intelligence there whatsoever. Intelligence can't exist without a foundation of pattern recognition, but it isn't the same thing.
Intelligence is fundamentally goal-directed and able to reason, while curve-fitting is fundamentally not.
(There is also unsupervised learning in deep learning, which doesn't use previously defined categories, but since it is similarly non-goal-directed, I would still argue that this is merely dimension reduction as opposed to intelligence -- useful for sure, but not the same.)
We know a great deal more about biological general intelligence than we do about AGI. Many animals create their own goals and work to achieve them. We learn what works and what does not, and adapt our strategies to compensate. Humans do this (obviously) but a lot of other intelligent animals can do it as well.
One of my favourite examples is the New Caledonian crows who have learned to use traffic to crack nuts [1]. Here, a crow had no pre-defined objective function apart from "eat food to stay alive" and accomplished something remarkable. It found a food source it had never had access to before, it developed a complex model of its urban environment, it combined its knowledge of the problem (the hard nut shell) with its knowledge of its environment (cars crush small objects), and it constructed a sophisticated strategy for using cars to crack open the nuts and fetching the contents when the traffic lights indicated it was safe to do so.
We don't know everything, but we definitely know something about current ML, and we can indeed speculate and make some educated guesses about AGI. For a random comment on the internet, that one contained some interesting information. Of course, an actual implementation of AGI might be anything, but the comment was broader than that.
> General intelligence is primarily about developing useful conceptual categories (not mapping to existing ones) …
There are some algorithms, like https://en.wikipedia.org/wiki/K-means_clustering, that take a set of data and try to create categories to better classify it. There are many such algorithms, and their results don't always agree. But this is an open-ended task, like the classification of biological species in animals. (Plants are more difficult, and bacteria even more so.)
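For example, here's a tiny sketch of my own using scikit-learn: the algorithm invents its own categories, and a different choice of k yields a different, equally defensible taxonomy.

```python
# k-means on unlabeled points: no categories are given, they are created.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

points, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in (3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    print(k, "clusters, sizes:", np.bincount(labels))
```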
Computer Vision and Pattern Recognition. Was a good name, still will be. Great progress has been made, but it's not AGI as anyone who goes to CVPR will tell you.
It's not like there's a huge difference. Causal inference is a special case of curve-fitting, where one chooses what variables should enter into the fit according to a causal graph.
First, no AI that I know of has at its disposal a full blown model of the world it operates in, whereas most human brains do, and even if the model is imperfect, it is capable of producing fairly accurate simulations (what-if scenarios).
Second, deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting (that is, within the confines of the model) and are therefore far from capable of doing what humans do, and will remain so limited for a long time to come.
AGI will require the curve-fitting of deep learning, a general model of the world, the causal inference capabilities of something like AlphaGo, but in a general setting, not the super limited world AlphaGo operates in.
So no, AGI will require much more than just curve-fitting abilities.
> no AI that I know of has at its disposal a full blown model of the world it operates in
This is a field called Model-based Reinforcement learning, and it's quite advanced already -- there are indeed models that have an internal state reflecting the world state.
> deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting
This is also addressed by recent models, somewhat. Once you have an abstract world model, searching for a high reward can be just a matter of running Markovian simulations on it using high-reward heuristics (given by a network, of course), like AlphaGo does. This line is also very active right now; one example is the recent MuZero.
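As a caricature of the idea (mine, and far simpler than MuZero's learned-model tree search; `model_step` here is a hypothetical stand-in for a learned dynamics-plus-reward model), planning inside a model can be as crude as rolling candidate action sequences forward and keeping the best one:

```python
# Random-shooting planning inside a (stand-in) learned model, MPC-style.
import numpy as np

rng = np.random.default_rng(0)

def model_step(state, action):
    """Stand-in for a learned dynamics + reward model (toy 1-D world)."""
    next_state = state + action
    reward = -abs(next_state - 10.0)       # goal: get close to 10
    return next_state, reward

def plan(state, horizon=5, candidates=200):
    best_seq, best_return = None, -np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, horizon)  # a candidate action sequence
        s, ret = state, 0.0
        for a in seq:
            s, r = model_step(s, a)
            ret += r
        if ret > best_return:
            best_seq, best_return = seq, ret
    return best_seq[0]                     # execute only the first action

print("chosen first action from state 0:", plan(0.0))
```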
Inference at its core really isn't much more than an artful curve fitting (or an artful model search if you like), and it's one of the building blocks of intelligence.
It's all pretty meaningless semantics and guesswork.
Curve fitting means adaptive computation in networks of fairly simple units that allow for fairly general computation, i.e. traversing program space to find a good solution. Equivalently, intelligence is about evolving/searching for a program that solves a wide array of tasks. It is about finding programs that map from sensory space to the space of action sequences, maximizing reward.
But you need the right prior structure such that learning and producing action sequences is efficient or even feasible/reachable. You can see any additional program structure that aids e.g. generation and recall of memories and planning (production of output targeted at solving a goal) as prior structure that limits and defines the searched program space. You can even regard a planning module as part of the curve fitting as it simply concerns the last step of producing the output.
Therefore, intelligence is "curve fitting".
So the actual question is: How much additional structure over just a large number of simple repeated units is necessary? Nobody knows. Possibly not much. Possibly quite a bit.
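As a toy version of "traversing program space" (my own illustration, with arbitrarily chosen primitives, not anyone's actual system), you can get surprisingly far with blind random search over compositions of a few operations:

```python
# Random search over tiny programs that map inputs to desired outputs.
import random

PRIMS = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]
NAMES = ["+1", "*2", "-3", "^2"]

inputs = [0, 1, 2, 3, 4]
target = [x * 2 + 1 for x in inputs]          # the rule, unknown to the searcher

random.seed(0)
best, best_err = None, float("inf")
for _ in range(10_000):
    prog = [random.randrange(len(PRIMS)) for _ in range(random.randint(1, 3))]
    outs = inputs
    for step in prog:
        outs = [PRIMS[step](v) for v in outs]
    err = sum(abs(o - t) for o, t in zip(outs, target))
    if err < best_err:
        best, best_err = prog, err

print("best program:", [NAMES[i] for i in best], "error:", best_err)
```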
But all this is curve fitting in a more general sense - fitting the curve of life, gene reproduction. So it is still curve fitting; it is just that a potential AGI is certainly not in the hypothesis space of current deep learning models, and those cannot reach AGI by curve fitting.
"casual inference in a general setting" is just your brain running an input through it's existing thought and decision processes with a low threshold for a passing answer.
So an ML model running an input through a collection of other models to see if it gets a reasonable answer.
I am not accusing you of doing this in a bad way, but it somewhat begs the question to argue that all intelligence is curve fitting. If we look at it that way, then we don't know the characteristics of the hyperspace on which it can be defined as "just" curve fitting, which is an important prerequisite for being able to carefully say that it is "just" curve fitting. There almost certainly is such a hyperspace, just by virtue of the fact that we're leaving ourselves so many degrees of freedom in these unspecified speculations that they can't help but cover everything we ever do. But we can't be confident that, if we did know everything involved, this would be anywhere near the best representation, which is the only standard we can have for deciding whether or not something is "really" curve fitting. It isn't that hard to imagine that, while an n-dimensional space can be defined upon which our intelligence is "curve fitting", there is some better computational paradigm that would both describe it with fewer free parameters (or, in this case, probably a big-O different number of free parameters) and be easier to work with computationally, in which case we would have solid ground to stand on to say that, no, it isn't really just "curve fitting".
Or to bring it down to Earth in another way, consider just the act of writing a program in the modern world. If you work really, really hard, you can define a space in which our act of programming is just "curve fitting"... but it's far from obvious that this is even remotely a sensible way to look at the world. (See "differentiable programming" for the best counterpoint I know of: https://en.wikipedia.org/wiki/Differentiable_programming but it's a very small niche right now.) When I'm debugging a program there is almost never any utility at all in trying to think about it as a "curve" and trying to get it closer to the "correct" curve. A Turing-complete-complex space can be described as curves, but those curves are just awfully complicated and I don't see how that would help.
My personal suspicion is that while our cognition involves rather less of this "Turing complete" thinking than we'd like to fancy ourselves using, we do irreducibly use elements of it [1], and as long as our best AI models are incapable of representing Turing-complete computations there is simply no chance of them being the answer to true human-scale cognition. (We do have models that can do it, e.g., evolutionary computation, but we lack any sensible idea of how to "update" such models like a neural net. Neural nets themselves in the simplest case aren't Turing complete, and none of the hybrid models seem to get there to me either, though I welcome correction on that point.)
[1]: Evidence: I don't think we could program Turing-complete machines if we were incapable of thinking that way ourselves. We aren't necessarily great at it, our engineering techniques are deeply characterized by the fact we can't really manipulate very many things at once in this manner and we have no choice but to break things up into very small modules and for us to combine them in a way that means that at any given time we have only a very small number of things to keep track of locally, but we are still doing non-trivially more than zero of the Turing-complete style of thinking. It isn't a hard guess from there to think that even if we aren't all that great at a full mathematical manifestation of this style of thinking, we may indeed be doing something somewhere between what our current neural nets do and this full TC-style thinking at a larger scale, and the inability to capture this in our neural nets is a currently-fatal-flaw.
It’s just fancy pattern matching. Multi-stage heuristics. Where it fails is data limitations. This is different from someone coming up with new insights based on different combinations of data inputs and better yet entirely new data/metrics.
That is basically what Chalmers argues in "Facing Up to the Problem of Consciousness." Basically, by the universal approximation theorem, it is possible to find a neural network that acts externally precisely as you would, up to an arbitrarily small epsilon. However, one wonders whether such a thing would be conscious and, if so, where the consciousness sits: in the matrix multiplication or in the graphics card.
So, you can argue something like: our brains are nothing but curve-fitting machines with enough parameters. But then you are probably forced to argue that consciousness is very closely related to computation, to the point where a coin flip, or a hello-world program, has some sliver of consciousness.
There are of course two possible ways around that. Either one can argue for p-zombies, that is, intelligent but not conscious beings, which then seems to require a supernatural explanation for consciousness. Or you can argue that the brain is different, and that this gives rise to consciousness, and to general intelligence, which is the explanation that at least corresponds most closely to my subjective experience (but that is precisely what a soulless machine would write, isn't it?)
> it is possible to find a neural network that acts externally precisely as you would, up to an arbitrarily small epsilon. However, one wonders whether such a thing would be conscious and, if so, where the consciousness sits: in the matrix multiplication or in the graphics card.
I don't like very much this reverent way of thinking about consciousness, as if it is from another world, or a different essence.
I believe consciousness is the ability of the agent to adapt to the environment in order to protect itself and maximise rewards. It's not just in the matrix multiplication, but in the embodiment, the environment-agent loop. Consciousness is not something that transcends the world and matrices, it's just a power to adapt and survive.
And it feels like something because that feeling has a survival utility, so the agent has a whole neural network to model future possible rewards and actions, which impacts behaviour and outcomes.
> then you are probably forced to argue that consciousness is very closely related to computation, to the point where a coin flip, or a hello-world program, has some sliver of consciousness.
Why not? Those things have zero-consciousness that is conscious only of itself and which correctly reflects their lack of self-model.
The argument is: if our brains are just curve-fitting machines, then we can dial in the complexity of the computation. Start with a single parameter, then two parameters, and so on until we are at the complexity of the brain. By that procedure we can ask, after each added parameter, whether the machine is now conscious, and I strongly doubt that there is a good answer.
The way we ascribe consciousness to entities other than ourselves is based on similarity to ourselves.
Obviously other living humans have the highest similarity, so they are automatically deemed conscious. Next are other primates, followed by other domesticated mammals, and other animals.
Furthest from the status of conscious are creatures we see as automata like dung beetles rolling their food, or jellyfish ... jellyfishing.
Presumably we'd apply a similar process to hypothetical AGIs.
That's an ill-defined question. You can do the same thing with far less vague concepts than consciousness, and I could even ask you the same question about a brain and adding neurons.
https://en.wikipedia.org/wiki/Sorites_paradox
My exposure to these subjects has been limited, but the thing that I liked about Dan Dennett's rebuttal to Chalmers is that it does away with the special-ness of consciousness.
It is more likely that consciousness is just a property of brains inside bodies. The interesting question to me is the level of complexity of brain and body required to produce something like what we experience and what is it like in other arrangements of brains and bodies.
I also don't know that intelligence is the great thing that we think it is. It's an adaptation. It's an adaptation that lots of other organisms survive just fine without.
Actually I like that solution, but try holding it to any degree of scientific standard. You need a good working definition of consciousness, then you need some experimental procedure, etc. etc.
That depends on which precise definition of consciousness you're talking about, which is a flame-war of a research field. But there is a consensus that consciousness is not equal to intelligence.
There's no need for a supernatural explanation of consciousness.
We are all p-zombies. Problem solved.
The longer we refuse to acknowledge that consciousness is nothing special, the longer it will take to tackle this topic. The only reason we cling to the idea that our minds are somehow special compared to other animals of various complexity is because we refuse to acknowledge that consciousness might exist in something we can't communicate with and that consciousness is a sliding scale rather than a binary property. Ascribing consciousness only to ourselves is hubris.
If one spends a bit of time observing humans, they will inevitably realise that some humans are more 'conscious' than others also.
tl;dr: we're all p-zombies. The fact that we think that each one of us isn't individually doesn't detract from that.
Just as natural sciences left less and less hiding places for god to exist, ML is leaving less and less hiding places for this borderline magical version of human-unique consciousness to exist. Answering this question in any more detail requires a much more rigorous definition of consciousness which is a big can of worms in itself.
You make a strong claim without anything to back it up. I don’t understand why it’s become trendy to deny that consciousness exists; to me it’s isomorphic to saying “We don’t actually exist at all. Prove me wrong.” It’s a vacuous statement, meant to sound evocative, but difficult to respond to in any meaningful way.
If I made a list of everything in order of how certain I am that the item on the list exists, consciousness would be at the top by far. Everything else could just be a nice illusion.
You falsely imply that the parent comment attempts "to deny that consciousness exists". What the parent comment actually says is "consciousness is nothing special" and "consciousness is a sliding scale rather than a binary property". These are different from non-existence.
Claiming that some mysterious and hard to define property that we can't measure even in principle "exists" in some meaningful way strikes me as the stronger claim than the skeptical take does.
Why do you think the burden of proof should be inverted? The mere fact that most humans intuitively feel "something" doesn't count for much of anything, especially once you stipulate that p-zombies would vote the same way.
I couldn't agree more. It also just seems like a continuation of the old story in science where the things that we think are special or magical just aren't.
I actually started reading the Book of Why and I recommend it to the HN crowd. Pearl does a really good job of going through the history of causality, including the quite interesting story behind the now known to be wrong (or at least incomplete) claim that "correlation is not causality".
He then places causality on a 3-rung scale. The bottom rung is association in data, which is where he says AI is stuck. Then there's intervention, "what if I do this thing?", and then there's counterfactuals, "what if I had done some other thing?"
He then makes a case for what intelligence actually is, and unsurprisingly it's getting up those three rungs. The method revolves around directed graphs, which have certain unintuitive properties. For instance, if A and B can both cause C, then knowing that A is unlikely makes B more likely, given that C happened. There are a few other stock situations in various graphs that he walks through as well.
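That "explaining away" effect is easy to verify with a quick Monte Carlo simulation (my own toy numbers, not Pearl's):

```python
# A and B are independent causes of C; observing C, then learning about A,
# changes our belief about B even though A and B never interact directly.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.random(n) < 0.1
b = rng.random(n) < 0.1
c = a | b                                   # either cause is enough

print("P(B)            =", b.mean())
print("P(B | C)        =", b[c].mean())         # C happened: B became more likely
print("P(B | C, A)     =", b[c & a].mean())     # A explains C away: back to the prior
print("P(B | C, not A) =", b[c & ~a].mean())    # A ruled out: B must have caused C
```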
In the end the point seems to be that we could have a causal machine if we'd spend some more time on it. It would take data and try out some potential graphs, and some of the graphs would be ruled out by the data. And then some algorithm would tell you things like whether a randomized trial is necessary, or whether the observational data already answers your question (yes, that is another revelation).
I think there's also an argument that this is how people actually think, which makes sense because the graphs are not terribly large and they need to fit in your meat hardware. I haven't finished it but I would guess that you could take it in another direction and say this is why some animals sorta have intelligence in that they learn patterns, but they don't know the higher rungs.
Really interesting ideas, and at least clarifies what we mean by causality.
Thanks for the link to the J. Pearl interview, it's very interesting. There are many counterpoints that are not examined, though:
- There's nothing wrong with curve fitting per se. NNs fit hundreds of curves in parallel and many of them may contain cues about the causal structure of the data.
- Deep learning has become part of reinforcement learning, which is trying to learn a causal structure. The primary determinant of causality is the temporal order of cause and effect. The question is: do humans use other hints apart from time for causal inference?
- There is also not much evidence from neuroscience that wet brains are causality-inference machines; most of the evidence is that they are decision-making machines. Humans are also pretty bad at inferring causality when it's not obvious, but we're pretty good at associations/patterns.
- Reasoning (conscious) is often considered to act at a meta-level, which observes the internal action of the human brain itself and vocalizes what it sees. What the brain sees at this level is not the external world but the representation, and we don't have evidence there is a model of the world in there (except perhaps temporary maps of space that exist in the hippocampus). Assuming this is true, it's not impossible that current methods can be extended so that self-explaining an NN ends up being causal reasoning.
The more important question is whether any of these methods can lead to the remarkable ability of brains to generate extremely intricate and improbable causal chains. Can we get a CNN to start from a photo of Maxwell's equations and output the theory of relativity? Who knows.
I think intelligence has for some time already been boiled down to curve fitting, even for humans. Our current accepted definition of intelligence in schools is to get a score that is higher than the average to be considered sufficiently intelligent to proceed to the next grade.
I feel anything that we develop for AI would fundamentally always be inspired by our own experiences and hence curve fitting is something we understand to be the best metric to optimize for.
> I think intelligence has for some time already been boiled down to curve fitting, even for humans.
No, the book actually makes an important point about this.
For instance, let's take a counterfactual. How do you know what would have happened if Barcelona had played Lionel Messi in goal over the latest season?
They've never done this, there are no data points for you to fit. The situation almost never arises that an outfield player plays in goal, and when they do it is always in a situation where someone's been sent off or injured, which is also rare.
And yet you and I and everyone else who can think knows this would result in Barcelona having a much worse season.
Just to be clear, it's not only because you'd be taking a top player out of offense where he is worth a lot, which you can surely fit some curves to show.
We can all guess that he'll be a worse keeper than Ter Stegen, but what curve would you fit that shows this? There's no data about Messi in goal.
Pearl does give a way to work it out via counterfactual analysis though.
If we say that goalkeeping requires certain skills, and being an outfielder also requires certain skills, these skills being graphed on a chart, it's simple to see that Messi would not perform well as a goalkeeper, and this is still curve fitting. If it turns out that the goalkeeper and outfielder have a lot of overlapping skills, we might fall back and refer to a probability curve of things that normally happen; "players require practice to excel in their position" would seem to fall on that curve.
The fact is that everything in the world can be reduced down to curves. It's just a matter of your perspective.
> If we say that goalkeeping requires certain skills... If it turns out that the goalkeeper and outfielder have a lot of overlapping skills...
That's kinda the problem here. Once you have a model, fitting curves is fine. But you need the structure from somewhere.
> players require practice to excel in their position
The problem with this is there's no data. Nobody gets to play a position they haven't practiced for. Yet you still somehow came to the right conclusion.
Anyway Pearl is much better at explaining this than I am.
I agree. In our current neoliberal era, the ones in power have already been replacing various systems in our society with algorithms and computers, and formulating everything into an optimization problem. As a result of this, the reverse has happened: the systems are now shaping humanity into something that could be optimized.
We have been trying to manage governments, public services, and education the same way as corporations, creating numerical targets for institutions to optimize for. Education itself was reformulated as an optimization problem about how to create more jobs. Public services like healthcare were privatized and became targets for profit optimization. Half of the stock market is controlled by high-frequency trading supercomputers, which will do virtually anything to gain an upper hand in profits. Those methods were all inherited from the management styles of corporations that began in the neoliberal era.

As fundamental parts of our society are replaced by those systems, society now curve-fits the systems rather than the systems curve-fitting society. We hyperoptimize ourselves to fit this neoliberal landscape; our time is treated as something to be optimized between work, socializing, exercising, and self-improvement, with no space for "actual free time" of our own. We go to college not to learn but to pass exams and get ourselves a good job that can sustain us. And the faults of our systems are now blamed on individuals: "You didn't optimize towards the current trends of the job market; it's your fault." And from the view of the corporations, we are just AI agents waiting to be optimized for cash, and we're becoming exactly that by fitting our bodies and minds to social media that tries to maximize engagement and ad revenue no matter the real societal cost.
Now, the real problem of AI compared to the algorithms of the past is its data-driven nature: it can only learn from what data you give it for training. We can only accumulate data from the past and never from the future, so the AI systems will just keep repeating the past, no matter what unseen change will come. We will lose the ability to imagine new political, economic alternatives, we will just be feeding ourselves the status quo, and societal advancement will stagnate at the hands of automated systems. The cancellation of the future: this is what I'm ultimately afraid of.
We certainly tend to fall into scripts and follow those scripts for many things in life. GPT-2 demonstrated how much online prose is glorified mad libbing IMO and yet it has no sense of a consistent long-term context nor do its articles make any sort of significant point. Reading its output is impressive but it reminds me of what I hear from someone with dementia.
That said, I think if you made its context recurrent in some way between responses and queries, you could probably build a really interesting chatbot that could ramble about just about anything and nothing, but do so in a sufficiently coherent way to scare up venture capital. Just a guess, but it is my guess.
Curve fitting is only adapted to analysing continuous data and properties.
Sparse data such as actual semantics of natural languages or an object oriented database are a mismatch for current machine learning.
Some really insightful discussion of the possibilities and limits of ML can be found on Lex Fridman's AI podcast. Especially good were the interviews with Yann LeCun, Jeff Hawkins, Elon Musk and Francois Chollet. One memorable quote from LeCun: "There is no intelligence without learning." Here's a link to the LeCun interview, though many others are excellent. https://lexfridman.com/yann-lecun/
The interviews with researchers and engineers in the field seem like they'd be interesting, but I don't see it for Musk. What special insights does he offer into AI?
Tesla is deeply invested in self driving technology. He discussed the importance of data in the project, and he believes that Tesla is collecting 20x more data than any other player. He also believes that driverless will be more than 100x safer than humans fairly soon.
I suppose there are lots of papers and results that fall into the "only curve fitting" bucket, but there are many exciting results in recent years that have been curve fitting + X, where ANNs formed part of a system with other components. While none of these approach the level of something you could have a meaningful conversation with (which was an ambitious bar introduced in the article) some do kind of consider multiple actions, work through consequences of each, and pick a single action. This looks a lot like a crude form of reasoning about causality and hypotheticals, though not quite along the lines Pearl would like to see. But they typically do that in the context of a specific task, with an enumerable set of available actions.
So, one view of AI is that it is machine learning. Another view is that it consists of automated processes. So, if expert systems count as AI, or if production rule systems count as AI, then pretty much any handcrafted if-then statement counts as AI. And so too might any automated process.
This has the advantage of recognizing how human intelligence can be automated and aggregated into system processes. And the disadvantage that the boundaries of the concept start exploding.
I like cybernetics for providing a clear model of what constitutes intelligence -- a feedback loop between perception and action that achieves goals or lowers local entropy.
And, cybernetic systems can be artificial, natural or a mix
I too hold the view that artificial intelligence is not reliant on computers. With sufficiently complex business logic, it is impossible for a single person -- even the CEO -- to have an end-to-end view (much less a significant influence on it). The emergent behavior of large companies fits a definition of an AI today; human "decision makers" are increasingly just reviewing and approving the output of algorithms.
I think the end state for large corporations will be to automate away so much of the human input that they end up looking like what we think of as "AI" in a broader cultural context. But we already live in a world controlled by AI, and we have for 3+ decades.
Yeah, I mean, it's a slippery slope, but one worth sliding down. Is autopilot AI? If so, it was invented in 1914. Are speed governors AI? That was James Watt. Are if-then statements AI? That's the basis of human laws, dating back thousands of years.
Are corporations AI, or rather superintelligences, because they are groups of people bound by bylaws? If so, the whole damn world is AI through and through and machine learning is really just the tip of the iceberg.
Yeah, I guess that's my point. Humanity has been guided by largely autonomous systems for much of our existence, but only recently have those systems become sufficiently complex as to remove the possibility of human intervention in some processes due to the complexity involved -- nobody can intervene if nobody can understand the whole story.
I think this is why late-stage capitalism feels so bad to live under: these artificial systems (corporations), by design, stand in direct opposition to our humanity. They exist to prevent our humanity from getting in the way of their perpetual growth by abstracting any ethical problems away so that no human is faced with an ethical dilemma. Which explains why the world so often feels like a dystopian nightmare.
The difference is that you have metacognition. You can think about your fitting. It's possible to say that such a process is just another set of parameters, but their impact is retroactive. At that point the metaphor of curve fitting might break down.
When I was working on recommenders at Amazon, I didn't really perceive this as modeling the mind of some sort of oracle.
Instead I saw it as a change of basis from the space of previous purchases to the space of recent purchases.
That approach led to an entire AI framework, and apparently it still makes Amazon quite a bit of money annually, if Jeff Wilke's 2019 re:MARS speech is to be believed.
I'd love to work on something more ambitious like AlphaGo or AlphaFold but those require tremendous resources and I'm really focused on bang for buck. But even then I'd see it as the marriage of classical search with modeling the probability of victory.
If someone says that AGI is almost upon us, I pretty much flip the bozo bit on them, no matter how prestigious or fancy they may be otherwise.
To me, this debate is akin to wondering whether the essence of computing resides in the lower-level, Turing-like operations of a microprocessor or rather in the higher-level constructs and abstractions that we're able to build way "above" them. Whatever it is, intelligence, whether implemented artificially or embodied in the substrate of a living being, is likely built on a ladder of subsystems interoperating at different levels of abstraction.
I tend to avoid using the term AI at all and I mostly agree with the article, but I still don't see this as any kind of drawback. Computers are great at curve-fitting and there are many good use-cases for these tasks. We just need to be clear that we are very far from the sci-fi vision of a single computer which can understand, learn and know everything. Which is absolutely fine.
I use the term “AI” but couch it in the context that we are probably ten to a hundred years away from general artificial intelligence. I have a lower bar for my definition of AGI: something that can be narrow, but that has a world-view model of its activities and environment which changes with experience, can explain what it is doing in its narrow field of expertise, and can creatively develop new techniques for solving problems in that domain.
I have been paid to work in the field of AI since 1982 so I have experienced AI winters. I almost hope we have another to act sort-of like simulated annealing to get us out of the highly effective local minimum of deep learning. I have been paid to work with deep learning for about the last five years (except I retired six months ago) and I love it, a big fan, but it won’t get us to where I want to be. Perhaps hybrid symbolic AI and deep learning? I don’t know.
Some DNNs compute posterior probabilities for classification. You could feed those posteriors into a Bayesian network for decision making, or you could train a NN to make those decisions directly. Still, there may be times when a human-derived controller with AI as sensory input is better than an AI controller itself.
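A toy sketch of the first option (made-up numbers, not a real Bayesian network; just classifier posteriors feeding a hand-written expected-utility decision):

    import numpy as np

    posteriors = np.array([0.7, 0.2, 0.1])   # e.g. softmax output over {clear, obstacle, unknown}
    utilities = np.array([                   # utilities[action, class]; hypothetical values
        [ 1.0, -10.0, -2.0],                 # action 0: keep going
        [-0.1,   0.5,  0.2],                 # action 1: slow down
    ])
    expected = utilities @ posteriors        # expected utility of each action under the posterior
    print(expected, "-> choose action", int(np.argmax(expected)))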
It was so refreshing to read the title of this article. This couldn't be more true or timely. Everyone and their brother is talking about artificial intelligence, and it's frankly annoying at this point. These models are, simply put, just fancy interpolation/extrapolation approaches.
> Our machines are still incapable of independently coming up with a thought or hypothesis, testing it against others and accepting or rejecting its validity based on reasoning and experimentation, i.e. following the core principles of the scientific method.
A counterexample is "Adam," an autonomous mini laboratory that uses computers, robotics and lab equipment to conduct scientific experiments, automatically generate hypotheses to explain the resulting data, test these hypotheses, and then interpret the results.
This is a potentially interesting identity for AI, that is, as a curve-fitting function...
I postulate (but cannot at the present moment prove!) that if there is proof such that:
a) AI fits curves
and
b) all polynomials (N·X^M + ... + A·X^2 + B·X^1 + C·X^0) are in fact curves...
then (perhaps)
c) there is a link between polynomials (as curves) and AI; that is, perhaps all AI can be thought of as a function with a polynomial solution, such that F(X) = N·X^M + ... + A·X^2 + B·X^1 + C·X^0, with F(X) being the AI in question...
I leave it to mathematicians, logicians, and people who do dimensional reductions/transformations to prove or disprove this. (A quick numerical sketch of the curve-fitting half is below.)
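Here is that sketch: part a) is easy to poke at with off-the-shelf least-squares polynomial fitting in numpy (a toy example of mine, nothing more).

    # Toy version of the "AI as a polynomial F(X)" framing: approximate an arbitrary
    # observed 1-D input/output behavior with a fitted polynomial.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.linspace(-1, 1, 200)
    y = np.sin(3 * X) + 0.1 * rng.standard_normal(X.size)  # stand-in for any observed behavior

    coeffs = np.polyfit(X, y, deg=7)   # least-squares fit of C0 + C1*X + ... + C7*X^7
    F = np.poly1d(coeffs)              # F(X) is the fitted "AI" in this toy framing

    print("max abs error:", np.max(np.abs(F(X) - y)))

Whether b) and c) follow is a separate question, but the Weierstrass approximation theorem at least guarantees that any continuous function on a closed interval can be approximated arbitrarily well by polynomials.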
In the financial markets, multiple linear regression is a very popular tool for predicting the markets. That tool has been around for over a century and has been used extensively to make sense of financial data for decades now.
I've always eyed companies and startups that boast predictive AI for the financial markets with suspicion. I just always assumed whatever they were doing was a glorified regression model... But you have a lot of shameless people who will gladly slap some AI jargon into their business plan just to get eyeballs.
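For reference, the whole "glorified regression model" can be a few lines (a synthetic toy with made-up features, not a claim about any real trading signal):

    # Multiple linear regression via least squares on fabricated "market" features.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    X = rng.standard_normal((n, 3))                      # e.g. lagged return, volume change, rate spread
    true_beta = np.array([0.5, -0.2, 0.1])
    y = X @ true_beta + 0.05 * rng.standard_normal(n)    # synthetic next-period return

    X1 = np.column_stack([np.ones(n), X])                # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    print("fitted coefficients:", beta)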
> Pearl contends that until algorithms and the machines controlled by them can reason about cause and effect, or at least conceptualize the difference, their utility and versatility will never approach that of humans.
Is not cause and effect a temporal directed acyclic graph with mutating state (in CompSci terms)? Anyway, I think ML needs to look more closely at simple cases, like sea squirts, which have 8617 neurons. Try to answer how and why they work better than some of our algorithms.
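A rough sketch of the "DAG with mutating state" framing (my own toy example, loosely in the spirit of Pearl's do-operator, not his actual formalism): nodes update in topological order, and an intervention overrides a node's mechanism.

    import random

    def simulate(do=None):
        do = do or {}
        state = {}
        state["rain"] = do.get("rain", random.random() < 0.3)
        state["sprinkler"] = do.get("sprinkler", random.random() < (0.1 if state["rain"] else 0.8))
        state["wet_grass"] = do.get("wet_grass", state["rain"] or state["sprinkler"])
        return state

    # Forcing the sprinkler on (an intervention) does not change how often it rains:
    print(sum(simulate(do={"sprinkler": True})["rain"] for _ in range(10000)) / 10000)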
Great article. Why does AI hype persist? Simply because we're seeing moderate gains?
Newsflash: we're likely to continue seeing gains from ML tech for decades. Basic techniques may become part of core CS. With all the private, siloed data out there, ML will need to be applied uniquely over and over.
In decades to come, if marketing continues as-is, are salesmen going to keep laying claim to the moon (AGI) the whole time? It's hard to believe we can remain on this tilt for so long.
There's a lot of talk in this thread about what intelligence is. Personally I found the AIXI model from algorithmic information theory a very convincing definition: https://en.wikipedia.org/wiki/AIXI
I'm surprised it's not more generally accepted as _the_ definition. Note that Markus Hutter, the main author, is currently working at DeepMind.
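For reference, the core AIXI action-selection rule, roughly as I remember it from Hutter's papers (check the link for the exact formulation): the agent picks the action maximizing future reward summed over all environments consistent with the history, weighted by the algorithmic probability 2^{-ell(q)} of each environment program q.

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}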
Isn't transfer learning just a faster way to fit a curve, by starting with a curve that's partially fit instead of a random vector? So under this interpretation, transfer learning is also "curve fitting."
It is. But progress has been made taking models fit on pretty different topics and using them for something tangentially related. And that's exactly what humans do.
You don't start from zero when you learn to paint if you already know how to draw. You take another model of behavior and use it as a baseline to develop your painting behavior.
The fundamental thing that you can do that the computer can't right now is decide when you need a new model, when to use an existing one and when to transfer off of something.
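To make the "partially fit curve" framing above concrete, here's a toy numerical sketch (my own example, not any library's transfer-learning API): fit a linear model on task A, then use its weights, instead of a random vector, as the starting point for a few gradient steps on a related task B.

    import numpy as np

    def fit(X, y, w0, lr=0.1, steps=200):
        # plain gradient descent on mean squared error for a linear model y ~ X @ w
        w = w0.copy()
        for _ in range(steps):
            w -= lr * X.T @ (X @ w - y) / len(y)
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    w_A = rng.standard_normal(5)
    y_A = X @ w_A                                       # "task A"
    y_B = X @ (w_A + 0.1 * rng.standard_normal(5))      # related "task B"

    w_pre = fit(X, y_A, w0=np.zeros(5))                           # pre-train on A
    w_scratch = fit(X, y_B, w0=rng.standard_normal(5), steps=10)  # few steps from a random vector
    w_transfer = fit(X, y_B, w0=w_pre, steps=10)                  # few steps from the partially fit curve

    print("from scratch:", np.linalg.norm(X @ w_scratch - y_B))
    print("transferred: ", np.linalg.norm(X @ w_transfer - y_B))

With the same small budget of steps, the warm-started fit lands much closer, which is the whole point of the "partially fit curve" view.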
ML is about function approximation, where the function can be almost anything, typically something practically intractable by the usual analytical or numerical methods, such as a real-time process with millions of variables.
In math, via a reduction, human life is just a function of time with many inputs and many outputs. I.e. you can do curve fitting there as well...
The learning from the game itself was curve fitting. The "Deep" in Deep Reinforcement Learning usually means some difficult function is replaced by a deep neural network that approximates optimal values (for moves), trained on gameplay samples, usually in the sense of rewards/punishments for reaching certain states; in games these could rank e.g. good/bad moves, winning states, losing states, etc.
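As a toy version of that "approximating values from gameplay samples" idea (my own simplified sketch: a linear approximator on a 5-state chain, nothing game-scale):

    # Value learning as curve fitting: V(s) = w . phi(s) trained with TD(0) on a chain
    # whose only reward is at the far end. A deep net would replace the feature map phi.
    import numpy as np

    n_states, gamma, lr = 5, 0.9, 0.1
    phi = np.eye(n_states)          # one-hot features
    w = np.zeros(n_states)

    rng = np.random.default_rng(0)
    for _ in range(2000):
        s = 0
        while s < n_states - 1:
            s2 = s + 1 if rng.random() < 0.9 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            target = r + (0.0 if s2 == n_states - 1 else gamma * w @ phi[s2])
            w += lr * (target - w @ phi[s]) * phi[s]   # gradient step on the squared TD error
            s = s2

    print("learned state values:", np.round(w, 2))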
I tend to think that ML is a kind of automated program generation that maximizes certain metrics given by a human. Some of them are based on real data but some aren't (like RL). So it's certainly more than a curve that needs to be fit. It's more like Turing machine fitting.
Curve fitting is a useful tool, even in non-ML contexts like a simple linear regression. There's the hype, which will eventually die, and then there's the business/engineering aspect, which will likely stick around.
Yes, it's done all the time. Suppose you have a model A that predicts some value for input x, and you also have a model B for the same problem; they both work, but neither is optimal for all cases, so you combine them, e.g. by averaging their outputs or by training a gating model that learns which of A or B to trust for a given input.
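For instance, something like this (a made-up two-model example of mine; real systems would learn the gate and use many more models):

    import numpy as np

    def model_a(x):   # hypothetical model trusted near x = 0
        return np.sin(x)

    def model_b(x):   # hypothetical model trusted elsewhere
        return x - x**3 / 6

    def gate(x):      # weight deciding how much to trust model_a vs model_b
        return 1.0 / (1.0 + np.exp(4 * (np.abs(x) - 1.0)))   # ~1 near 0, ~0 far away

    def combined(x):
        g = gate(x)
        return g * model_a(x) + (1 - g) * model_b(x)

    print(combined(np.linspace(-3, 3, 7)))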
I guess what I mean is at a much larger scale. Like 1000s of networks or maybe more, like I presume you get in the brain. Is this already what’s happening currently?
It is possible that human intelligence is not computable, but it looks veeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeery difficult to prove it. Nobody is sure. So the options are:
1) Try to prove that human intelligence is not computable.
2) Give up and go to another research field, like particle physics or artificial sweeteners or whatever.
3) Try to see how close to human intelligence can you get with the current technology. Some small areas may have a good approximation, like image classification, playing chess, …
4) Try to make a fully functional, bug-compatible human simulation.
For example, currently human intelligence accomplishes much more with much less resource usage (many many orders of magnitude) than any AI algorithm we've developed. Someone could look at the trend, and get an idea of the extent of improvement we'd have to reach parity with human intelligence, and see if it seems at all feasible. A simple quantitative analysis of what we know so far.
At any rate, I'm unaware of any sort of significant effort at #1.
This has been addressed by many authors since modern computation was formulated, for example by Turing himself. We learn about and discuss Newell and Simon's Physical Symbol System hypothesis in undergrad classes, i.e. explicitly stating the underlying assumption. Once in a while someone will assert that computation is not sufficient, e.g. Penrose, and generate discussion.
In practice you can work on AI-the-engineering-discipline without taking a position on this. Perhaps this is why you feel that researchers don't talk about it. Disciplined scientists tend not to take public positions on things they don't know for sure.
That seems like a pretty uninformed hypothesis. For example, halting oracles can process symbols but are not computable. So maybe it is necessary, but it is easy to show that symbol processing may not be sufficient (as I just did).
This has significant implications for the engineering discipline. For example, if the mind is a halting oracle, then you can get much more performant algorithms by incorporating human interaction.
It could be possible that it's computable from a theoretic perspective but not a practical one. After all our biological brains are quite different from our electrical computer chips.
Out of date. Reinforcement Learning produces genuinely 'smart' actions and courses of action. Calling something 'curve fitting' pooh-poohs it (especially 'just' curve fitting); calling it 'distilling' gives it the respect it deserves.
Pretty sure Maxwell's equations and the laws of thermodynamics didn't come from curve fitting!
Experimentalists may have verified these ideas using some kind of curve fitting, but thinking in abstractions (e.g. "a ball rolled out into the street, maybe a child will follow") is one of the things curve fitting can't do.
The Planck equation for the black body (https://en.wikipedia.org/wiki/Planck%27s_law) was the most successful curve fitting in history. It took around 30 years to discover quantum mechanics and understand the details.
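For reference, the curve in question, spectral radiance as a function of frequency and temperature:

    B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu / (k_B T)} - 1}

Planck first obtained this by interpolating between the known limiting behaviors of the measured data; the quantum explanation of why the curve has that shape came later.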